EUR 3,47
Quantity available: 1
Condition: very good. Describes a book or dust jacket that does show some signs of wear on either the binding, dust jacket or pages.
Seller: medimops, Berlin, Germany
EUR 4,47
Quantity available: 1
Condition: very good. Describes a book or dust jacket that does show some signs of wear on either the binding, dust jacket or pages.
Published by Mainz: Verlag der Gutenberg-Gesellschaft (= Kleiner Druck Nr. 94 der G.-G.), 1973
Seller: Roland Antiquariat UG haftungsbeschränkt, Weinheim, Germany
EUR 5,50
Quantity available: 1
Paperback (englische Broschur). 40 pp. with illustrations, 23.5 x 15.5 cm. Binding somewhat rubbed, otherwise in good condition. Language: German. Weight in grams: 124.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 40,03
Quantity available: 1
Paperback. Condition: New. The need for artificial intelligence systems that are not only capable of mastering complicated tasks but also of explaining their decisions has gained massive attention over the last years. This also seems to offer opportunities for further interconnecting different approaches to artificial intelligence, such as machine learning and knowledge representation. This work considers the task of learning knowledge bases from agent behavior, with a focus on human-readability, comprehensibility and applications in games. In this context, it is presented how knowledge can be organized and processed on multiple levels of abstraction, allowing for efficient reasoning and revision. It is investigated how learning agents can benefit from incorporating these approaches into their learning processes. Examples and applications are provided, e.g., in the context of general video game playing. The most essential approaches are implemented in the InteKRator toolbox and show potential for being applied in other domains (e.g., in medical informatics).
Seller: PBShop.store US, Wood Dale, IL, United States of America
EUR 40,08
Quantity available: 2
PAP. Condition: New. New Book. Shipped from UK. Established seller since 2000.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,33
Quantity available: 2
Paperback. Condition: New. Big data raises new opportunities for deep insights and supporting decision-making. To seize these opportunities, methods that derive useful knowledge from large amounts of data are needed. Such methods can help meet urgent challenges in many fields. An urgent challenge for energy systems is the necessary transformation towards sustainability to mitigate climate change. One crucial aspect of this challenge is a permanently optimal operation of energy systems. In principle, mathematical optimization can best determine the optimal operation of energy systems. However, manual model generation and operational optimization of energy systems are time-consuming and can thus prevent an application of mathematical optimization in practice. This thesis presents methods that use measured data to automatically generate mathematical models of energy systems to tackle the challenge of time-consuming model generation. Additionally, methods are presented that accelerate the operational optimization of energy systems. Regarding model generation, the presented data-driven methods solve the trade-off between accuracy and computational efficiency of the energy system model by weighting each component model by its role in the overall system. Thereby, the methods automatically generate energy system models that allow for accurate and computationally efficient optimization. To accelerate the operational optimization of energy systems, we present two methods that decompose the complex operational optimization problem into smaller subproblems. The methods provide high-quality solutions. The first method employs expert knowledge about the individual energy system to significantly accelerate the operational optimization while retaining an excellent solution quality. The second method applies artificial neural nets to solve the operational optimization in a reliably short time while offering a high solution quality.
Overall, the methods presented in this thesis enable a broader application of mathematical optimization for energy systems.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,33
Quantity available: 1
Paperback. Condition: new. Big data raises new opportunities for deep insights and supporting decision-making. To seize these opportunities, methods that derive useful knowledge from large amounts of data are needed. Such methods can help meet urgent challenges in many fields. An urgent challenge for energy systems is the necessary transformation towards sustainability to mitigate climate change. One crucial aspect of this challenge is a permanently optimal operation of energy systems. In principle, mathematical optimization can best determine the optimal operation of energy systems. However, manual model generation and operational optimization of energy systems are time-consuming and can thus prevent an application of mathematical optimization in practice. This thesis presents methods that use measured data to automatically generate mathematical models of energy systems to tackle the challenge of time-consuming model generation. Additionally, methods are presented that accelerate the operational optimization of energy systems. Regarding model generation, the presented data-driven methods solve the trade-off between accuracy and computational efficiency of the energy system model by weighting each component model by its role in the overall system. Thereby, the methods automatically generate energy system models that allow for accurate and computationally efficient optimization. To accelerate the operational optimization of energy systems, we present two methods that decompose the complex operational optimization problem into smaller subproblems. The methods provide high-quality solutions. The first method employs expert knowledge about the individual energy system to significantly accelerate the operational optimization while retaining an excellent solution quality. The second method applies artificial neural nets to solve the operational optimization in a reliably short time while offering a high solution quality.
Overall, the methods presented in this thesis enable a broader application of mathematical optimization for energy systems. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,41
Quantity available: 2
Paperback. Condition: New. Thermal energy storage (TES) helps to reduce energy consumption and peak demands by balancing heat supply and demand on all time scales from short-term to seasonal. Thus, TES is an important technology to improve the flexibility and efficiency of energy systems. In particular, adsorption TES systems, which exploit the enthalpy of adsorption, provide high energy storage density and high efficiency. The present thesis therefore analyzes an adsorption TES unit for residential and industrial applications. Industrial energy supply can be made more efficient by integrating waste heat into the process heat supply and by using energy-efficient technologies. Adsorption TES contributes to both approaches: waste heat can be integrated via the heat pump effect, and TES allows for energy-efficient cogeneration heat supply for batch processes. We evaluate the energy efficiency of the heat supply for an industrial batch process by adsorption TES and cogeneration. To evaluate the performance, a dynamic model of an adsorption TES unit is developed. Measurements from earlier experimental investigations of an adsorption TES unit are used to calibrate the storage model. As benchmarks, a peak boiler and a TES based on a phase-change material are considered. Our comparison demonstrates the significance of the process conditions for the choice of the appropriate technology. The study shows that adsorption TES offers significant potential to increase energy efficiency: primary energy consumption can be reduced by up to 25%. The key is the availability of low-grade heat at times of discharging and of a low-grade heat demand when charging the storage unit. The study reveals that a comprehensive evaluation of the storage performance requires dynamic models that precisely describe the storage performance and the heat losses in particular. The present thesis provides the basis with a new experimental setup to precisely characterize the adsorption TES unit.
In an experimental analysis of the TES performance, we quantify the heat losses, the energy recovery ratio (69-91%) and the energy storage density (20.4 kWh/m³) of the adsorption TES unit for varying charging temperatures and storage times ranging from continuous operation to seasonal storage. The extensive experimental study provides the basis to improve our model of the adsorption TES unit. The model is calibrated to heat-loss measurements and a storage-cycle measurement. We quantify the simulation accuracy and validate the model with measurements at various process conditions. The model achieves a higher prediction accuracy than other models from the literature. The thesis thus provides a basis for future investigations of energy systems to exploit the advantages of adsorption TES.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,41
Quantity available: 2
Paperback. Condition: New. This thesis studies the congestion problem at the Medium Access Control (MAC) layer of the IEEE 802.11p system in highway scenarios using an analytical approach. Using the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) protocol, the IEEE 802.11p system suffers from the hidden station problem. This thesis provides formal definitions of the hidden station condition and the hidden station problem and develops an analytical methodology for CSMA protocols to investigate the interaction between stations that are in mutual channel sensing range and stations that are hidden from each other. Analytical models are developed for broadcast communication using the generic CSMA protocol and the IEEE 802.11p MAC protocol with a one-dimensional (1-D) homogeneous network topology. Simulation studies confirm the accuracy of the models in analyzing the reliability and efficiency performance of CSMA broadcast communication with hidden stations. The performance of the Cooperative Awareness Message (CAM) in an IEEE 802.11p network is analyzed for highway scenarios using the developed analytical models. The study reveals that in a hidden station scenario the reliability of CAM broadcast communication deteriorates with increased topological distance between transmitter and receiver. This study provides a quantitative analysis of this performance with respect to network density, frame length, traffic load and settings of the IEEE 802.11p MAC protocol. An analysis of the mean update interval of CAMs at a receiver vehicle shows that, though in general the performance degrades with increased network density, the update interval of CAM frames from a particular vehicle in the vicinity of the receiver, e.g. with a topological distance of less than 8 between transmitter and receiver, can easily be maintained below 1 second by using control mechanisms such as transmit power control, transmit rate control and link control.
The analytical models developed in this work provide quantitative guidance on optimizing protocol parameters and on the utilization of these control mechanisms.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,42
Quantity available: 2
Paperback. Condition: New. During the last 25 years, most communication systems have been converted to purely digital technology, although the transmitted content is mostly analog by nature. The principal advantages of digital communication are compression by source encoding, bit error protection by channel coding and robust transmission over noisy channels by appropriate modulation. Digital systems are usually designed for worst-case channel conditions. However, the channel quality is often much better, which is not reflected in an improved end-to-end transmission quality due to the inevitable quantization noise produced by the source encoder. In this thesis, the focus is set on systems which are not purely digital anymore, benefit from increasing channel qualities, and avoid the saturation effect using discrete-time, continuous-amplitude techniques. In the first part, purely analog, i.e., discrete-time and continuous-amplitude, transmission systems are considered - with linear or nonlinear components. Theoretical performance limits are discussed and a new rate-distortion upper bound is derived which can be evaluated semi-analytically. The performance of linear transmission systems is derived analytically, while simulations assess several nonlinear systems including the famous Archimedes spiral. A new class of nonlinear discrete-time analog coding systems, the Analog Modulo Block Codes (AMBCs), is developed. One important observation is that nonlinear discrete-time, continuous-amplitude systems can be decomposed into a discrete and a continuous-amplitude part. The considered systems exhibit a considerable gap to capacity, but they all circumvent the saturation effect due to the continuous-amplitude components. In the main part of the thesis, these findings are turned into a design principle: Hybrid Digital-Analog (HDA) transmission systems consist of separate digital and analog branches, each constructed independently.
By combining both worlds - digital and continuous-amplitude transmission - new concepts emerge which fuse their advantages: capacity-achieving performance in the digital branch with a huge variety of conventional digital codes, plus avoidance of the saturation effect in the analog branch. The performance of HDA systems is assessed theoretically as well as by simulations. These HDA transmission systems outperform both purely digital and purely continuous-amplitude concepts. HDA transmission is an attractive solution for wireless systems such as microphones, loudspeakers or distributed sensors.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,42
Quantity available: 1
Paperback. Condition: new. Thermal energy storage (TES) helps to reduce energy consumption and peak demands by balancing heat supply and demand on all time scales from short-term to seasonal. Thus, TES is an important technology to improve the flexibility and efficiency of energy systems. In particular, adsorption TES systems, which exploit the enthalpy of adsorption, provide high energy storage density and high efficiency. The present thesis therefore analyzes an adsorption TES unit for residential and industrial applications. Industrial energy supply can be made more efficient by integrating waste heat into the process heat supply and by using energy-efficient technologies. Adsorption TES contributes to both approaches: waste heat can be integrated via the heat pump effect, and TES allows for energy-efficient cogeneration heat supply for batch processes. We evaluate the energy efficiency of the heat supply for an industrial batch process by adsorption TES and cogeneration. To evaluate the performance, a dynamic model of an adsorption TES unit is developed. Measurements from earlier experimental investigations of an adsorption TES unit are used to calibrate the storage model. As benchmarks, a peak boiler and a TES based on a phase-change material are considered. Our comparison demonstrates the significance of the process conditions for the choice of the appropriate technology. The study shows that adsorption TES offers significant potential to increase energy efficiency: primary energy consumption can be reduced by up to 25%. The key is the availability of low-grade heat at times of discharging and of a low-grade heat demand when charging the storage unit. The study reveals that a comprehensive evaluation of the storage performance requires dynamic models that precisely describe the storage performance and the heat losses in particular.
The present thesis provides the basis with a new experimental setup to precisely characterize the adsorption TES unit. In an experimental analysis of the TES performance, we quantify the heat losses, the energy recovery ratio (69-91%) and the energy storage density (20.4 kWh/m³) of the adsorption TES unit for varying charging temperatures and storage times ranging from continuous operation to seasonal storage. The extensive experimental study provides the basis to improve our model of the adsorption TES unit. The model is calibrated to heat-loss measurements and a storage-cycle measurement. We quantify the simulation accuracy and validate the model with measurements at various process conditions. The model achieves a higher prediction accuracy than other models from the literature. The thesis thus provides a basis for future investigations of energy systems to exploit the advantages of adsorption TES. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,42
Quantity available: 1
Paperback. Condition: new. During the last 25 years, most communication systems have been converted to purely digital technology, although the transmitted content is mostly analog by nature. The principal advantages of digital communication are compression by source encoding, bit error protection by channel coding and robust transmission over noisy channels by appropriate modulation. Digital systems are usually designed for worst-case channel conditions. However, the channel quality is often much better, which is not reflected in an improved end-to-end transmission quality due to the inevitable quantization noise produced by the source encoder. In this thesis, the focus is set on systems which are not purely digital anymore, benefit from increasing channel qualities, and avoid the saturation effect using discrete-time, continuous-amplitude techniques. In the first part, purely analog, i.e., discrete-time and continuous-amplitude, transmission systems are considered with linear or nonlinear components. Theoretical performance limits are discussed and a new rate-distortion upper bound is derived which can be evaluated semi-analytically. The performance of linear transmission systems is derived analytically, while simulations assess several nonlinear systems including the famous Archimedes spiral. A new class of nonlinear discrete-time analog coding systems, the Analog Modulo Block Codes (AMBCs), is developed. One important observation is that nonlinear discrete-time, continuous-amplitude systems can be decomposed into a discrete and a continuous-amplitude part. The considered systems exhibit a considerable gap to capacity, but they all circumvent the saturation effect due to the continuous-amplitude components. In the main part of the thesis, these findings are turned into a design principle: Hybrid Digital-Analog (HDA) transmission systems consist of separate digital and analog branches, each constructed independently.
By combining both worlds, digital and continuous-amplitude transmission, new concepts emerge which fuse their advantages: capacity-achieving performance in the digital branch with a huge variety of conventional digital codes, plus avoidance of the saturation effect in the analog branch. The performance of HDA systems is assessed theoretically as well as by simulations. These HDA transmission systems outperform both purely digital and purely continuous-amplitude concepts. HDA transmission is an attractive solution for wireless systems such as microphones, loudspeakers or distributed sensors. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,43
Quantity available: 2
Paperback. Condition: New. In 2007 the International Telecommunication Union, Radiocommunication Sector (ITU-R) published evaluation guidelines for future mobile broadband radio networks, including a minimal VoIP capacity that the IEEE 802.16m WiMAX system had to meet, proven by system-level simulation. These evaluation guidelines are the foundation of this work, which shows that relay-enhanced 4G networks not only meet the requirements but exceed them by far. The key requirement for packet-based VoIP services set in the evaluation guidelines demands that the end-to-end packet delay of at least 98% of user data traffic stays below 50 ms. Since 2011, proposals of ComNets to implement relays into mobile radio networks to increase system capacity are part of all 4G systems (LTE and WiMAX). This work investigates the voice over IP (VoIP) capacity of the relay-enhanced WiMAX system by system simulation. The work presents a modular approach, developed by my colleagues and myself, to implement the WiMAX protocol stack, also known as WiMAC, in the openWNS simulator. The approach is based on atomic protocol functions that are bound to so-called functional units (FUs) and interconnected in a network of FUs, thereby implementing the WiMAX protocol bit by bit. Besides the performance evaluation of VoIP services based on the ITU-R scenarios and channel model, this work also determines the optimal radio resource configuration for up- and downlink and furthermore investigates the impact of RS location on system capacity.
Core functions of the simulator are validated by comparison with equivalent results of other simulators. Cumulative distribution functions of the end-to-end delay of VoIP user data packets, determined by simulation, show that relays double the carried VoIP traffic load compared to a scenario without relay stations (RSs), due to the improved prediction of the uplink channel state, which is the bottleneck of the system. The performance enhancement is surprising, since relays increase the end-to-end packet delay and packet error rate due to an additional transmission compared to conventional single-hop operation.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,43
Quantity available: 1
Paperback. Condition: new. This thesis studies the congestion problem at the Medium Access Control (MAC) layer of the IEEE 802.11p system in highway scenarios using an analytical approach. Using the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) protocol, the IEEE 802.11p system suffers from the hidden station problem. This thesis provides formal definitions of the hidden station condition and the hidden station problem and develops an analytical methodology for CSMA protocols to investigate the interaction between stations that are in mutual channel sensing range and stations that are hidden from each other. Analytical models are developed for broadcast communication using the generic CSMA protocol and the IEEE 802.11p MAC protocol with a one-dimensional (1-D) homogeneous network topology. Simulation studies confirm the accuracy of the models in analyzing the reliability and efficiency performance of CSMA broadcast communication with hidden stations. The performance of the Cooperative Awareness Message (CAM) in an IEEE 802.11p network is analyzed for highway scenarios using the developed analytical models. The study reveals that in a hidden station scenario the reliability of CAM broadcast communication deteriorates with increased topological distance between transmitter and receiver. This study provides a quantitative analysis of this performance with respect to network density, frame length, traffic load and settings of the IEEE 802.11p MAC protocol. An analysis of the mean update interval of CAMs at a receiver vehicle shows that, though in general the performance degrades with increased network density, the update interval of CAM frames from a particular vehicle in the vicinity of the receiver, e.g. with a topological distance of less than 8 between transmitter and receiver, can easily be maintained below 1 second by using control mechanisms such as transmit power control, transmit rate control and link control.
The analytical models developed in this work provide quantitative guidance on optimizing protocol parameters and on the utilization of these control mechanisms. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,43
Quantity available: 1
Paperback. Condition: new. In 2007 the International Telecommunication Union, Radiocommunication Sector (ITU-R) published evaluation guidelines for future mobile broadband radio networks, including a minimal VoIP capacity that the IEEE 802.16m WiMAX system had to meet, proven by system-level simulation. These evaluation guidelines are the foundation of this work, which shows that relay-enhanced 4G networks not only meet the requirements but exceed them by far. The key requirement for packet-based VoIP services set in the evaluation guidelines demands that the end-to-end packet delay of at least 98% of user data traffic stays below 50 ms. Since 2011, proposals of ComNets to implement relays into mobile radio networks to increase system capacity are part of all 4G systems (LTE and WiMAX). This work investigates the voice over IP (VoIP) capacity of the relay-enhanced WiMAX system by system simulation. The work presents a modular approach, developed by my colleagues and myself, to implement the WiMAX protocol stack, also known as WiMAC, in the openWNS simulator. The approach is based on atomic protocol functions that are bound to so-called functional units (FUs) and interconnected in a network of FUs, thereby implementing the WiMAX protocol bit by bit. Besides the performance evaluation of VoIP services based on the ITU-R scenarios and channel model, this work also determines the optimal radio resource configuration for up- and downlink and furthermore investigates the impact of RS location on system capacity.
Core functions of the simulator are validated by comparison with equivalent results of other simulators. Cumulative distribution functions of the end-to-end delay of VoIP user data packets, determined by simulation, show that relays double the carried VoIP traffic load compared to a scenario without relay stations (RSs), due to the improved prediction of the uplink channel state, which is the bottleneck of the system. The performance enhancement is surprising, since relays increase the end-to-end packet delay and packet error rate due to an additional transmission compared to conventional single-hop operation. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
EUR 38,83
Quantity available: 2
PAP. Condition: New. New Book. Shipped from UK. Established seller since 2000.
Seller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,51
Quantity available: 2
Paperback. Condition: New. In recent years, the conversion of CO2 to basic chemicals with one carbon atom (C1-chemicals), such as methane and methanol, has gained increasing interest. The major motivation for the utilization of CO2 is the reduction of global warming and fossil depletion impacts. However, these reductions are not guaranteed, because all C1-chemicals require hydrogen besides the abundantly available CO2. Thus, the goal of this thesis is the life cycle assessment of CO2-based C1-chemicals (methane, methanol, carbon monoxide and formic acid). The assessment is based on a system-wide perspective, which means that for limited resources such as renewable electricity, the utilization of those limited resources in other processes is also considered. First of all, the CO2-based processes are compared to fossil-based processes for C1-chemicals. Formic acid has the highest potential to reduce global warming and fossil depletion impacts, followed by carbon monoxide, methanol and methane. Even if hydrogen is supplied by fossil-based steam reforming, formic acid reduces global warming and fossil depletion impacts. All other CO2-based C1-chemicals require hydrogen from electrolysis using renewable electricity. In the following, the supply of hydrogen by electrolysis is analyzed in more detail. The CO2-based processes for carbon monoxide and methane require about 60 % and 88 % renewable electricity (in 2020 in the EU-27) to reduce global warming impacts compared to the fossil-based processes. If 100 % renewable electricity is used, all CO2-based C1-chemicals reduce global warming and fossil depletion impacts compared to the fossil-based processes. For the assessment of these reductions, alternative utilization options for renewable electricity (Power-to-X) are also analyzed, such as electricity storage systems, battery electric vehicles and heat pumps.
The highest reductions per unit of electricity used are achieved for heat pumps, followed by battery electric vehicles and electricity storage systems; the CO2-based C1-chemicals follow. Since renewable electricity is used more efficiently outside the chemical industry, biomass-based methane and methanol are also analyzed. The utilization of biomass achieves the highest reductions if coal-fired power plants are substituted, followed by the production of methanol. For methanol and methane, the yield per unit of biomass can be increased if additional hydrogen is used.
Seller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,51
Cantidad disponible: 1 disponibles
Añadir al carritoPaperback. Condición: new. Paperback. Smoking chimneys are a symbol for environmental impacts of industrial processes. Indeed, industrial processes are major contributors to environmental problems such as global warming. Beyond emission-related problems, industrial processes deplete limited resources because they require raw materials. Raw materials are directly linked to costs, emission-related impacts cause indirect expenditures, e.g., through the European emissions trading scheme (EU-ETS) for greenhouse gas emissions. Therefore, industrial enterprises seek to reduce costs by reducing environmental impacts of their processes.Two well-known strategies for reducing environmental impacts of industrial processes are process integration and recycling. Process integration establishes interconnections between formerly separate processes by utilizing co-products. Process integration thereby relies on unit processes with multiple products, so-called multi-product processes. Similarly, recycling uses waste as raw material for new products. But neither process integration for recycling guarantee reduced environmental impacts. E.g., recycling may cause more impacts than waste disposal. Decision makers thus need a holistic method for comparing environmental impacts from multi-product processes.This work investigates methods for comparisons of industrial multi-product processes. The environmental impacts of multi-product processes can be analyzed using life cycle assessment (LCA). LCA studies all environmental impacts of all processes involved in a procucts entire life cycle. Due to ist holistic approach, LCA identifies shifting of environmental problems between processes and between different types of environmental impacts. Results of LCA-studies can thus help avoiding such problem shifting. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,52
Quantity available: 2 available
Paperback. Condition: New. Smoking chimneys are a symbol of the environmental impacts of industrial processes. Indeed, industrial processes are major contributors to environmental problems such as global warming. Beyond emission-related problems, industrial processes deplete limited resources because they require raw materials. While raw materials are directly linked to costs, emission-related impacts cause indirect expenditures, e.g., through the European emissions trading scheme (EU-ETS) for greenhouse gas emissions. Therefore, industrial enterprises seek to reduce costs by reducing the environmental impacts of their processes. Two well-known strategies for reducing environmental impacts of industrial processes are process integration and recycling. Process integration establishes interconnections between formerly separate processes by utilizing co-products. Process integration thereby relies on unit processes with multiple products, so-called multi-product processes. Similarly, recycling uses waste as raw material for new products. But neither process integration nor recycling guarantees reduced environmental impacts; e.g., recycling may cause more impacts than waste disposal. Decision makers thus need a holistic method for comparing environmental impacts from multi-product processes. This work investigates methods for comparisons of industrial multi-product processes. The environmental impacts of multi-product processes can be analyzed using life cycle assessment (LCA). LCA studies all environmental impacts of all processes involved in a product's entire life cycle. Due to its holistic approach, LCA identifies shifting of environmental problems between processes and between different types of environmental impacts. Results of LCA studies can thus help avoid such problem shifting.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,53
Quantity available: 1 available
Paperback. Condition: New. In recent years, the conversion of CO2 to basic chemicals with one carbon atom (C1-chemicals), such as methane and methanol, has gained increasing interest. The major motivation for the utilization of CO2 is the reduction of global warming and fossil depletion impacts. However, these reductions are not guaranteed because all C1-chemicals require hydrogen besides the abundantly available CO2. Thus, the goal of this thesis is the life cycle assessment of CO2-based C1-chemicals (methane, methanol, carbon monoxide and formic acid). The assessment is based on a system-wide perspective, which means that the utilization of limited resources such as renewable electricity in other processes is also considered. First of all, the CO2-based processes are compared to fossil-based processes for C1-chemicals. Formic acid has the highest potential to reduce global warming and fossil depletion impacts, followed by carbon monoxide, methanol and methane. Even if hydrogen is supplied by fossil-based steam reforming, formic acid reduces global warming and fossil depletion impacts. All other CO2-based C1-chemicals require hydrogen from electrolysis using renewable electricity. In the following, the supply of hydrogen by electrolysis is analyzed in more detail. The CO2-based processes for carbon monoxide and methane require about 60 % and 88 % renewable electricity (in 2020 in the EU-27) to reduce global warming impacts compared to the fossil-based processes. If 100 % renewable electricity is used, all CO2-based C1-chemicals reduce global warming and fossil depletion impacts compared to the fossil-based processes. For the assessment of these reductions, alternative utilization options for renewable electricity (Power-to-X) are also analyzed, such as electricity storage systems, battery electric vehicles and heat pumps.
The highest reductions per unit of electricity used are achieved for heat pumps, followed by battery electric vehicles and electricity storage systems; the CO2-based C1-chemicals follow. Since renewable electricity is used more efficiently outside the chemical industry, biomass-based methane and methanol are also analyzed. The utilization of biomass achieves the highest reductions if coal-fired power plants are substituted, followed by the production of methanol. For methanol and methane, the yield per unit of biomass can be increased if additional hydrogen is used. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,60
Quantity available: 2 available
Paperback. Condition: New. In recent years, spectroscopy has developed into an increasingly valuable tool to determine the composition of mixtures, for scientific questions as well as for industry. The increasing use of spectroscopy raises the question of how best to use the obtained data. For the analysis of spectral data, the method of Indirect Hard Modeling (IHM) has been established besides statistical methods like PLS. IHM is a nonlinear method and can therefore efficiently treat nonlinear effects such as peak shifts. In the present work, the IHM method is expanded to increase its applicability. IHM treats nonlinear effects in the spectral evaluation. Therefore, the direct proportionality between the concentration and the Raman signal of a component can be used for calibration. The resulting linear calibration model allows for reliable extrapolation. Thus, IHM can be used to study reactive systems, even if only binary subsystems can be used for calibration. However, thermodynamic systems with intermediates can so far only be calibrated by using thermodynamic models. In this work, a method is established that calibrates a reactive system with intermediates based only on the reaction mechanism as well as stoichiometry and electroneutrality. Spectral backgrounds, e.g., fluorescence, can be treated by a spectral pretreatment or via background models. However, spectral backgrounds are still a common source of error in IHM. Derivatives have long been used very effectively in statistical methods. Therefore, IHM is adapted so that it becomes possible to evaluate the first derivative of spectra. The calibration of IHM is mostly limited to the relative spectral intensities of the involved components. In the present work, a method is presented that uses the information in the calibration spectra more thoroughly.
For this purpose, nonlinear effects are parametrized as a function of concentration. The commonly used peak profiles do not reflect the physical reality at a detector very well. As a result, narrow modelled peaks may change their apparent intensity if they are shifted. To correct these shortcomings, a new peak model is proposed in this work that is more closely aligned to the physical reality of a detector.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,62
Quantity available: 1 available
Paperback. Condition: New. In recent years, spectroscopy has developed into an increasingly valuable tool to determine the composition of mixtures, for scientific questions as well as for industry. The increasing use of spectroscopy raises the question of how best to use the obtained data. For the analysis of spectral data, the method of Indirect Hard Modeling (IHM) has been established besides statistical methods like PLS. IHM is a nonlinear method and can therefore efficiently treat nonlinear effects such as peak shifts. In the present work, the IHM method is expanded to increase its applicability. IHM treats nonlinear effects in the spectral evaluation. Therefore, the direct proportionality between the concentration and the Raman signal of a component can be used for calibration. The resulting linear calibration model allows for reliable extrapolation. Thus, IHM can be used to study reactive systems, even if only binary subsystems can be used for calibration. However, thermodynamic systems with intermediates can so far only be calibrated by using thermodynamic models. In this work, a method is established that calibrates a reactive system with intermediates based only on the reaction mechanism as well as stoichiometry and electroneutrality. Spectral backgrounds, e.g., fluorescence, can be treated by a spectral pretreatment or via background models. However, spectral backgrounds are still a common source of error in IHM. Derivatives have long been used very effectively in statistical methods. Therefore, IHM is adapted so that it becomes possible to evaluate the first derivative of spectra. The calibration of IHM is mostly limited to the relative spectral intensities of the involved components. In the present work, a method is presented that uses the information in the calibration spectra more thoroughly.
For this purpose, nonlinear effects are parametrized as a function of concentration. The commonly used peak profiles do not reflect the physical reality at a detector very well. As a result, narrow modelled peaks may change their apparent intensity if they are shifted. To correct these shortcomings, a new peak model is proposed in this work that is more closely aligned to the physical reality of a detector. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,67
Quantity available: 3 available
Paperback. Condition: New. One of the most important tools in metal forming technology is integrated process and microstructural simulation using Finite Element Methods (FEM). It has become more and more popular in recent years, especially in the segment of hot metal forming, e.g. open die forging of large-scale, difficult-to-deform materials such as nickel-based superalloys for turbine shafts. Theoretically, using this method it is possible to calculate the microstructural evolution along the whole process chain in the numerical simulation of the considered metal forming process. Based on this knowledge, a series of benefits can be achieved in practice, such as optimizing a present metal forming process, predicting the mechanical properties of the final products under the given forming conditions, detecting possible product failures prematurely, assisting the design of a new production chain, and so on. In the face of these trends in scientific research as well as industrial demands, different material models have been released on the market to be combined with commercial FEM programs specialized in the numerical simulation of metal forming. Among them, microstructure-based flow stress models show outstanding performance. Through this kind of material model, not only the microstructure, such as recrystallization and grain size, but also the interaction between the microstructural evolution and the work hardening, effectively the flow stress, can be quantitatively represented. Towards accurate and efficient material modeling, the model parameters have to be determined conveniently and reliably. For this purpose, a new hybrid strategy combining the advantages of both direct and indirect methods has been proposed using the example of StrucSim, which is a very good representative of a microstructure-based flow stress model. At first, different aspects that lead to the disadvantages of the conventional method, i.e.
the direct method, were discussed. In doing so, a high manganese steel was characterized as an example by stepwise graphical and regression analysis. It was found that the precondition of direct methods, namely recording flow curves under constant Zener-Hollomon-parameter conditions, is basically not possible due to both limitations of test equipment and unavoidable physical mechanisms like dissipation heating. The common solutions to compensate for these factors may lead to further inaccuracies, uncertainties and complexities despite large testing and evaluating efforts. Further, in order to improve the model quality calibrated by the conventional direct method, an efficient hybrid strategy has been derived by combining inverse analysis with offline calculation of flow stress and microstructure. Three different variations of the hybrid strategy were introduced to deal with different available experimental databases, such as isothermal and non-isothermal flow curves. To demonstrate the developed routines of these three hybrid possibilities, two kinds of materials, including a nickel-based superalloy and a high manganese steel, have been taken into account.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,68
Quantity available: 1 available
Paperback. Condition: New. One of the most important tools in metal forming technology is integrated process and microstructural simulation using Finite Element Methods (FEM). It has become more and more popular in recent years, especially in the segment of hot metal forming, e.g. open die forging of large-scale, difficult-to-deform materials such as nickel-based superalloys for turbine shafts. Theoretically, using this method it is possible to calculate the microstructural evolution along the whole process chain in the numerical simulation of the considered metal forming process. Based on this knowledge, a series of benefits can be achieved in practice, such as optimizing a present metal forming process, predicting the mechanical properties of the final products under the given forming conditions, detecting possible product failures prematurely, assisting the design of a new production chain, and so on. In the face of these trends in scientific research as well as industrial demands, different material models have been released on the market to be combined with commercial FEM programs specialized in the numerical simulation of metal forming. Among them, microstructure-based flow stress models show outstanding performance. Through this kind of material model, not only the microstructure, such as recrystallization and grain size, but also the interaction between the microstructural evolution and the work hardening, effectively the flow stress, can be quantitatively represented. Towards accurate and efficient material modeling, the model parameters have to be determined conveniently and reliably. For this purpose, a new hybrid strategy combining the advantages of both direct and indirect methods has been proposed using the example of StrucSim, which is a very good representative of a microstructure-based flow stress model. At first, different aspects that lead to the disadvantages of the conventional method, i.e.
the direct method, were discussed. In doing so, a high manganese steel was characterized as an example by stepwise graphical and regression analysis. It was found that the precondition of direct methods, namely recording flow curves under constant Zener-Hollomon-parameter conditions, is basically not possible due to both limitations of test equipment and unavoidable physical mechanisms like dissipation heating. The common solutions to compensate for these factors may lead to further inaccuracies, uncertainties and complexities despite large testing and evaluating efforts. Further, in order to improve the model quality calibrated by the conventional direct method, an efficient hybrid strategy has been derived by combining inverse analysis with offline calculation of flow stress and microstructure. Three different variations of the hybrid strategy were introduced to deal with different available experimental databases, such as isothermal and non-isothermal flow curves. To demonstrate the developed routines of these three hybrid possibilities, two kinds of materials, including a nickel-based superalloy and a high manganese steel, have been taken into account. The investigation has shown that through the introduced hybrid methods, better model quality can be achieved even with less experimental data. Owing to the convenience of the inverse technique, much experimental and evaluating effort and complexity can be avoided. Finally, another inverse analysis based on inhomogeneous deformation has been proposed, in which hot compression tests with double cone specimens were employed. Thanks to the inhomogeneity of the strain and microstructure distribution within the specimen, it becomes possible to obtain sufficient relevant information as constraints for the inverse parameterization through even fewer experiments.
In addition, the established routine of a hybrid strategy as well as the inverse analysis based on non-uniform deformation enhances the trans Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
Bookseller: Rarewaves.com USA, London, LONDO, United Kingdom
EUR 43,74
Quantity available: 2 available
Paperback. Condition: New. The key measure to mitigate climate change is the reduction of greenhouse gas emissions. Here, energy-intensive industry plays a key role due to its substantial greenhouse gas emissions. A substantial share of these greenhouse gas emissions is caused by energy supply. Thus, industrial energy supply needs to become more efficient. In large industrial sites, on-site energy systems often supply production systems. Both systems thereby optimize their operation with respect to an objective such as operational cost or revenue. This thesis provides optimization methods for these large industrial sites. The optimization methods reflect two relationships between both systems: both systems can either follow the same objective or system-specific objectives. The same objective exists, e.g., if both systems belong to one company. System-specific objectives exist, e.g., if both systems belong to different companies. For the case that both systems follow the same objective, a method is presented for the integrated synthesis of both systems. For the same case, a method is presented for integrated scheduling to provide control reserve. For the case that energy and production systems have system-specific objectives, two cases are distinguished: incomplete and complete information exchange. For incomplete information exchange, an optimization method is introduced for the coordination between a single energy and a single production system. This optimization method is then extended to multiple energy and multiple production systems. For complete information exchange between the systems, a bilevel problem is formulated. For solving the bilevel problem, an existing solution algorithm is adapted. All methods presented in this thesis are applied to case studies, and advantages and disadvantages are examined.
The case studies show that no single method provides the optimal solution for the production system in all identified relationships between the systems. Thus, depending on the case at hand, the respective optimization method has to be applied. Overall, this thesis presents optimization methods for all identified relationships between energy and production systems. Thus, this thesis enables the selection of a suitable optimization method for all kinds of production systems with decentralized energy supply.
Bookseller: GreatBookPrices, Columbia, MD, United States of America
EUR 41,44
Quantity available: 4 available
Condition: New.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,76
Quantity available: 1 available
Paperback. Condition: New. The key measure to mitigate climate change is the reduction of greenhouse gas emissions. Here, energy-intensive industry plays a key role due to its substantial greenhouse gas emissions. A substantial share of these greenhouse gas emissions is caused by energy supply. Thus, industrial energy supply needs to become more efficient. In large industrial sites, on-site energy systems often supply production systems. Both systems thereby optimize their operation with respect to an objective such as operational cost or revenue. This thesis provides optimization methods for these large industrial sites. The optimization methods reflect two relationships between both systems: both systems can either follow the same objective or system-specific objectives. The same objective exists, e.g., if both systems belong to one company. System-specific objectives exist, e.g., if both systems belong to different companies. For the case that both systems follow the same objective, a method is presented for the integrated synthesis of both systems. For the same case, a method is presented for integrated scheduling to provide control reserve. For the case that energy and production systems have system-specific objectives, two cases are distinguished: incomplete and complete information exchange. For incomplete information exchange, an optimization method is introduced for the coordination between a single energy and a single production system. This optimization method is then extended to multiple energy and multiple production systems. For complete information exchange between the systems, a bilevel problem is formulated. For solving the bilevel problem, an existing solution algorithm is adapted. All methods presented in this thesis are applied to case studies, and advantages and disadvantages are examined.
The case studies show that no single method provides the optimal solution for the production system in all identified relationships between the systems. Thus, depending on the case at hand, the respective optimization method has to be applied. Overall, this thesis presents optimization methods for all identified relationships between energy and production systems. Thus, this thesis enables the selection of a suitable optimization method for all kinds of production systems with decentralized energy supply. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
EUR 43,79
Quantity available: 1 available
Paperback. Condition: New. Artificial metalloenzymes, or biohybrid catalysts, enable catalytic reactions or cascades in an aqueous environment, or make existing catalysts more efficient. This interdisciplinary field between biotechnology and chemistry dates back to the late 1970s and has attracted huge attention during the last decade.
Bookseller: PBShop.store US, Wood Dale, IL, United States of America
EUR 43,82
Quantity available: 2 available
PAP. Condition: New. New Book. Shipped from the UK. Established seller since 2000.
Bookseller: Grand Eagle Retail, Bensenville, IL, United States of America
EUR 43,86
Quantity available: 1 available
Paperback. Condition: New. Ever since humans have existed, they have impacted the earth in many different ways (Redman, 1999). Currently, important impacts are associated with the excessive use of non-renewable fossil fuels such as coal, oil and natural gas. Most fossil fuels are used for electricity generation, heating and mobility (eia, 2011), and as feedstock in the chemical industry (IEA et al., 2013). Moreover, the use of fossil fuels is associated with carbon dioxide (CO2) emissions (IEA, 2014; Leimkuhler, 2010). Emitting CO2 into the atmosphere leads to global warming and disrupts the natural carbon cycle (Stocker et al., 2013). To close the disrupted carbon cycle, CO2 can be captured and re-utilized, thereby mitigating global warming and saving fossil resources (Styring et al., 2014). CO2 can be captured from current anthropogenic CO2 sources or directly from the atmosphere. Captured CO2 can then be utilized as a valuable physical product as such, or as an alternative carbon feedstock for fuels, chemicals and materials. The general concept of CO2 Capture and Utilization (CCU) can be considered established: already today, CO2 is captured and utilized in processes in the chemical industry (Aresta et al., 2014). However, the scope of CO2 utilization is limited. Despite the existing industrial implementations as well as continuous progress and current efforts in CCU research, most CCU technologies are still in early stages of development. Besides the limited technological readiness, CCU is intrinsically challenging since both the capture and the utilization steps of CCU typically require substantial amounts of energy (Sakakura et al., 2007). If the provision of energy relies on fossil resources, indirect CO2 emissions are caused. Therefore, the intuitively expected environmental benefits from using CO2 are not given by default (Peters et al., 2011b).
In fact, it cannot be ruled out that a tediously accomplished CCU process is ultimately environmentally less sustainable than a conventional fossil-based route. Therefore, it is desirable to know whether a specific CCU process is environmentally favorable. For this purpose, a reliable environmental assessment of CCU is required. As indicators for the environmental performance of CCU, a large variety of approaches have been proposed, ranging from qualitative design principles (Anastas and Warner, 1998) and metrics for green chemistry (Constable et al., 2002) to CCU-specific ad-hoc criteria (Peters et al., 2011b; Muller and Arlt, 2014). These approaches are intended to guide the development towards sustainable CCU processes rather than to systematically quantify the actual environmental impacts. In contrast to these approaches, Life-Cycle Assessment (LCA) is a systematic and standardized methodology to analyze the actual environmental impacts of products and processes (ISO 14040, 2009). Although LCA is frequently advocated for the environmental assessment of CCU (Aresta and Dibenedetto, 2007b; Peters et al., 2011b; Quadrelli et al., 2011), it is not yet standard practice (Schaffner et al., 2014). The reasons for this are the complexity of LCA as well as the limited data availability for many CCU processes at early design stages (Quadrelli et al., 2011). In this context, this thesis pursues two major goals: First, the thesis enables and supports the reliable environmental assessment of CCU processes using LCA. To overcome the complexity of LCA and to enable LCA novices to apply LCA to CCU, a jargon-free introduction to LCA in the context of CCU is presented. Furthermore, a framework for LCA of CCU is derived to avoid severe pitfalls in LCA of CCU. A case study on CO2-based polymers illustrates the application of LCA as well as the size and origin of the environmental benefits of CCU.
The second goal of this thesis is to provide an LCA-based approach to support the design of environmentally bene Shipping may be from multiple locations in the US or from the UK, depending on stock availability.