Search results for: Smart Grid
89 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge Industry 4.0 and 5.0 technology. AGILEHAND aims to create advanced technologies to autonomously sort, handle and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to adhere correctly to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65
88 Digital Twins in the Built Environment: A Systematic Literature Review
Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John
Abstract:
Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting abilities and operations management, DT do not fall under a singular scope. This makes defining and leveraging the potential uses of DT a potentially missed opportunity. Despite their recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to its operational-stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of the DT. Such consolidation is indispensable to ensure that future research is undertaken with a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis was conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and their content was translated into a thorough classification of DT. In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonization goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancements in the implementation of DT within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.
Keywords: built environment, design, digital twins, literature review
Procedia PDF Downloads 81
87 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which is the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied with ordinary single-phase grid current (230 V, 50 Hz), which is then converted to 24 V direct current in modules serving the individual electrodes; this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and because the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current values mean small energy consumption by the electrode, which is extremely low, only 35 W per electrode, compared to other methods of disintegration. The pipes with electrodes, with a diameter of DN150, are made of acid-proof steel and connected on both sides with 90° elbows ended with flanges. The available S and U types of pipes enable very convenient fitting of the system into existing installations and rooms or facilitate space management in new applications. The system of pipes for electrokinetic disintegration may be installed horizontally, vertically or at an angle, on special stands or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, type of biomass, content of dry matter, method of disintegration (single-pass or circulatory), mounting site, etc. The most effective method involves pre-treatment of substrate that may be pumped through the disintegration system on the way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). Biomass structure destruction in the process of electrokinetic disintegration shortens the substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary, but highly significant effect for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration and may result in energy savings reaching 20-30% of the previously noted consumption. Other observed phenomena include reduction of the surface scum layer, reduced tendency of the sewage to foam and a successive decrease in the quantity of bottom sludge banks. Considering the above, the system for electrokinetic disintegration seems a very interesting and valuable solution within the offer of specialist equipment for the processing of plant biomass, including Virginia fanpetals, before the process of methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
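For a sense of scale, the quoted figures (35 W per electrode, up to 18% more biogas, 20-30% agitator savings) can be put into a rough daily energy balance. The Python sketch below does this for a hypothetical installation; the number of electrodes, agitator rating and baseline biogas output are assumed placeholders, not values from the study.

```python
# Minimal energy-balance sketch for electrokinetic disintegration.
# Figures marked "assumed" are illustrative placeholders, not study data.
ELECTRODE_POWER_W = 35          # per electrode (from the text)
N_ELECTRODES = 10               # assumed installation size
AGITATOR_POWER_KW = 15.0        # assumed tank agitator rating
AGITATOR_SAVING = 0.25          # 20-30% viscosity-related saving (midpoint)
BIOGAS_GAIN = 0.18              # up to 18% more biogas (from the text)
BASE_BIOGAS_KWH_PER_DAY = 4000  # assumed energy-equivalent baseline output

disintegration_kwh = ELECTRODE_POWER_W * N_ELECTRODES * 24 / 1000
agitator_saving_kwh = AGITATOR_POWER_KW * AGITATOR_SAVING * 24
extra_biogas_kwh = BASE_BIOGAS_KWH_PER_DAY * BIOGAS_GAIN

print(f"Disintegration input:  {disintegration_kwh:6.1f} kWh/day")
print(f"Agitator saving:       {agitator_saving_kwh:6.1f} kWh/day")
print(f"Extra biogas (equiv.): {extra_biogas_kwh:6.1f} kWh/day")
net = agitator_saving_kwh + extra_biogas_kwh - disintegration_kwh
print(f"Net daily benefit:     {net:6.1f} kWh/day")
```

Even with these conservative placeholders, the electrode input is small compared with the agitator savings alone, which is the point the abstract makes about the method's energy balance.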
Procedia PDF Downloads 377
86 Machine Learning and Internet of Things for Smart-Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex space, the study of these variables can only be mastered by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that best determine the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should oscillate between 5 and 10 km; in this way, the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the network devices (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption platforms, such as ultra-low-power ARM Cortex microcontrollers or even the Cortex-M, together with high-performance DC-DC converters. The machine learning system has initiated the learning of the Shullcas system to generate the best hydrological model of the sub-basin. This will improve as the machine learning and the data entered into the big data store converge second by second. This will provide services to each of the applications of the complex system, returning the best data on the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
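As an illustration of the algorithm-selection step, the sketch below compares several regressors on a rainfall-runoff dataset by cross-validated error. The column choices and synthetic data are hypothetical stand-ins for the sensor records described above, not the project's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical sensor records: rainfall (mm), soil moisture (%), temperature (C)
X = rng.uniform([0, 10, 0], [50, 60, 25], size=(500, 3))
# Synthetic runoff with a nonlinear rainfall-runoff relationship plus noise
y = 0.6 * X[:, 0] ** 1.2 + 0.1 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 2, 500)

models = {
    "linear": LinearRegression(),
    "svr": SVR(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# Pick the algorithm with the best cross-validated error, as the learning
# system does for each polygon of the sub-basin.
for name, model in models.items():
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:14s} RMSE = {rmse:.2f}")
```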
Procedia PDF Downloads 160
85 Innovation and Entrepreneurship in the South of China
Authors: Federica Marangio
Abstract:
This study looks at the knowledge triangle of research, education and innovation as the growth engine of an inclusive and sustainable society, where research is the strategic process that allows the acquisition of knowledge, innovation puts the acquired knowledge to value, and education is the factor enabling human capital to create entrepreneurial capital. Where do Italy and China stand in the global geography of innovation? Europe is calling for smart, inclusive and sustainable growth through a specialization process that addresses social and economic challenges and is able to understand the characteristics of specific geographic areas. It is worth asking why coming up with entrepreneurial ideas is not equally simple in all geographic areas. Given that technology plus human capital should be the means through which it is possible to innovate and contribute to boosting an innovation culture, young educated people can be seen as society's agents of change, and the importance of investigating the skills and competencies that lead to innovation becomes clear. By starting innovation-based activities, other countries at the international level are now able to be part of a healthy innovative ecosystem, the result of a strong growth policy that enables innovation. Analyzing the geography of innovation on a global scale, it comes to light that innovative entrepreneurship is the process that portrays the competitiveness of regions in the knowledge-based economy, a strategic process able to match intellectual capital with market opportunities. The level of innovative entrepreneurship is the result not only of the endogenous growth capacity of enterprises, but also of significant relations with other enterprises, universities, other centers of education and institutions. To obtain more innovative entrepreneurship, it is necessary to stimulate more synergy between all these territorial actors in order to create, access and value existing and new knowledge ready to be disseminated. This study focuses on individuals' lived experience; the researcher believed that she could not understand human actions without understanding the meaning that people attribute to their thoughts, feelings and beliefs, and so she needed to capture deeper perspectives through face-to-face interaction. A case study approach will contribute to the betterment of knowledge in this field. The case study will represent a picture of the innovative ecosystem and of the entrepreneurial mindset as a key ingredient of endogenous growth and a must for sustainable local and regional development and social cohesion. It will be realized by analyzing two Chinese companies. A structured set of questions will be asked in order to gain details on what generated success or failure in the different situations, both in the past and at the time of the research. Everything will be recorded so as not to lose important information during the transcription phase. While this work is not geared toward testing a priori hypotheses, it is nevertheless useful to examine whether the projects undertaken by the companies were stimulated by enabling factors that, as a result, enhanced or hampered the local innovation culture.
Keywords: entrepreneurship, education, geography of innovation
Procedia PDF Downloads 418
84 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture
Authors: Kimberly Samaha
Abstract:
The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants. The lack of an economically viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. This study analyzed data and submittals to the Born Global Maine Innovation Challenge, a global innovation challenge to identify process innovations that could address a ‘whole-tree’ approach of maximizing the products, byproducts, energy value and process slip-streams into a circular zero-waste design. Participating companies were at various stages of developing bioproducts, including biofuels, lignin-based products, carbon capture platforms and biochar used both as a filtration medium and as a soil amendment product. This case study shows the Qualitative Comparative Analysis (QCA) methodology of the prequalification process and the resulting techno-economic model that was developed for maximizing the profitability of the Bio-Hub Ecosystem through continuous expansion of system waste streams into valuable process inputs for co-hosts. A full site plan was developed for the integration of co-hosts (a biorefinery, land-based shrimp and salmon aquaculture farms, a tomato greenhouse and a hops farm) at an operating forestry-based biomass-to-energy plant in West Enfield, Maine, USA. This model and process for evaluating profitability not only proposes models for the integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allows for the early measurement of circularity and of the impact of resource use, and for investment risk mitigation, for these systems. In this particular study, profitability is assessed at two levels: CAPEX (Capital Expenditures) and OPEX (Operating Expenditures). Given that these projects start by repurposing facilities where industrial-level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, using the biomass energy plant’s waste streams, such as heat, hot water, CO₂ and fly ash, as valuable inputs to their operations brings a significant decrease in OPEX costs, increasing overall profitability for each of the co-hosts’ bottom lines. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing the waste streams of a biomass energy plant and/or biorefinery results in a significant reduction in OPEX for both the biomass plant and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.
Keywords: bio-economy, biomass energy, financing, zero-waste
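To make the CAPEX/OPEX framing concrete, here is a minimal, purely illustrative Python tally of how co-host OPEX savings from plant waste streams might be computed. Since the study's techno-economic model is proprietary, every figure below is an assumed placeholder.

```python
# Illustrative co-host OPEX model; every figure below is an assumed placeholder.
waste_stream_value = {        # $/year a co-host saves by using plant outputs
    "heat": 120_000,
    "hot_water": 40_000,
    "co2": 25_000,
    "fly_ash": 10_000,
}

co_hosts = {                  # stand-alone OPEX vs streams each co-host can use
    "greenhouse": (450_000, ["heat", "co2", "hot_water"]),
    "shrimp_farm": (600_000, ["heat", "hot_water"]),
    "hops_farm": (200_000, ["fly_ash"]),
}

for name, (baseline_opex, streams) in co_hosts.items():
    saving = sum(waste_stream_value[s] for s in streams)
    print(f"{name:12s} OPEX {baseline_opex:>9,} -> {baseline_opex - saving:>9,} "
          f"({saving / baseline_opex:.0%} saved)")
```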
Procedia PDF Downloads 134
83 Temporal and Spatial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods
Authors: Dario Milani, Guido Morgenthal
Abstract:
Fluid dynamic computation of wind-caused forces on bluff bodies, e.g., light flexible civil structures or airplane wings approaching the ground at high incidence, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the usage of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for the CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream velocity as well as diffused. This representation yields the main advantages of low numerical diffusion; compact discretization, as the vorticity is strongly localized; implicit treatment of the free-space boundary conditions typical of this class of FSI problems; and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM method, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization might become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution, without substantially increasing the global computational cost, by computing a correction of the particle-particle interaction in certain regions of interest. In this paper, different strategies are presented in order to extend the conventional VPM method to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to locally control the global and local numbers of particles. Finally, these methods are applied to a test case, and the improvements in the efficiency as well as the accuracy of the proposed extensions to the method are presented, along with their relevant applications.
Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method
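The O(Np²) cost arises from the pairwise Biot-Savart summation at the heart of the method. The minimal 2D sketch below, with an assumed Gaussian-regularized kernel and invented particle data, shows the direct summation that the adaptive strategies above aim to make affordable.

```python
import numpy as np

def induced_velocity(pos, gamma, sigma=0.05):
    """Direct O(N^2) Biot-Savart summation for 2D vortex particles.

    pos:   (N, 2) particle positions
    gamma: (N,)   particle circulations
    sigma: Gaussian core radius regularizing the singular kernel (assumed)
    """
    dx = pos[:, None, :] - pos[None, :, :]          # pairwise separations
    r2 = (dx ** 2).sum(-1) + 1e-30
    # Regularized kernel: K(r) = (1 - exp(-r^2 / (2 sigma^2))) / (2 pi r^2)
    k = (1.0 - np.exp(-r2 / (2 * sigma ** 2))) / (2 * np.pi * r2)
    # Velocity induced by vortex j at particle i: gamma_j * K * (-dy, dx)
    u = -(k * gamma[None, :] * dx[:, :, 1]).sum(1)
    v = (k * gamma[None, :] * dx[:, :, 0]).sum(1)
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(1)
pos = rng.uniform(-1, 1, (1000, 2))     # invented particle cloud
gamma = rng.normal(0, 1e-3, 1000)
vel = induced_velocity(pos, gamma)      # cost grows quadratically with N
print(vel.shape)
```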
Procedia PDF Downloads 262
82 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of various electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver a reasonable estimation accuracy given the requirements of least computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions have been made, which became de facto standards for establishing guidelines for harmonic analysis. Among others, typical assumptions include balanced study conditions and a negligible impact of the impedance-frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further, the simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a certain device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, with the massive increase in the usage of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to produce a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated current harmonic response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates sufficient differences from cases where all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
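The effect of phase-angle diversity can be illustrated with a short Python sketch: summing harmonic load currents as complex phasors captures cancellation, whereas the traditional magnitude-only summation overestimates distortion. The load magnitudes and angles below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
harmonics = [3, 5, 7]
n_loads = 20
i_fund = 10.0   # A, assumed aggregate fundamental current

thd_phasor2, thd_arith2 = 0.0, 0.0
for h in harmonics:
    mags = rng.uniform(0.1, 0.5, n_loads)           # A, invented magnitudes
    angles = rng.uniform(0, 2 * np.pi, n_loads)     # diverse phase angles
    i_phasor = abs(np.sum(mags * np.exp(1j * angles)))  # diversity -> cancellation
    i_arith = mags.sum()                            # classical worst-case sum
    thd_phasor2 += i_phasor ** 2
    thd_arith2 += i_arith ** 2

print(f"THD with phase diversity: {np.sqrt(thd_phasor2) / i_fund:.1%}")
print(f"THD, magnitude-only sum:  {np.sqrt(thd_arith2) / i_fund:.1%}")
```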
Procedia PDF Downloads 159
81 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections
Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz
Abstract:
In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultra-short pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified as a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape follows from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation the asymptotic shape forms in such a way that, along the whole contour, the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops or even smart devices. Resulting hole shapes can be calculated within seconds, which is a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage. Individual parameters can be adjusted using sliders, while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can use the tool on a tablet to adjust the process in a very convenient manner, while the developer can execute the tool in the office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on the investigation of the effect of multiple reflections on the asymptotic hole shape, which gains in importance when drilling holes with large aspect ratios.
Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process
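A minimal sketch of the asymptotic-shape condition for a single absorption event, i.e., ignoring the multiple reflections that are the focus of this paper: if the absorbed flux I(r)·cos θ along the wall equals the intensity threshold I_th, the wall slope follows as |dz/dr| = sqrt((I(r)/I_th)^2 - 1). The Gaussian beam and threshold value in the Python sketch below are assumed, not calibrated values.

```python
import numpy as np

# Assumed beam and process parameters (illustrative, not from the paper)
I0 = 5.0e10        # W/m^2, peak intensity of an assumed Gaussian beam
w0 = 100e-6        # m, beam radius
I_th = 1.0e10      # W/m^2, intensity threshold (calibrated in practice)

r = np.linspace(0, 2 * w0, 2000)
I = I0 * np.exp(-2 * (r / w0) ** 2)     # Gaussian intensity profile

# Asymptotic condition (single absorption, no multiple reflections):
# absorbed flux I(r)*cos(theta) = I_th  =>  |dz/dr| = sqrt((I/I_th)^2 - 1)
slope = np.sqrt(np.clip((I / I_th) ** 2 - 1.0, 0.0, None))

# Integrate inward from the rim (where I = I_th) to get the depth profile
z = np.cumsum(slope[::-1])[::-1] * (r[1] - r[0])   # depth below the surface

rim = r[slope > 0][-1]
print(f"Hole radius ~ {rim * 1e6:.0f} um, centre depth ~ {z[0] * 1e6:.0f} um")
```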
Procedia PDF Downloads 213
80 Partial Discharge Characteristics of Free-Moving Particles in HVDC-GIS
Authors: Philipp Wenger, Michael Beltle, Stefan Tenbohlen, Uwe Riechert
Abstract:
The integration of renewable energy introduces new challenges to the transmission grid, as power generation is located far from load centers. The associated necessary long-range power transmission increases the demand for high voltage direct current (HVDC) transmission lines and DC distribution grids. HVDC gas-insulated switchgears (GIS) are considered a key technology, due to the combination of DC technology with the long operating experience of AC-GIS. To ensure the long-term reliability of such systems, insulation defects must be detected at an early stage. Operational experience with AC systems has provided evidence that most failures which can be attributed to breakdowns of the insulation system can be detected and identified via partial discharge (PD) measurements beforehand. In AC systems, the identification of defects relies on the phase-resolved partial discharge pattern (PRPD). Since there is no phase information within DC systems, this method cannot be transferred to DC PD diagnostics. Furthermore, the behaviour of, e.g., free-moving particles differs significantly at DC: under the influence of a constant direct electric field, charge carriers can accumulate on particles' surfaces. As a result, a particle can lift off, oscillate between the inner conductor and the enclosure, or rapidly bounce at just one electrode, which is known as firefly motion. Depending on the motion and the relative position of the particle to the electrodes, broadband electromagnetic PD pulses are emitted, which can be recorded by ultra-high frequency (UHF) measuring methods. PDs are often accompanied by light emissions at the particle's tip, which enables optical detection. This contribution investigates the PD characteristics of free-moving metallic particles in a commercially available 300 kV SF6-insulated HVDC-GIS. The influences of various defect parameters on the particle motion and the PD characteristics are evaluated experimentally. Several particle geometries, such as cylinders, lamellae, spirals and spheres with different lengths, diameters and weights, are examined. The applied DC voltage is increased stepwise from the inception voltage up to UDC = ±400 kV. Different physical detection methods are used simultaneously in a time-synchronized setup. Firstly, the electromagnetic waves emitted by the particle are recorded by a UHF measuring system. Secondly, a photomultiplier tube (PMT) detects light emission with wavelengths in the range of λ = 185…870 nm. Thirdly, a high-speed camera (HSC) tracks the particle's motion trajectory with high accuracy. Furthermore, an electrically insulated electrode is attached to the grounded enclosure and connected to a current shunt in order to detect low-frequency ion currents; the shunt measuring system's sensitivity is in the range of 10 nA at a measuring bandwidth of bw = DC…1 MHz. Currents of charge carriers which are generated at the particle's tip migrate through the gas gap to the electrode and can be recorded by the current shunt. All recorded PD signals are analyzed in order to identify characteristic properties of the different particles. This includes, e.g., repetition rates and amplitudes of successive pulses, characteristic frequency ranges and the detected signal energy of single PD pulses. Concluding, an advanced understanding of the underlying physical phenomena of particle motion in a direct electric field can be derived.
Keywords: current shunt, free moving particles, high-speed imaging, HVDC-GIS, UHF
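As an illustration of the pulse-feature analysis described above, the following Python sketch extracts amplitudes, repetition rate and per-pulse energy from a synthetic PD record; the waveform parameters are invented, not measured data.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100e6                        # Hz, assumed sampling rate of the record
t = np.arange(0, 10e-3, 1 / fs)   # 10 ms synthetic record
rng = np.random.default_rng(7)

# Synthetic record: noise plus randomly timed, exponentially decaying PD pulses
x = rng.normal(0, 0.01, t.size)
for t0 in np.sort(rng.uniform(0, 10e-3, 40)):
    idx = t >= t0
    x[idx] += rng.uniform(0.2, 1.0) * np.exp(-(t[idx] - t0) / 50e-9)

# Detect pulses above a fixed threshold (~10x the assumed noise floor),
# enforcing a 1 us dead time between successive detections
peaks, props = find_peaks(np.abs(x), height=0.1, distance=int(1e-6 * fs))

amplitudes = props["peak_heights"]
rep_rate = peaks.size / t[-1]                                  # pulses per second
energies = [np.sum(x[p:p + int(1e-6 * fs)] ** 2) / fs for p in peaks]

print(f"{peaks.size} pulses, mean amplitude {amplitudes.mean():.2f}, "
      f"repetition rate {rep_rate:.0f} 1/s")
```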
Procedia PDF Downloads 160
79 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels, as it is easy to produce and generates only water vapour, with zero carbon emissions. However, despite hydrogen's benefits, the widespread adoption of hydrogen fuel cell vehicles and internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructure remains one of the main hindering factors, due to the high costs associated with its design, construction and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure. Simultaneously, the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operation costs of hydrogen refueling stations (HRS) are essential to put both investors and customers at ease. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than that of personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study focuses on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole, leveraging the latest relevant technological advances and aiming to reduce costs while ensuring reliability, safety and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional Joule-Thomson (J-T) valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg; thus, the HRS network can produce up to 12.5 tH2 per day. The detailed cost analysis revealed a CAPEX per station of 16.6 M euros, leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes Metropole's 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, and thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW; the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station's profit. Despite the substantial initial investment required, the environmental benefits, cost savings and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combatting climate change in the long term.
Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
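A quick Python check of the cost figures quoted above; only the numbers stated in the text are used.

```python
# Cost arithmetic for the proposed 7-station HRS network (figures from the text)
n_stations = 7
capex_per_station = 16.6e6      # euros
saving_per_station = 708_560    # euros per station, J-T valves -> turbo-expanders

print(f"Network CAPEX:   {n_stations * capex_per_station / 1e6:.1f} M euros")   # 116.2
print(f"Network savings: {n_stations * saving_per_station / 1e6:.2f} M euros")  # ~4.96
```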
Procedia PDF Downloads 56
78 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks
Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios
Abstract:
To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings through the heat pipe network. Two case studies are considered: one for Vransko, Slovenia, and one for Montpellier, France. The data consist of (i) local weather data, such as humidity, temperature and precipitation, (ii) weather forecast data, such as the outdoor temperature, and (iii) DHC operational parameters, such as the mass flow rate and the supply and return temperatures. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and, more specifically, recurrent networks with long short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case study. Subsequently, we develop models to forecast the thermal demand for the same period, taking into consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand
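A minimal Keras sketch of this kind of LSTM forecaster is shown below; the window lengths, layer width and synthetic demand/temperature series are illustrative assumptions, not the project's configuration.

```python
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

rng = np.random.default_rng(0)

# Synthetic hourly series: thermal demand and outdoor temperature over ~1 year
hours = np.arange(24 * 365)
temp = 10 + 8 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 1, hours.size)
demand = 50 - 1.5 * temp + rng.normal(0, 2, hours.size)  # demand rises as it gets cold

# Sliding windows: 48 h of (demand, temperature) history -> next 24 h of demand
X = np.stack([np.column_stack([demand[i:i + 48], temp[i:i + 48]])
              for i in range(hours.size - 72)])
y = np.stack([demand[i + 48:i + 72] for i in range(hours.size - 72)])

model = Sequential([
    LSTM(64, input_shape=(48, 2)),   # captures non-linear temporal relations
    Dense(24),                       # one output per forecast hour
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)

print("held-out MSE:", model.evaluate(X[-500:], y[-500:], verbose=0))
```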
Procedia PDF Downloads 142
77 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses
Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen
Abstract:
Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw-funded, 2014-2018) is to develop an online education and communication One Health platform, “eZoon”, for the general public and professionals working in the veterinary, human health and public health domains, to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions, consisting of a literature scan, expert recommendations and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers. Results: The following key values were identified (among others): (a) the need for improved awareness in the veterinary and human health fields of each other's domains; (b) information exchange between veterinary and human health, particularly at a regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately, using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and with financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting the key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level will be trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media channels to inform the general public adequately during outbreaks.
Keywords: ehealth, one health, risk communication, stakeholder, zoonosis
Procedia PDF Downloads 286
76 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress placed on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected closer to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale, due to the novelty and complexity of the solution and the competitive advantage of fossil-fuel-based technologies with regard to regulations. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study deliberately excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop peak-shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge-recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; second, algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast, for which the impact of forecast accuracy is discussed. A performance metric was defined in order to quantitatively evaluate their operation regarding peak reduction, total energy consumption and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed with respect to these metrics. The results show that a constant charging threshold or power is far from optimal: a fixed value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, these depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
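A minimal Python sketch of an SOC-based peak-shaving dispatch of the kind categorized above; the battery ratings, thresholds and spiky 1-minute load profile are illustrative assumptions.

```python
import numpy as np

# Assumed battery and algorithm parameters (illustrative only)
CAP_KWH, P_MAX_KW, DT_H = 10.0, 3.0, 1 / 60   # capacity, power limit, 1-min step
SHAVE_KW = 2.5     # discharge whenever the household load exceeds this
RECOVER_KW = 1.5   # recharge only while the load stays below this
SOC_TARGET = 0.8   # keep enough energy ready for the next peak

rng = np.random.default_rng(3)
load = np.clip(1.0 + rng.normal(0, 0.3, 24 * 60) +
               3.0 * (rng.random(24 * 60) < 0.02), 0, None)  # spiky 1-min profile

soc, grid = 0.5, []
for p_load in load:
    if p_load > SHAVE_KW:                            # shave the peak
        p_bat = -min(P_MAX_KW, p_load - SHAVE_KW, soc * CAP_KWH / DT_H)
    elif p_load < RECOVER_KW and soc < SOC_TARGET:   # quiet period: recover charge
        p_bat = min(P_MAX_KW, RECOVER_KW - p_load,   # never push grid above
                    (SOC_TARGET - soc) * CAP_KWH / DT_H)  # the recovery threshold
    else:
        p_bat = 0.0
    soc += p_bat * DT_H / CAP_KWH                    # ideal efficiency assumed
    grid.append(p_load + p_bat)

print(f"peak before: {load.max():.2f} kW, after: {max(grid):.2f} kW")
```

Note how charging is capped at RECOVER_KW minus the current load, so charge recovery can never create a new peak, which is exactly the failure mode a naive constant-power recovery suffers from.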
Procedia PDF Downloads 117
75 The Design of a Phase I/II Trial of Neoadjuvant RT with Interdigitated Multiple Fractions of Lattice RT for Large High-grade Soft-Tissue Sarcoma
Authors: Georges F. Hatoum, Thomas H. Temple, Silvio Garcia, Xiaodong Wu
Abstract:
Soft tissue sarcomas (STS) represent a diverse group of malignancies with heterogeneous clinical and pathological features. The treatment of extremity STS aims to achieve optimal local tumor control, improved survival and preservation of limb function. The National Comprehensive Cancer Network guidelines, based on the accumulated clinical data, recommend radiation therapy (RT) in conjunction with limb-sparing surgery for large, high-grade STS measuring greater than 5 cm in size. Such a treatment strategy can offer a cure for patients. However, when recurrence occurs (in nearly half of patients), the prognosis is poor, with a median survival of 12 to 15 months and only palliative treatment options available. Spatially fractionated radiotherapy (SFRT), with a long history of treating bulky tumors as a non-mainstream technique, has gained new attention in recent years due to its unconventional therapeutic effects, such as bystander/abscopal effects. Combining a single fraction of GRID, the original form of SFRT, with conventional RT was shown to marginally increase the rate of pathological necrosis, which has been recognized to have a positive correlation with overall survival. In an effort to consistently increase the pathological necrosis rate above 90%, multiple fractions of Lattice RT (LRT), a newer form of 3D SFRT, interdigitated with standard RT as neoadjuvant therapy, were delivered in a preliminary clinical setting. With favorable results of over a 95% necrosis rate in a small cohort of patients, a Phase I/II clinical study was proposed to examine the safety and feasibility of this new strategy. Herein, the design of the clinical study is presented. In this single-arm, two-stage phase I/II clinical trial, the primary objectives are for >80% of the patients to achieve >90% tumor necrosis and to evaluate the toxicity; the secondary objectives are to evaluate local control, disease-free survival and overall survival (OS), as well as the correlation between clinical response and relevant biomarkers. The study plans to accrue patients over a span of two years. All patients will be treated with the new neoadjuvant RT regimen, in which one of every five fractions of conventional RT is replaced by an LRT fraction with vertices receiving a dose of ≥10 Gy while keeping the tumor periphery at or close to 2 Gy per fraction. Surgical removal of the tumor is planned to occur 6 to 8 weeks following the completion of radiation therapy. The study will employ a Pocock-style early stopping boundary to ensure patient safety. The patients will be followed and monitored for a period of five years. Despite much effort, the rarity of the disease has resulted in limited novel therapeutic breakthroughs. Although a higher rate of treatment-induced tumor necrosis has been associated with improved OS, with the current techniques only 20% of patients with large, high-grade tumors achieve a tumor necrosis rate exceeding 50%. If this new neoadjuvant strategy is proven effective, an appreciable improvement in clinical outcome without added toxicity can be anticipated. Due to the rarity of the disease, it is hoped that such a study could be orchestrated in a multi-institutional setting.
Keywords: lattice RT, necrosis, SFRT, soft tissue sarcoma
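For intuition on the Pocock-style safety monitoring mentioned above, the Python sketch below simulates a toxicity stopping rule in which every interim look uses the same nominal significance level, calibrated so that the overall false-stop probability stays near 5%. The toxicity rate, look schedule and cohort sizes are assumed for illustration, not trial parameters.

```python
import numpy as np
from scipy.stats import binom

# Illustrative Pocock-style toxicity stopping rule: all interim looks share one
# nominal alpha, chosen by simulation to control the overall type I error.
P0 = 0.15                   # acceptable toxicity rate (assumed)
LOOKS = [10, 20, 30, 40]    # interim analyses after this many patients (assumed)

def stops(tox_per_patient, alpha_nom):
    for n in LOOKS:
        k = tox_per_patient[:n].sum()
        if binom.sf(k - 1, n, P0) < alpha_nom:   # P(X >= k | p = P0)
            return True
    return False

rng = np.random.default_rng(0)
trials = rng.random((5000, LOOKS[-1])) < P0      # simulate trials under the null
for alpha_nom in (0.005, 0.01, 0.02, 0.03):
    rate = np.mean([stops(t, alpha_nom) for t in trials])
    print(f"nominal alpha {alpha_nom:.3f} -> overall false-stop rate {rate:.3f}")
```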
Procedia PDF Downloads 60
74 Mobile Phones, (Dis) Empowerment and Female Headed Households: Trincomalee, Sri Lanka
Authors: S. A. Abeykoon
Abstract:
This study explores the empowerment potential of the mobile phone, a widely penetrated and greatly affordable communication technology in Sri Lanka, for female heads of households in Trincomalee District, Sri Lanka, an area recovering from the effects of a 30-year civil war and the 2004 Boxing Day tsunami. It also investigates how the use of mobile phones by these women is shaped and appropriated by the gendered power relations and inequalities in their respective communities and by their socio-economic factors and demographic characteristics. This qualitative study is based on the epistemology of constructionism; on interpretivist, functionalist and critical theory approaches; and on the process of action research. Data collection was conducted from September 2014 to November 2014 in two Divisional Secretariats of the Trincomalee District, Sri Lanka. A total of 30 semi-structured depth interviews and six focus groups with female heads of households of Sinhalese, Tamil and Muslim ethnicities were conducted using purposive, representative and snowball sampling methods. The grounded theory method was used to analyze the transcribed interviews, focus group discussions and field notes, which were coded and categorized in accordance with the research questions and the theoretical framework of the study. The findings of the study indicated that the mobile phone has mainly enabled the participants to balance their income-earning activities and family responsibilities, and has been useful in maintaining their family and social relationships and occupational duties and in making decisions. Thus, it provided them a higher level of security, safety, reassurance and self-confidence in carrying out their daily activities. They also practiced innovative strategies for the effective and efficient management of their mobile expenses. Although participants whose husbands or relatives had migrated tended more to use smartphones, the mobile literacy level of the majority of the participants was low, limited to making and receiving calls and using SMS (Short Message Service) services. However, their interaction with the mobile phone was significantly shaped by gendered power relations and by their multiple identities based on ethnicity, religion, class, education, profession and age. Almost all the participants were cautious about giving their mobile numbers to men and had been harassed with ‘nuisance calls’ from men. For many, the ownership and use of their mobile phone was shaped and influenced by their children and migrated husbands. Although these practices limit their use of the technology, there were many instances in which they challenged these gendered harassments. While man-made and natural disasters have disempowered and victimized women in Sri Lankan society, they have also liberated women, making them stronger and transforming their agency and traditional gender roles. Therefore, their present position in society is reflected in their mobile phone use, as mobile phones assist such women to be more self-reliant and liberated, yet at times also leave them disempowered.
Keywords: mobile phone, gender power relations, empowerment, female heads of households
Procedia PDF Downloads 336
73 Real-Time Neuroimaging for Rehabilitation of Stroke Patients
Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge
Abstract:
Rehabilitation of stroke patients is dominated by classical physiotherapy. Nowadays, a field of research is the application of neurofeedback techniques in order to help stroke patients overcome their motor impairments. Especially if a certain limb is completely paralyzed, neurofeedback is often the last option to cure the patient. Certain exercises, like the imagination of the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the neighboring parts of the injured cortex. During the exercises, it is very important to keep the motivation of the patient at a high level. For this reason, the natural feedback that is missing due to the absent movement of the affected limb may be replaced by a synthetic feedback based on the motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback features high specificity and comes in real time with little delay. We describe such an approach, which offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent the detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set must not be in the same range as in the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, which is visualized as a color-coded 3D activity map. This approach mitigates the influence of eyelid artifacts on the localization performance. The first results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm. In most cases, the results match the findings from visual EEG analysis. Frequent eyelid artifacts have no influence on the system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. The first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation
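A simplified Python sketch of the channel-selection and thresholding step described above. The filter band, threshold values, posterior comparison set and synthetic signals are assumptions for illustration; the actual system works in the frequency domain and adds source localization on top of this step.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 250                                   # Hz, assumed sampling rate
MOTOR_SET = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]
OTHER_SET = ["O1", "O2", "PZ"]             # posterior channels, assumed montage

def alpha_amplitude(sig, fs=FS, band=(8.0, 13.0)):
    """Mean envelope amplitude in the alpha band (8-13 Hz assumed)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, sig))).mean()

rng = np.random.default_rng(5)
t = np.arange(0, 2, 1 / FS)
eeg = {ch: rng.normal(0, 1, t.size) for ch in MOTOR_SET + OTHER_SET}
eeg["C3"] += 3 * np.sin(2 * np.pi * 10 * t)   # synthetic mu-rhythm at C3

amps = {ch: alpha_amplitude(eeg[ch]) for ch in eeg}
main = max(MOTOR_SET, key=amps.get)           # most prominent motor channel

# Detect only if the motor channel clearly dominates the posterior channels,
# to reject posterior alpha masquerading as mu-activity
detected = (amps[main] > 1.0 and
            amps[main] > 1.5 * max(amps[ch] for ch in OTHER_SET))
print(f"main channel: {main}, mu-rhythm detected: {detected}")
```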
Procedia PDF Downloads 387
72 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves the capabilities of standard decision tree models in both speed and reduction of memory consumption, while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. The results for the LGBM and resilient LGBM will be compared using standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, incrementing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence. Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
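A day-ahead LGBM forecaster of this kind can be sketched as follows. This is a minimal illustration, not the project's code: the calendar and lag features, the model hyperparameters, and the rolling-RMSE drift test standing in for the paper's (unspecified) drift detector are all assumptions.

```python
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor

def make_features(load: pd.Series) -> pd.DataFrame:
    """Calendar and lag features for an hourly load series (assumed feature set).
    `load` must carry a DatetimeIndex."""
    df = pd.DataFrame({"load": load})
    df["hour"] = df.index.hour
    df["weekday"] = df.index.weekday
    df["lag24"] = df["load"].shift(24)    # same hour yesterday
    df["lag168"] = df["load"].shift(168)  # same hour last week
    return df.dropna()

def fit_day_ahead(df: pd.DataFrame) -> LGBMRegressor:
    X, y = df[["hour", "weekday", "lag24", "lag168"]], df["load"]
    return LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)

def drift_detected(errors: np.ndarray, window: int = 168, factor: float = 1.5) -> bool:
    """Crude drift test: recent RMSE much larger than historical RMSE.
    A 'resilient' variant would retrain on recent data when this fires."""
    if len(errors) <= 2 * window:
        return False
    recent = np.sqrt(np.mean(np.square(errors[-window:])))
    past = np.sqrt(np.mean(np.square(errors[:-window])))
    return recent > factor * past

# Toy usage on synthetic hourly data standing in for a Barcelona household.
idx = pd.date_range("2020-01-01", periods=24 * 90, freq="H")
load = pd.Series(1.0 + 0.5 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24), index=idx)
model = fit_day_ahead(make_features(load))
```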
Procedia PDF Downloads 105
71 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin
Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng
Abstract:
The residual antibiotics in the environment pose a threat to the environment and human health. Thus, efficient removal and rapid detection of norfloxacin (NOR) in wastewater are very important. The main sources of NOR pollution are agricultural, pharmaceutical-industry and hospital wastewater. The total consumption of NOR in China can reach 5,440 tons per year. Neither animals nor humans can totally absorb and metabolize NOR, resulting in the excretion of NOR into the environment; consequently, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of wastewater treatment plants for NOR is limited (it is reported that the average removal efficiency of NOR in wastewater treatment plants is only 68%); (2) NOR entering the environment leads to the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, the removal and detection technologies for NOR are applied separately, which leads to a cumbersome operation process. Developing simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation promotes the detection technology (the enrichment effect on the material surface improves the detection ability); (2) the integration of adsorption-flocculation and detection reduces the material cost and simplifies operation; (3) FTIR detection endows the water treatment agent with the ability of molecular recognition and semi-quantitative detection of pollutants. Thus, it is of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method. A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam. The novelty of this material is the combination of adsorption-flocculation technology and semi-quantitative detection technology. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate of NOR (96.92%), wide pH adaptability (pH = 4.0-10.0) and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of other water treatment agents reported in most studies. Characterization analysis indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints, and the intensity of the characteristic peaks roughly reflects the concentration. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection ability for NOR molecules. This paper provides a new idea for designing and preparing multi-functional water treatment materials to achieve simultaneous removal and semi-quantitative detection of organic pollutants in water. Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin
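The Langmuir capacity quoted above (417.4 mg/g at 25 °C) comes from fitting an isotherm to batch equilibrium data. A generic fit can be sketched as below; the data points are invented placeholders, and scipy's curve_fit is only one of several ways to estimate q_max and K_L.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) from batch tests.
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 150.0])
qe = np.array([95.0, 170.0, 260.0, 340.0, 390.0, 410.0])

(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[400.0, 0.05])
print(f"q_max = {qmax_fit:.1f} mg/g, K_L = {kl_fit:.4f} L/mg")
```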
Procedia PDF Downloads 76
70 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently. Keywords: automotive production, machine learning, process optimization, smart tolerancing
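One way to read the proposed approach: learn, from historical measurements, how much a surface point deviates, then derive a local tolerance band from the predicted spread. The sketch below is a hypothetical minimal version using a random forest; the feature set (point coordinates plus a curvature estimate) and the ±k·σ band are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: one row per measurement point on known parts.
# Features: x, y, z coordinates and a local curvature estimate.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 4))
y = 0.05 * X[:, 3] + rng.normal(0, 0.02, size=2000)  # measured deviation (mm)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def local_tolerance(points, k=3.0):
    """Tolerance band per point: mean prediction +/- k times the spread of the
    per-tree predictions, used here as a rough uncertainty proxy (an assumption)."""
    per_tree = np.stack([t.predict(points) for t in forest.estimators_])
    mean, std = per_tree.mean(axis=0), per_tree.std(axis=0)
    return mean - k * std, mean + k * std

# Tolerance ranges for five points on an unseen part geometry.
lo, hi = local_tolerance(rng.uniform(0, 1, size=(5, 4)))
```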
Procedia PDF Downloads 116
69 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, and the other uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters that lead to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand, each CNC program considering a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured, and all collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The fitness function based on the Material Removal Rate (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%. Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
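The two deterministic fitness functions can be written down directly. The sketch below is an illustrative reconstruction, not the project's code: the Kienzle coefficients (kc1.1, mc) are placeholder values for a generic steel, the mapping of feed and depth of cut to chip thickness and width is simplified, and the measured powers are invented.

```python
import numpy as np

def kienzle_power(vc, f, ap, kc11=1500.0, mc=0.25):
    """Cutting power from the Kienzle force model (simplified turning case).
    Fc = kc1.1 * b * h^(1-mc), with b ~ ap (chip width) and h ~ f (chip
    thickness); power P = Fc * vc. Coefficients and unit handling are
    placeholders, not measured values."""
    fc = kc11 * ap * f ** (1.0 - mc)   # cutting force, simplified units
    return fc * vc

def mrr(vc, f, ap):
    """Material removal rate as an energy-consumption proxy."""
    return vc * f * ap

# Hypothetical experiment log: rows of (vc, f, ap) and measured spindle power.
params = np.array([[2.0, 0.10, 1.0], [2.5, 0.12, 1.5], [3.0, 0.08, 2.0],
                   [1.5, 0.15, 1.2], [2.8, 0.11, 0.8]])
measured = np.array([510.0, 760.0, 830.0, 540.0, 600.0])  # invented values (W)

# Evaluate each fitness function by its correlation with measured power.
for name, fn in [("Kienzle", kienzle_power), ("MRR", mrr)]:
    pred = np.array([fn(*p) for p in params])
    r = np.corrcoef(pred, measured)[0, 1]
    print(f"{name}: correlation with measured power = {r:.2f}")
```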
Procedia PDF Downloads 147
68 An Integrated Real-Time Hydrodynamic and Coastal Risk Assessment Model
Authors: M. Reza Hashemi, Chris Small, Scott Hayward
Abstract:
The Northeast Coast of the US faces damaging effects of coastal flooding and winds due to Atlantic tropical and extratropical storms each year. Historically, several large storm events have produced substantial levels of damage to the region, most notably the Great Atlantic Hurricane of 1938, Hurricane Carol, Hurricane Bob and, recently, Hurricane Sandy (2012). The objective of this study was to develop an integrated modeling system that could be used as a forecasting/hindcasting tool to evaluate and communicate the risk coastal communities face from these coastal storms. This modeling system utilizes the ADvanced CIRCulation (ADCIRC) model for storm surge predictions and the Simulating WAves Nearshore (SWAN) model for the wave environment. These models were coupled, passing information to each other and computing over the same unstructured domain, allowing for the most accurate representation of the physical storm processes. The coupled SWAN-ADCIRC model was validated and has been set up to perform real-time forecast simulations (as well as hindcasts). Modeled storm parameters were then passed to a coastal risk assessment tool. This tool, which is generic and universally applicable, generates spatial structural damage estimate maps on an individual-structure basis for an area of interest. The required inputs for the coastal risk model included detailed information about the individual structures, inundation levels, and wave heights for the selected region. Additionally, calculation of wind damage to structures was incorporated. The integrated coastal risk assessment system was then tested and applied to Charlestown, a small vulnerable coastal town along the southern shore of Rhode Island. The modeling system was applied to Hurricane Sandy and a synthetic storm. In both storm cases, the effect of natural dunes on coastal risk was investigated. The resulting damage maps for Charlestown clearly showed that the dune-eroded scenarios affected more structures and increased the estimated damage. The system was also tested in forecast mode for a large Nor'easter, Stella (March 2017); the results showed good performance of the coupled model in forecast mode when compared to observations. Finally, a nearshore model, XBeach, was nested within this regional grid (ADCIRC-SWAN) to simulate nearshore sediment transport processes and coastal erosion. Hurricane Irene (2011) was used to validate XBeach on the basis of a unique beach profile dataset for the region. XBeach showed relatively good performance, being able to estimate eroded volumes along the beach transects with a mean error of 16%. The validated model was then used to analyze the effectiveness of several erosion mitigation methods that were recommended in a recent study of coastal erosion in New England: beach nourishment, coastal banks (engineered core), and submerged breakwaters, as well as artificial surfing reefs. It was shown that beach nourishment and coastal banks perform better in mitigating shoreline retreat and coastal erosion. Keywords: ADCIRC, coastal flooding, storm surge, coastal risk assessment, living shorelines
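At its core, a per-structure risk tool of this kind combines hazard values interpolated at each structure with vulnerability curves. The sketch below is a generic, hypothetical version: the depth-damage curve, the wind fragility threshold, and the structure attributes are illustrative placeholders, not the tool's actual curves.

```python
import numpy as np

# Hypothetical depth-damage curve: flood depth above the first floor (m)
# mapped to damage as a fraction of structure replacement value.
DEPTHS = np.array([0.0, 0.3, 0.6, 1.0, 1.5, 2.0, 3.0])
DAMAGE = np.array([0.0, 0.1, 0.25, 0.4, 0.55, 0.7, 0.9])

def structure_damage(flood_depth, first_floor_elev, value, wind_speed,
                     wind_threshold=50.0, wind_rate=0.01):
    """Estimated dollar damage for one structure (illustrative model).

    Flood damage: interpolate the depth-damage curve at the depth above the
    first floor. Wind damage: linear fraction above a threshold (m/s).
    Combined multiplicatively so total damage never exceeds the value."""
    depth = max(flood_depth - first_floor_elev, 0.0)
    flood_frac = np.interp(depth, DEPTHS, DAMAGE)
    wind_frac = min(max(wind_speed - wind_threshold, 0.0) * wind_rate, 1.0)
    total_frac = 1.0 - (1.0 - flood_frac) * (1.0 - wind_frac)
    return value * total_frac

# One structure: 1.2 m surge, 0.5 m first-floor elevation, $300k, 55 m/s wind.
print(structure_damage(1.2, 0.5, 300_000, 55.0))
```

Mapping this function over every structure in the study area, with hazard values taken from the coupled SWAN-ADCIRC output, yields the kind of spatial damage map described above.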
Procedia PDF Downloads 116
67 Probing Scientific Literature Metadata in Search for Climate Services in African Cities
Authors: Zohra Mhedhbi, Meheret Gaston, Sinda Haoues-Jouve, Julia Hidalgo, Pierre Mazzega
Abstract:
In the current context of climate change, supporting national and local stakeholders in making climate-smart decisions is necessary but still underdeveloped in many countries. To overcome this problem, the Global Framework for Climate Services (GFCS), implemented under the aegis of the United Nations in 2012, has initiated many programs in different countries. The GFCS contributes to the development of Climate Services, an instrument based on the production and transfer of scientific climate knowledge for specific users such as citizens, urban planning actors, or agricultural professionals. As cities concentrate economic, social and environmental issues that make them more vulnerable to climate change, the New Urban Agenda (NUA), adopted at Habitat III in October 2016, highlights the importance of paying particular attention to disaster risk management, climate and environmental sustainability and urban resilience. In order to support the implementation of the NUA, the World Meteorological Organization (WMO) has identified the urban dimension as one of its priorities and has proposed a new tool, the Integrated Urban Services (IUS), for more sustainable and resilient cities. In southern countries, climate services remain underdeveloped, which can be partially explained by problems related to their economic financing. In addition, it is often difficult to make climate change a priority in urban planning, given the more traditional urban challenges these countries face, such as massive poverty and high population growth. Climate Services and Integrated Urban Services, particularly in African cities, are expected to contribute to the sustainable development of cities. These tools will help promote the acquisition of meteorological and socio-ecological data on their transformations, encourage coordination between national or local institutions providing various sectoral urban services, and should contribute to the achievement of the objectives defined by the United Nations Framework Convention on Climate Change (UNFCCC) or the Paris Agreement, and the Sustainable Development Goals. To assess the state of the art on these various points, the Web of Science metadatabase is queried. With a query combining the keywords "climate*" and "urban*", more than 24,000 articles are identified, the source of more than 40,000 distinct keywords (including synonyms and acronyms) which finely mesh the conceptual field of research. The occurrence of one or more names of the 514 African cities of more than 100,000 inhabitants, or of African countries, reduces this base to a smaller corpus of about 1,410 articles (2,990 keywords). 41 countries and 136 African cities are cited. The lexicometric analysis of the article metadata and the analysis of structural indicators (various centralities) of the networks induced by the co-occurrence of expressions related more specifically to climate services show the development potential of these services, identify the gaps which remain to be filled for their implementation, and allow comparison of the diversity of national and regional situations with regard to these services. Keywords: African cities, climate change, climate services, integrated urban services, lexicometry, networks, urban planning, web of science
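The co-occurrence networks and centralities mentioned here can be reproduced, in outline, from article keyword lists. The sketch below is an assumed minimal pipeline using networkx; the toy keyword lists stand in for the Web of Science records, and degree/betweenness are just two of the "various centralities" such an analysis might use.

```python
import itertools
import networkx as nx

# Toy stand-ins for Web of Science author-keyword lists, one list per article.
articles = [
    ["climate services", "urban planning", "lagos"],
    ["climate services", "heat wave", "urban planning"],
    ["heat wave", "lagos", "vulnerability"],
    ["climate services", "vulnerability", "adaptation"],
]

# Keyword co-occurrence graph: an edge links two keywords appearing in the
# same article; the edge weight counts how many articles they share.
G = nx.Graph()
for kws in articles:
    for a, b in itertools.combinations(sorted(set(kws)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Structural indicators: degree and betweenness centrality per keyword.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for kw in sorted(degree, key=degree.get, reverse=True):
    print(f"{kw:20s} degree={degree[kw]:.2f} betweenness={betweenness[kw]:.2f}")
```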
Procedia PDF Downloads 195
66 Electrical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one method of load monitoring used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of the household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting areas of operation of each residential appliance based on the power demand, then detecting the times at which each selected appliance changes its state. In order to fit with the capabilities of existing smart meters in practice, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator software (LPG), which had not previously been taken into consideration for NILM purposes in the literature. LPG is a numerical software that uses behaviour simulation of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these formed signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum). Keywords: electrical disaggregation, DTW, general appliance modeling, event detection
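DTW, used above for matching appliance signatures, can be implemented in a few lines of dynamic programming. This is the textbook O(n·m) version for illustration, not the authors' implementation; the example signatures are invented.

```python
import numpy as np

def dtw_distance(s, t):
    """Dynamic Time Warping distance between two 1-D power sequences.
    dp[i, j] = minimal cost of aligning s[:i] with t[:j]."""
    n, m = len(s), len(t)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            dp[i, j] = cost + min(dp[i - 1, j],      # insertion
                                  dp[i, j - 1],      # deletion
                                  dp[i - 1, j - 1])  # match
    return dp[n, m]

# Two hypothetical appliance power signatures (W), sampled at 1/60 Hz.
observed = np.array([0, 5, 120, 125, 118, 10, 0], dtype=float)
template = np.array([0, 118, 122, 120, 0], dtype=float)
print(dtw_distance(observed, template))  # small distance -> likely same appliance
```

Because DTW tolerates stretching and compression along the time axis, it can match a template even when the appliance runs for a slightly different duration, which matters at a 1/60 Hz sampling rate.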
Procedia PDF Downloads 78
65 Numerical Analysis of Mandible Fracture Stabilization System
Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski
Abstract:
The aim of the presented work is to assess the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, modeled on the basis of a computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element grid was created with the ANSYS software, where hexahedral and tetrahedral variants of the SOLID185 element were used. A set of nonlinear contact conditions was applied to the common surfaces of the connecting devices and bone. The properties of a particular contact pair depend on the screw/mini-plate connection type and on possible gaps between the fractured bone around the osteosynthesis region. Some of the investigated cases include prestress introduced to the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were modeled with an orthotropic material. The loading was a force of magnitude 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress effect introduces additional loading, which leads to locally exceeding the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with the doubled mini-plate shows increased stress within the connector due to the overly rigid connection, where the main load path leads through the mini-plates instead of through the plates and connected bones. Clinical observations confirm more frequent plate failure in stiffer connections; some of these failures could be an effect of decreased low-cycle fatigue capability caused by the overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions shift the entire system to the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small material capability tolerances. Comparison with the clinical observations allows optimization of the entire connection to prevent future incidents. Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis
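The yield-limit checks described above reduce, for each element, to comparing an equivalent (von Mises) stress against a material limit. A minimal post-processing sketch is shown below; the stress components and the material limits are placeholder values (a typical Ti-6Al-4V yield stress and an assumed cortical bone limit), not results from the model.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent von Mises stress from the six stress components (MPa)."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Placeholder limits (MPa): titanium alloy yield and an assumed bone limit.
TITANIUM_YIELD = 880.0
BONE_LIMIT = 130.0

# Hypothetical element stresses near a screw hole, exported from the FE model.
s = von_mises(sx=420.0, sy=-60.0, sz=15.0, txy=180.0, tyz=20.0, tzx=35.0)
print(f"von Mises = {s:.0f} MPa, bone limit exceeded: {s > BONE_LIMIT}")
```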
Procedia PDF Downloads 273
64 An eHealth Intervention Using Accelerometer-Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting
Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola
Abstract:
Working in the military places special demands on physical fitness; however, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also seen among the working-age population in Finland, are leading to reduced physical fitness and an increased risk of cardiovascular and metabolic diseases, which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using an accelerometer-smartphone-app feedback technique, telephone counseling and physical activity recordings to increase the physical activity of the personnel and thereby improve their health. Specific aims were to reduce stress, improve quality of sleep, mental and physical performance and ability to work, and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study, and finally, 260 voluntary participants were included (66 women, 194 men). The participants were randomized into an intervention group (156) and a control group (104). The eHealth intervention group used accelerometers measuring daily physical activity and the duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps, giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit already offered to employees under existing FDF guidelines. To separate the exercise done during working hours from the accelerometer data, the intervention group recorded this exercise in an exercise diary. The intervention group also participated in telephone counseling about their physical activity. The control group participants, on the other hand, continued with their normal exercise routine without the accelerometer and feedback. They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged to do so, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide-ranging questionnaire focusing on sociodemographic factors, physical activity and health. In terms of results, the primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The evaluation presented here is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at baseline and at the one-year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces. Keywords: accelerometer, health, mobile applications, physical activity, physical performance
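Accelerometer feedback of the kind described is typically derived by classifying fixed-length epochs into intensity categories and summing daily minutes. The sketch below is a generic illustration; the 60 s epochs and the count cut-points are common conventions in the accelerometry literature, not the study's actual processing pipeline.

```python
import numpy as np

# Hypothetical activity counts, one value per 60-second epoch over one day.
rng = np.random.default_rng(1)
counts = rng.integers(0, 6000, size=24 * 60)

# Illustrative cut-points (counts per minute) for intensity categories.
CUTS = {"sedentary": (0, 100), "light": (100, 2020),
        "moderate": (2020, 5999), "vigorous": (5999, np.inf)}

def daily_minutes(counts):
    """Minutes per intensity category for one day of 60 s epochs."""
    return {name: int(np.sum((counts >= lo) & (counts < hi)))
            for name, (lo, hi) in CUTS.items()}

summary = daily_minutes(counts)
mvpa = summary["moderate"] + summary["vigorous"]
print(summary, f"MVPA: {mvpa} min")  # the kind of figure a companion app shows
```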
Procedia PDF Downloads 196
63 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve (I) mapping magnetic field into magnetic susceptibility and (II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem using regularization via the injection of prior belief. The end result of Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain in a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions, as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to learn iron concentrations in areas of interest directly and more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of great value in clinical studies aiming to understand the role of iron in neurological disease. Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
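A 3D convolutional U-Net of the kind described can be sketched in PyTorch. This is a deliberately small, generic version (one downsampling level, arbitrary channel widths) meant to show the structure, not the architecture the authors trained.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3D convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True))

class TinyUNet3D(nn.Module):
    """Minimal 3D U-Net: one encoder level, one decoder level, skip connection.
    Input: raw MRI volume (1 channel); output: iron concentration map (1 channel)."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool3d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec = conv_block(32, 16)
        self.out = nn.Conv3d(16, 1, kernel_size=1)

    def forward(self, x):
        e = self.enc(x)                                  # full-resolution features
        m = self.mid(self.down(e))                       # half-resolution features
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip connection
        return self.out(d)

model = TinyUNet3D()
vol = torch.randn(1, 1, 64, 64, 64)  # a small patch, not the full 640^3 grid
print(model(vol).shape)              # torch.Size([1, 1, 64, 64, 64])
```

Training such a network on synthetic volume/iron-map pairs, then applying it to measured volumes, is the single-step replacement for the two-stage QSM pipeline described in the abstract.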
Procedia PDF Downloads 136
62 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment
Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali
Abstract:
This paper discusses the transportation of magnetic drug targeting through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advancement in the field of nanotechnology in the fight against tumors, and the systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale phenomenon can be measured and modeled by identifying some parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of three types of models. The first model represents the transport of drugs by coupled parabolic partial differential equations (PDEs) in a 3D cylindrical coordinate system. This model is coupled with non-Newtonian flow equations, with blood flow as the medium of the transportation system, and with the magnetic force on the magnetic nanoparticles. The interaction between the magnetic force and the drug's magnetic properties produces induced currents, and the applied magnetic field yields forces that tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread through the tissue and access the cancer cells. The second model describes the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system requires the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic fields and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single cell density model involving three sets of first-order PDEs for proliferating, quiescent and necrotic cell densities changing over time and space in Cartesian coordinates, regulated by different rates of nutrient consumption. The model shows that proliferative and quiescent cell growth depends on certain parameter changes, while the necrotic cells emerge as the tumor core. Several numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by the Matlab and C programming languages on a single processing unit. Numerical results and analysis of the algorithms are presented in terms of informative tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical models and the comparison of numerical performance indicate a superior tool and analysis for solving the complete magnetic drug delivery system, which has significant effects on the growth of the targeted cancer cell. Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis
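The FDM discretization mentioned for the transport model can be illustrated in one dimension. The sketch below solves an advection-diffusion equation dC/dt = D d²C/dx² - v dC/dx with an explicit scheme; the coefficients, the grid, and the constant drift v standing in for the magnetic-force term are all illustrative assumptions, not the paper's parameter values.

```python
import numpy as np

# Illustrative parameters (not from the paper): diffusion D, drift v from
# the magnetic force term, grid size, and a stable explicit time step.
D, v = 1e-3, 5e-3          # diffusion coefficient, advection speed
nx, dx = 200, 0.01
dt = 0.4 * dx * dx / D     # explicit stability requires dt <= dx^2 / (2D)

c = np.zeros(nx)
c[nx // 2] = 1.0           # initial drug bolus at the domain center

def step(c):
    """One explicit FDM step: central difference diffusion, upwind advection."""
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    adv = (c - np.roll(c, 1)) / dx          # upwind scheme, valid for v > 0
    cn = c + dt * (D * lap - v * adv)
    cn[0] = cn[-1] = 0.0                    # absorbing boundaries
    return cn

for _ in range(2000):
    c = step(c)
print(f"total drug remaining in the domain: {c.sum() * dx:.4f}")
```

The full 3D cylindrical model couples such a transport equation to the flow and magnetic-force terms, but the grid-point structure and stability constraints carry over directly.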
Procedia PDF Downloads 428
60 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region
Authors: Bhakti Chitale
Abstract:
Mumbai city, the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4,035 sq. km. with a population of 22.8 million people. This population is mostly urban, with 91% living in the areas of Municipal Corporations and Councils and another 3% in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1,000 villages. On the one hand, MMR makes the highest contribution to the nation's overall economy; on the other hand, it shows the horrible and intolerable picture of about 2 million people living in slums, or without even a slum, in totally unhygienic conditions and with a total loss of hope. Future generations will be adversely affected if a solution is not worked out. This study is an attempt towards working out that solution. The Mumbai Metropolitan Region Development Authority (MMRDA) is a state government authority, specially formed to govern the development of MMR. MMRDA is engaged in long-term planning, promotion of new growth centres, implementation of strategic projects and financing infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and possible options for improvement; the author was the officer in charge of that assignment. This paper puts light on the interesting outcomes of the research study, which range from the adverse effects of government policies, the automatic responses of the housing market and effects on planning processes to the overall changing needs of housing patterns in the world due to changes in the social mechanism. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, realities and needs for innovation in housing policies all over the world. The paper further describes a few success stories and failure stories of government initiatives, with reasons. It gives a clear idea of the differences in housing needs of people from different economic groups and of the direct and indirect market pressures on low-cost housing. Surprising phenomena came to the fore, such as a large percentage of vacant houses existing in spite of the huge need. The housing market is affected by developments or other physical and financial changes taking place in nearby areas or cities, by changes in cities located far from the region, and also by international investments or policy changes. Rather than depending only on government action for the generation of affordable housing, it becomes equally important to make housing markets automatically generate such stock while remaining sustainable; that is the aim of the whole movement. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations. Keywords: Mumbai India, slum housing, region planning, market recommendations
Procedia PDF Downloads 280