Search results for: control and optimization techniques
15708 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System
Authors: Andy Alubaidy
Abstract:
In this work, a robotic additive manufacturing system (RAMS) that is capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy and virtually on any surface has been designed and built. One of the major shortcomings in existing 3D printer technology is the limitation to three DOF, which results in prolonged fabrication time. Depending on the techniques used, it usually takes at least two hours to print small objects and several hours for larger objects. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer’s workable area. They are then glued, bonded or otherwise attached to create the required object. Another shortcoming is material constraints and the need to fabricate a single part using different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is utilized along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS has the ability to 3D print virtually anywhere in six degrees of freedom with very high accuracy; it is equipped with an ABB IRB 120 robot to achieve this level of accuracy. In order to convert computer-aided design (CAD) files to a digital format that is acceptable to the robot, Hypertherm Robotic Software Inc.’s state-of-the-art slicing software called “ADDMAN” is used. ADDMAN is capable of converting any CAD file into RAPID code (the programming language for ABB robots). The robot uses the generated code to perform the 3D printing. To control the entire process, a National Instruments (NI) CompactRIO (cRIO 9074) is connected to, and communicates with, the robot and a feeding mechanism that was designed and fabricated. The feeding mechanism consists of two major parts, the cold-end and the hot-end. The cold-end consists of what is conventionally known as an extruder. Typically, a stepper motor is used to control the push on the material; however, for optimum control, a DC motor is used instead. The hot-end consists of a melt zone, nozzle, and heat-brake. The melt zone ensures a thorough melting effect and consistent output from the nozzle. Nozzles are made of brass for thermal conductivity, while the melt zone is comprised of a heating block and a ceramic heating cartridge to transfer heat to the block. The heat-brake ensures that there is no heat creep-up effect, as this would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed using NI LabVIEW, which utilizes an adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which increases or decreases the heating based on the system output. Since different materials have different melting points, our system allows us to adjust the temperature and vary the material.
Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing
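The adaptive PID temperature loop described above (thermistor feedback regulating the heating cartridge) can be sketched in a few lines of Python. The first-order thermal model, the gain-adaptation rule and every numerical value below are illustrative assumptions only; they are not taken from the authors' NI LabVIEW/cRIO implementation.

```python
class AdaptivePID:
    """PID controller whose proportional gain is scaled with the error magnitude.

    The adaptation rule (larger error -> more aggressive proportional action) is
    an assumption made for illustration, not the paper's LabVIEW logic.
    """
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        kp = self.kp * (1.0 + min(abs(error) / 50.0, 1.0))   # simple gain scheduling
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = kp * error + self.ki * self.integral + self.kd * derivative
        if 0.0 < u < 100.0:                      # clamp-style anti-windup
            self.integral += error * self.dt
        return min(max(u, 0.0), 100.0)           # heater duty cycle in percent

# Assumed first-order thermal model of the heating block.
dt, tau, gain, ambient = 0.1, 30.0, 2.0, 25.0
temperature = ambient
pid = AdaptivePID(kp=8.0, ki=0.4, kd=1.0, dt=dt)

for _ in range(3000):                            # 300 s of simulated time
    duty = pid.update(210.0, temperature)        # 210 C setpoint chosen arbitrarily
    temperature += dt / tau * (ambient - temperature + gain * duty)
print(f"block temperature after 300 s: {temperature:.1f} C")
```

Changing the setpoint in `pid.update` is the software analogue of the paper's point that different materials with different melting points only require a different temperature target, not a different controller.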
Procedia PDF Downloads 217
15707 Challenges to Tuberculosis Control in Angola: The Narrative of Medical Professionals
Authors: Domingos Vita, Patrick Brady
Abstract:
Background: There is a tuberculosis (TB) epidemic in Angola that has been worsening for more than a decade despite the active implementation of the DOTS strategy. The aim of this study was to directly interrogate healthcare workers involved in TB control on what they consider to be the drivers of the TB epidemic in Angola. Methods: Twenty-four in-depth qualitative interviews were conducted with medical staff working in this field in the provinces of Luanda and Benguela. Results: The healthcare professionals see the migrant working poor as a particular problem for the control of TB. These migrants are constructed as ‘Rural People’ and are seen as non-compliant and late-presenting. This is a stigmatized and marginal group contending with the additional stigma associated with TB infection. The healthcare professionals interviewed also see the interruption of treatment and self-medication generally as a better explanation for the TB epidemic than urbanization or lack of medication. Conclusions: The local narrative is in contrast to previous explanations used elsewhere in the developing world. To be effective, policy must recognize the local issues of the migrant workforce, interruption of treatment and the stigma associated with TB in Angola.
Keywords: Africa, Angola, migrants, qualitative, research, tuberculosis
Procedia PDF Downloads 161
15706 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DOA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing based DoA estimation methods have been proposed by researchers, and it has been discovered that Compressive Sensing (CS)-based algorithms achieve significant performance for DoA estimation even in scenarios where there are multiple coherent sources. On the other hand, the Genetic Algorithm, a method that provides a solution strategy inspired by natural selection, has been used in sparse representation problems in recent years and provides significant improvements in performance. With all of this in consideration, in this paper, a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for Direction of Arrival (DoA) estimation in the Compressive Sensing (CS) framework is proposed. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is obtained with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature.
Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
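A minimal sketch of the two-objective idea described above (l1-norm sparsity versus reconstruction loss, evolved by a GA and then reduced to a single solution by an MCDM rule) is given below. The toy sensing matrix, the mutation-only GA and the weighted-sum MCDM step are assumptions made for illustration; they stand in for the authors' actual operators and decision methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy compressive-sensing setup: y = A @ x_true + noise, x_true sparse on a DoA grid.
n_grid, n_sensors = 60, 12
A = rng.standard_normal((n_sensors, n_grid))
x_true = np.zeros(n_grid); x_true[[10, 37]] = [1.0, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(n_sensors)

def objectives(x):
    # Two conflicting objectives: sparsity (l1 norm) and reconstruction loss.
    return np.sum(np.abs(x)), np.linalg.norm(y - A @ x)

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

# Very small GA: mutation-only evolution with elitist, dominance-based survival.
pop = 0.1 * rng.standard_normal((80, n_grid))
for gen in range(100):
    children = pop + 0.05 * rng.standard_normal(pop.shape)
    children *= rng.random(pop.shape) > 0.3            # random zeroing promotes sparsity
    merged = np.vstack([pop, children])
    scores = [objectives(x) for x in merged]
    front = [i for i, s in enumerate(scores)
             if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]
    rest = sorted(set(range(len(merged))) - set(front), key=lambda i: scores[i][1])
    pop = merged[(front + rest)[:80]]

# MCDM step: a simple weighted sum on normalized objectives (TOPSIS etc. could be used).
front_scores = np.array([objectives(x) for x in pop])
norm = (front_scores - front_scores.min(0)) / (np.ptp(front_scores, axis=0) + 1e-12)
best = pop[np.argmin(0.3 * norm[:, 0] + 0.7 * norm[:, 1])]
print("largest-magnitude DoA grid indices:", np.argsort(-np.abs(best))[:2])
```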
Procedia PDF Downloads 147
15705 Jitter Based Reconstruction of Transmission Line Pulse Using On-Chip Sensor
Authors: Bhuvnesh Narayanan, Bernhard Weiss, Tvrtko Mandic, Adrijan Baric
Abstract:
This paper discusses a method to reconstruct internal high-frequency signals through subsampling techniques in an IC using an on-chip sensor. Though there are existing methods to internally probe and reconstruct high-frequency signals through subsampling techniques, these methods have been applicable mainly to synchronized systems. This paper demonstrates a method for making such non-intrusive on-chip reconstructions possible in non-synchronized systems as well. The TLP pulse is used for the experimental validation of the concept. The on-chip sensor measures the voltage at an internal node. The jitter in the input pulse causes a varying pulse delay with respect to the on-chip sampling command. By measuring this pulse delay and correlating it with the measured on-chip voltage, time-domain waveforms can be reconstructed, and the influence of the pulse on the internal nodes can be better understood.
Keywords: on-chip sensor, jitter, transmission line pulse, subsampling
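The reconstruction principle described above (correlating the measured pulse delay of each repetition with the voltage sampled on-chip, i.e., equivalent-time sampling driven by jitter) can be illustrated with the following sketch. The waveform shape, jitter distribution and timing values are assumed for demonstration and do not come from the measured TLP setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical internal waveform that the on-chip sensor samples (time in ns).
def internal_node_voltage(t_ns):
    return np.where(t_ns < 0, 0.0, 1.0 - np.exp(-np.maximum(t_ns, 0.0) / 5.0))

# Each TLP repetition arrives with a random delay (jitter) relative to the fixed
# on-chip sampling command; the tester records that delay and one voltage sample.
n_rep, sample_command_ns = 5000, 20.0
pulse_delay = rng.uniform(0.0, 15.0, n_rep)                  # measured per repetition
sampled_v = internal_node_voltage(sample_command_ns - pulse_delay)
sampled_v += 0.01 * rng.standard_normal(n_rep)               # sensor noise (assumed)

# Reconstruction: the sampling instant of each repetition relative to the pulse is
# known from the measured delay, so binning by it yields an equivalent-time waveform.
t_rel = sample_command_ns - pulse_delay
bins = np.linspace(t_rel.min(), t_rel.max(), 16)
which = np.digitize(t_rel, bins)
for b in range(1, len(bins)):
    in_bin = which == b
    if in_bin.any():
        print(f"t = {bins[b - 1]:5.1f} ns   reconstructed V = {sampled_v[in_bin].mean():.3f}")
```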
Procedia PDF Downloads 146
15704 Integrated Thermal Control to Improve Workers' Intellectual Concentration in Office Environment
Authors: Kimi Ueda, Kosuke Sugita, Soma Kawamoto, Hiroshi Shimoda, Hirotake Ishii, Fumiaki Obayashi, Kazuhiro Taniguchi, Ayaka Suzuki
Abstract:
The authors have focused on the thermal difference between office rooms and break rooms, and proposed an integrated thermal control method to improve workers’ intellectual concentration. First, a trial experiment was conducted to verify the effect of temperature difference on workers’ intellectual concentration using two experimental rooms: a thermally neutral break room and a cooler office room. As the result of the experiment, it was found that the thermal difference had a significant effect on improving their intellectual concentration. Workers, however, often take a short break at their desks without moving to a break room, so that the thermal difference cannot be given to them. Therefore, utilization of airflow was proposed as an integrated thermal control method instead of the temperature difference to realize a similar effect. Concretely, workers are exposed to airflow when working in order to reduce their effective temperature, while the airflow is weakened when they take a break. Another experiment was conducted to confirm the effect of the airflow control on their intellectual concentration. As the result of a concentration index and a questionnaire survey, their intellectual concentration was significantly improved in the integrated thermal controlled environment. It was also found that most of them felt more comfortable and had higher motivation and a higher degree of concentration in that environment.
Keywords: airflow, evaluation experiment, intellectual concentration, thermal difference
Procedia PDF Downloads 294
15703 Autobiographical Memory Functions and Perceived Control in Depressive Symptoms among Young Adults
Authors: Meenu S. Babu, K. Jayasankara Reddy
Abstract:
Depression is a serious mental health concern that leads to significant distress and dysfunction in an individual. Due to the high physical, psychological, social, and economic burden it causes, it is important to study the various bio-psycho-social factors that influence the onset, course, duration, and intensity of depressive symptoms. The study aims to explore the relationship between autobiographical memory (AM) functions, perceived control over stressful events, and depressive symptoms. AM functions and perceived control were both found to be protective factors against depression and were both modifiable to predict better behavioral and affective outcomes. An extensive review of the literature, with a systematic search of the Google Scholar, JSTOR, Science Direct and Springer Journals databases, was conducted for the purpose of this review paper. The same search was applied across all the aforementioned databases. The time frame used for the search was 2010-2021. An additional search was conducted with no time bar to map the development of the theoretical concepts. Relevant studies with quantitative, qualitative, experimental, and quasi-experimental research designs were included in the review. Studies including a sample with a DSM-5 or ICD-10 diagnosis of depressive disorders were excluded from the study to focus on the behavioral patterns in a non-clinical population. The synthesis of the findings obtained from the review indicates that there is a significant relationship between the cognitive variables of AM functions and perceived control and depressive symptoms. AM functions were found to have significant effects on one’s sense of self, interpersonal relationships, decision making, and self-continuity, and were related to better emotion regulation and lower depressive symptoms. Not all the components of AM function were equally significant in their relationships with various depressive symptoms. While the self and directive functions were more related to emotion regulation, anhedonia, motivation, and hence mood and affect, the social function was related to perceived social support and social engagement. Perceived control was found to be another protective cognitive factor that provides individuals a sense of agency and control over their life outcomes, which was found to be low in individuals with depression. It was also associated with locus of control, competency beliefs, contingency beliefs and subjective well-being in individuals, and these acted as protective factors against depressive symptoms. AM and perceived control over stressful events serve adaptive functions; hence it is imperative to study these variables more extensively. They can be instrumental in planning and implementing therapeutic interventions that foster these cognitive protective factors to mitigate or alleviate depressive symptoms. Exploring AM as a determining factor in depressive symptoms along with perceived control over stress creates a bridge between the biological and cognitive factors underlying depression and increases the scope for developing a more eclectic and effective treatment plan for individuals. As culture plays a crucial role in AM functions as well as in certain aspects of control, such as locus of control, it is necessary to study these variables with the cultural context in mind in order to tailor culture- and community-specific interventions for depression.
Keywords: autobiographical memories, autobiographical memory functions, perceived control, depressive symptoms, depression, young adults
Procedia PDF Downloads 104
15702 An Overview of Corroded Pipe Repair Techniques Using Composite Materials
Authors: Lim Kar Sing, Siti Nur Afifah Azraai, Norhazilan Md Noor, Nordin Yahaya
Abstract:
Polymeric composites are being increasingly used as repair materials for critical infrastructure such as buildings, bridges, pressure vessels, piping and pipelines. The technique used in repairing damaged pipes is one of the major concerns of pipeline owners. Considerable research has been carried out on the repair of corroded pipes using composite materials. This article attempts a short review of the subject matter to provide insight into the various techniques used in repairing corroded pipes, focusing on a wide range of composite repair systems. These systems include pre-cured layered, flexible wet lay-up, pre-impregnated, split composite sleeve and flexible tape systems. Both the advantages and limitations of these repair systems are highlighted. Critical technical aspects are discussed through the current standards and practices. Research gaps and future study scopes for achieving a more effective design philosophy are also presented.
Keywords: composite materials, pipeline, repair technique, polymers
Procedia PDF Downloads 510
15701 Reducing the Frequency of Flooding Accompanied by Low pH Wastewater in the 100/200 Unit of Phosphate Fertilizer 1 Plant by Implementing the 3R Program (Reduce, Reuse and Recycle)
Authors: Pradipta Risang Ratna Sambawa, Driya Herseta, Mahendra Fajri Nugraha
Abstract:
In 2020, PT Petrokimia Gresik implemented a program to increase the ROP (Run of Pile) production rate at the Phosphate Fertilizer 1 plant, causing an increase in scrubbing water consumption in the 100/200 area unit. This increase in water consumption causes a higher discharge of wastewater, which can in turn cause local flooding, especially during the rainy season. The 100/200 area of the Phosphate Fertilizer 1 plant is close to the warehouse and is often a passing area for trucks transporting raw materials. This causes the pH of the wastewater to become acidic (at the worst point, down to pH 1). The problem of flooding and exposure to acidic wastewater in the 100/200 area of Phosphate Fertilizer Plant 1 was then resolved by PT Petrokimia Gresik through wastewater optimization steps called the 3R program (Reduce, Reuse, and Recycle). The 3R program consists of a water consumption reduction program that considers the liquid/gas ratio in the scrubbing unit of the 100/200 Phosphate Fertilizer 1 plant, the creation of a wastewater interconnection line so that wastewater from unit 100/200 can be used as scrubbing water in the Phonska 1, Phonska 2, Phonska 3 and unit 300 Phosphate Fertilizer 1 plants, and an increase in scrubbing effectiveness through scrubbing effectiveness simulations. Through this series of wastewater optimization programs, PT Petrokimia Gresik has succeeded in reducing NaOH consumption for neutralization by up to 2,880 kg/day, equivalent to savings of up to 314,359.76 dollars/year, and reducing process water consumption by up to 600 m3/day, equivalent to savings of up to 63,739.62 dollars/year.
Keywords: fertilizer, phosphate fertilizer, wastewater, wastewater treatment, water management
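As a quick consistency check of the reported figures, the short sketch below derives the implied unit prices of NaOH and process water; converting the daily reductions to annual savings over a 365-day operating year is an assumption made here for illustration and is not a figure from the study.

```python
# Implied unit costs from the reported savings (assuming 365 operating days/year).
naoh_saving_usd, naoh_kg_per_day = 314_359.76, 2_880
water_saving_usd, water_m3_per_day = 63_739.62, 600

print(f"implied NaOH cost : {naoh_saving_usd / (naoh_kg_per_day * 365):.3f} USD/kg")
print(f"implied water cost: {water_saving_usd / (water_m3_per_day * 365):.3f} USD/m3")
```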
Procedia PDF Downloads 27
15700 Supramolecular Chemistry and Packing of FAMEs in the Liquid Phase for Optimization of Combustion and Emission
Authors: Zeev Wiesman, Paula Berman, Nitzan Meiri, Charles Linder
Abstract:
Supramolecular chemistry refers to the domain of chemistry beyond that of molecules and focuses on chemical systems made up of a discrete number of assembled molecular subunits or components. The self-arrangement of biodiesel components is closely related to, and affects, their physical properties in combustion systems and their emissions. Due to technological difficulties, knowledge regarding the molecular packing of FAMEs (biodiesel) in the liquid phase is limited. Spectral tools such as X-ray and NMR are known to provide evidence related to molecular structure organization. Recently, it was reported by our research group that using a 1H Time Domain NMR methodology based on relaxation times and self-diffusion coefficients, FAME clusters with different mobilities can be accurately studied in the liquid phase. Head-to-head dimerization with a quasi-smectic cluster organization, based on molecular motion analysis, was clearly demonstrated. These findings about the assembly/packing of the FAME components are directly associated with the fluidity/viscosity of the biodiesel. Furthermore, these findings may provide information on the micro/nano-particles that are formed in the delivery and injection system of various combustion systems (affected by thermodynamic conditions). Various parameters relevant to combustion, such as distillation/liquid-gas phase transition, cetane number/ignition delay, soot, and oxidation/NOx emission, may be predicted. These data may open the window for further optimization of FAME/diesel mixtures in terms of combustion and emission.
Keywords: supramolecular chemistry, FAMEs, liquid phase, fluidity, LF-NMR
Procedia PDF Downloads 341
15699 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers
Authors: Marcus Klein, Martina GrießBach, Richard Kupke
Abstract:
The current advances in the research and manufacturing of large area graphene layers are promising for the introduction of this exciting material in the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies in the fabrication of flexible displays, touch screens or printed electronics apply graphene layers on non-metal substrates and bring new challenges to the required metrology. Traditional measurement concepts of layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required to meet these challenges and the foreseeable inline production of large area graphene. Dedicated non-contact measurement sensors are a pioneering method to address these issues in a large variety of applications, while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy over a wide measurement range at very high resolution. Large area graphene mappings are applied for process optimization and for efficient quality control of transfer, doping, annealing and stacking processes. Examples of doped, defective and excellent graphene are presented as quality images, and the implications for manufacturers are explained.
Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology
Procedia PDF Downloads 307
15698 An Agile, Intelligent and Scalable Framework for Global Software Development
Authors: Raja Asad Zaheer, Aisha Tanveer, Hafza Mehreen Fatima
Abstract:
Global Software Development (GSD) is becoming a common norm in the software industry, despite the fact that the global distribution of teams presents special issues for their effective communication and coordination. Trends are now changing, and project management for distributed teams is no longer in limbo. GSD can be effectively established using agile methods, and project managers can use different agile techniques/tools to solve the problems associated with distributed teams. Agile methodologies like Scrum and XP have been successfully used with distributed teams. We have employed an exploratory research method to analyze different recent studies related to the challenges of GSD and their proposed solutions. In our study, we gained deep insight into six commonly faced challenges: communication and coordination, temporal differences, cultural differences, knowledge sharing/group awareness, speed, and communication tools. We have established that none of these challenges can be neglected for distributed teams of any kind. They are interlinked and, as an aggregated whole, can cause the failure of projects. In this paper we have focused on creating a scalable framework for detecting and overcoming these commonly faced challenges. In the proposed solution, our objective is to suggest agile techniques/tools relevant to a particular problem faced by organizations in the management of distributed teams. We focused mainly on Scrum and XP techniques/tools because they are widely accepted and used in the industry. Our solution identifies the problem and suggests an appropriate technique/tool to help solve it, based on a globally shared knowledgebase. We can establish a cause and effect relationship using a fishbone diagram based on the inputs provided for issues commonly faced by organizations; based on the identified cause, our framework then suggests a suitable tool. Hence, the proposed scalable, extensible, self-learning, intelligent framework will help implement and assess GSD to achieve the maximum out of it. The globally shared knowledgebase will help new organizations easily adapt the best practices set forth by practicing organizations.
Keywords: agile project management, agile tools/techniques, distributed teams, global software development
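A minimal sketch of how such a challenge-to-practice lookup over a shared knowledgebase could work is shown below. The mapping, the keyword matching and the practice names are placeholders invented for illustration; they are not the framework's actual knowledgebase or inference mechanism.

```python
# Illustrative knowledgebase: reported GSD challenge -> candidate Scrum/XP practices.
KNOWLEDGEBASE = {
    "communication and coordination": ["daily scrum over video", "shared product backlog"],
    "temporal differences": ["overlapping core hours", "asynchronous sprint reviews"],
    "cultural differences": ["team charter", "rotation of on-site visits"],
    "knowledge sharing": ["pair programming across sites", "team wiki"],
    "speed": ["continuous integration", "small user stories"],
    "communication tools": ["issue tracker integration", "persistent chat channels"],
}

def suggest_tools(reported_issue: str) -> list[str]:
    """Return candidate practices whose challenge keywords appear in the issue text."""
    issue = reported_issue.lower()
    hits = [tools for challenge, tools in KNOWLEDGEBASE.items()
            if any(word in issue for word in challenge.split() if len(word) > 3)]
    return sorted({tool for tools in hits for tool in tools})

print(suggest_tools("Teams in different time zones struggle with coordination"))
```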
Procedia PDF Downloads 314
15697 An Autonomous Space Debris-Removal System for Effective Space Missions
Authors: Shriya Chawla, Vinayak Malhotra
Abstract:
Space exploration has seen an exponential rise in the past two decades. The world has started probing alternatives for efficient and resourceful sustenance along with the utilization of advanced technology, viz., satellites, on earth. Space propulsion forms the core of space exploration. Of all the issues encountered, space debris has increasingly threatened space exploration and propulsion. These exploration efforts have resulted in the presence of disastrous space debris fragments orbiting the earth at speeds of up to several kilometres per second. Debris is well known as a potential source of damage to future missions, with an immense loss of resources and risk to human life, and a huge amount of money is invested in active research on it. Appreciable work has been done in the past relating to active space debris-removal technologies such as harpoons, nets, and drag sails. The primary emphasis is laid on confined removal. Recently, the RemoveDEBRIS spacecraft was used for servicing and capturing cargo ships. Airbus designed and planned the debris-catching net experiment aboard the spacecraft. The spacecraft represents the largest payload deployed from the space station. However, the magnitude of the issue suggests that active space debris-removal technologies, such as harpoons and nets, still would not be enough, thus necessitating a better and more effective space debris removal system. Techniques based on diverting the path of the debris or the spacecraft to avert damage have seen minimal use owing to limited prediction capability. The present work focuses on an active hybrid space debris removal system. The work is motivated by the need for safer and more efficient space missions. The specific objectives of the work are 1) to thoroughly analyse the existing and conventional debris removal techniques, their working, effectiveness and limitations under varying conditions, and 2) to understand the role of key controlling parameters in the coupled operation of debris capturing and removal. The system represents the utilization of the latest autonomous technology available with an adaptable structural design for operations under varying conditions. The design incorporates the advantages of most of the existing technologies while removing their disadvantages. The system is likely to enhance the probability of effective space debris removal. At present, a systematic theoretical study is being carried out to thoroughly observe the effects of pseudo-random debris occurrences and to arrive at an optimal design with much better features and control.
Keywords: space exploration, debris removal, space crafts, space accidents
Procedia PDF Downloads 169
15696 Learning Object Interface Adapted to the Learner's Learning Style
Authors: Zenaide Carvalho da Silva, Leandro Rodrigues Ferreira, Andrey Ricardo Pimentel
Abstract:
Learning styles (LS) refer to the ways and forms in which the student prefers to learn in the teaching and learning process. Each student has their own way of receiving and processing information throughout the learning process. Therefore, knowing their LS is important to better understand their individual learning preferences, and also to understand why the use of some teaching methods and techniques gives better results with some students while with others it does not. We believe that knowledge of these styles enables the possibility of making propositions for teaching, and thus reorganizing teaching methods and techniques in order to allow learning that is adapted to the individual needs of the student. Adapting learning would be possible through the creation of online educational resources adapted to the style of the student. In this context, this article presents the structure of a learning object interface adaptation based on the LS. The structure created should enable the creation of learning objects adapted to the student's LS and contribute to increasing the student's motivation in the use of a learning object as an educational resource.
Keywords: adaptation, interface, learning object, learning style
Procedia PDF Downloads 406
15695 Redesigning the Plant Distribution of an Industrial Laundry in Arequipa
Authors: Ana Belon Hercilla
Abstract:
The study was developed at the “Reactivos Jeans” company, in the city of Arequipa, whose main business is the laundering of garments at an industrial level. In 2012 the company initiated actions to provide a dry-cleaning service for alpaca fiber garments, recognizing that this item is in a growth phase in Peru. Additionally, this company took the initiative to use a new greenwashing technology which has not yet been developed in the country. To accomplish this, a redesign of both the process and the plant layout was required. For redesigning the plant, the methodology used was Systematic Layout Planning, which allowed this study to be divided into four stages. The first stage is the gathering of information and the evaluation of the initial situation of the company, for which a description of the areas, facilities and initial equipment, the distribution of the plant, the production process and the flows of the major operations was made. The second stage is the development of engineering techniques that allow the logging and analysis of procedures, such as the Flow Diagram, the Route Diagram, the DOP (process flowchart) and the DAP (analysis diagram). Then the planning of the general distribution is carried out. At this stage, proximity factors of the areas are established, and the Paths Diagram (TRA) and the Relational Diagram of Activities (DRA) are developed. In order to obtain the General Grouping Diagram (DGC), further information is gathered through a time study, and the Guerchet method is used to calculate the space requirements for each area. Finally, the plant layout redesign is presented and the improvement is implemented, making it possible to obtain a model much more efficient than the initial design. The results indicate that the implementation of the new machinery, the adaptation of the plant facilities and the relocation of equipment resulted in a reduction of the production cycle time by 75.67%; routes were reduced by 68.88%, the number of activities during the process was reduced by 40%, and waiting and storage were eliminated entirely (100%).
Keywords: redesign, time optimization, industrial laundry, greenwashing
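For readers unfamiliar with the Guerchet method mentioned above, the sketch below shows the usual surface calculation (static, gravitation and evolution surfaces). The machine dimensions, access sides and the height ratio K are placeholder values and are not the figures measured at “Reactivos Jeans”.

```python
# Rough sketch of a Guerchet space-requirement calculation.
def guerchet_area(length_m, width_m, n_sides, n_units, k):
    ss = length_m * width_m          # static surface of one unit
    sg = ss * n_sides                # gravitation surface (operator/material access)
    se = k * (ss + sg)               # evolution surface (circulation allowance)
    return n_units * (ss + sg + se)

k = 1.5 / (2 * 1.2)                  # K = h_mobile / (2 * h_fixed), assumed heights
machines = {                         # name: (L, W, access sides, units) -- assumed
    "washing machine": (1.8, 1.2, 2, 4),
    "dryer":           (1.5, 1.0, 1, 3),
    "finishing table": (2.0, 0.9, 4, 2),
}
total = sum(guerchet_area(*dims, k) for dims in machines.values())
print(f"estimated required floor area: {total:.1f} m2")
```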
Procedia PDF Downloads 394
15694 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products for transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature. The increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for the optimization of the pricing policy for remanufactured products, which maximizes total profit and minimizes product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive study of the quantitative evaluation and performance of the model has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
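The strategic-planning step described above can be illustrated with a plain transportation-style linear program. The sketch below uses SciPy's linprog with invented costs, supplies and capacities; it is a simplified stand-in for the study's physical-programming formulation and Boston-area data.

```python
from scipy.optimize import linprog
import numpy as np

cost = np.array([[4.0, 6.0, 9.0],      # cost[i, j]: center i -> facility j ($/unit)
                 [5.0, 3.0, 7.0],
                 [8.0, 5.0, 4.0]])
supply = np.array([120, 80, 100])      # end-of-life units collected at each center
capacity = np.array([150, 90, 110])    # remanufacturing capacity at each facility

n_c, n_f = cost.shape
A_eq = np.zeros((n_c, n_c * n_f))      # each center ships out exactly its supply
for i in range(n_c):
    A_eq[i, i * n_f:(i + 1) * n_f] = 1.0
A_ub = np.zeros((n_f, n_c * n_f))      # facility intake must not exceed capacity
for j in range(n_f):
    A_ub[j, j::n_f] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=capacity,
              A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(n_c, n_f).round(1), "total transport cost:", round(res.fun, 1))
```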
Procedia PDF Downloads 160
15693 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE and 4) dispatchable power sources, such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries for the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. This model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, as well as ignoring the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper aims to explicitly account for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level of uncertainty is addressed by developing probability distributions for future weather data, and thus expected power output from RE technologies, rather than known future power output. The latter level of uncertainty is operationalized by introducing a Conditional Value at Risk (CVaR) constraint in the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as either complements or substitutes to each other. The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
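The CVaR constraint mentioned above can be illustrated with a short scenario-based calculation: the CVaR of the unmet demand at level alpha is the mean shortfall in the worst alpha-fraction of hours. The synthetic load and renewable-output series below are stand-ins for the NASA-derived weather data and ENTSO-E load data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
load = 60 + 10 * rng.standard_normal(8760)              # GW, hourly demand scenarios
renewables = 70 * rng.beta(2.0, 2.5, 8760)              # GW, weather-driven supply
shortfall = np.maximum(load - renewables, 0.0)          # unmet demand per hour

def cvar(losses, alpha):
    var = np.quantile(losses, 1 - alpha)                # Value at Risk threshold
    return losses[losses >= var].mean()                 # mean of the worst alpha tail

for alpha in (0.01, 0.05, 0.10):
    print(f"CVaR at {alpha:.0%}: {cvar(shortfall, alpha):.2f} GW")
```

In a portfolio optimization, a constraint such as CVaR(shortfall) <= threshold would then be imposed for the chosen risk level while minimizing system cost.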
Procedia PDF Downloads 170
15692 Energy Audit and Renovation Scenarios for a Historical Building in Rome: A Pilot Case Towards the Zero Emission Building Goal
Authors: Domenico Palladino, Nicolandrea Calabrese, Francesca Caffari, Giulia Centi, Francesca Margiotta, Giovanni Murano, Laura Ronchetti, Paolo Signoretti, Lisa Volpe, Silvia Di Turi
Abstract:
The aim to achieve a fully decarbonized building stock by 2050 stands as one of the most challenging issues within the spectrum of energy and climate objectives. Numerous strategies are imperative, particularly emphasizing the reduction and optimization of energy demand. Ensuring the high energy performance of buildings emerges as a top priority, with measures aimed at cutting energy consumptions. Concurrently, it is imperative to decrease greenhouse gas emissions by using renewable energy sources for the on-site energy production, thereby striving for an energy balance leading towards zero-emission buildings. Italy's predominant building stock comprises ancient buildings, many of which hold historical significance and are subject to stringent preservation and conservation regulations. Attaining high levels of energy efficiency and reducing CO2 emissions in such buildings poses a considerable challenge, given their unique characteristics and the imperative to adhere to principles of conservation and restoration. Additionally, conducting a meticulous analysis of these buildings' current state is crucial for accurately quantifying their energy performance and predicting the potential impacts of proposed renovation strategies on energy consumption reduction. Within this framework, the paper presents a pilot case in Rome, outlining a methodological approach for the renovation of historic buildings towards achieving Zero Emission Building (ZEB) objective. The building has a mixed function with offices, a conference hall, and an exposition area. The building envelope is made of historical and precious materials used as cladding which must be preserved. A thorough understanding of the building's current condition serves as a prerequisite for analyzing its energy performance. This involves conducting comprehensive archival research, undertaking on-site diagnostic examinations to characterize the building envelope and its systems, and evaluating actual energy usage data derived from energy bills. Energy simulations and audit are the first step in the analysis with the assessment of the energy performance of the actual current state. Subsequently, different renovation scenarios are proposed, encompassing advanced building techniques, to pinpoint the key actions necessary for improving mechanical systems, automation and control systems, and the integration of renewable energy production. These scenarios entail different levels of renovation, ranging from meeting minimum energy performance goals to achieving the highest possible energy efficiency level. The proposed interventions are meticulously analyzed and compared to ascertain the feasibility of attaining the Zero Emission Building objective. In conclusion, the paper provides valuable insights that can be extrapolated to inform a broader approach towards energy-efficient refurbishment of historical buildings that may have limited potential for renovation in their building envelopes. By adopting a methodical and nuanced approach, it is possible to reconcile the imperative of preserving cultural heritage with the pressing need to transition towards a sustainable, low-carbon future.Keywords: energy conservation and transition, energy efficiency in historical buildings, buildings energy performance, energy retrofitting, zero emission buildings, energy simulation
Procedia PDF Downloads 68
15691 Benefits of Therapeutic Climbing on Multiple Components of Attention in Attention Deficit Hyperactivity Disorder Children
Authors: Elaheh Hosseini, Otmar Bock, Monika Thomas
Abstract:
The purpose of the present study was to determine the effect of climbing therapy on the components of attention of children with attention-deficit hyperactivity disorder (ADHD). Forty children with ADHD were assigned to either an intervention group or a control group. The intervention group participated in a climbing therapy program for ten weeks, whereas no intervention was administered to the control group. Both groups were then assessed with the same battery of attention tests used in our earlier study. We found that, compared to the control group, performance was higher in the intervention group on tests of sustained, divided and distributed attention, on all four tests. The intervention group showed a significant improvement in the components of attention after ten weeks. From this we conclude that climbing therapy can improve the attention of children with ADHD and can be considered a promising intervention and a standalone treatment for children with ADHD.
Keywords: ADHD, climbing therapy, distributed attention, divided attention, selective attention, sustained attention
Procedia PDF Downloads 162
15690 Numerical Simulation of the Effect of Single and Dual Synthetic Jet on Stall Phenomenon on NACA (National Advisory Committee for Aeronautics) GA(W)-2 Airfoil
Authors: Abbasali Abouei Mehrizi, Hamid Hassanzadeh Afrouzi
Abstract:
Reducing the drag force increases the efficiency of the aircraft and improves its performance. Flow control methods delay the phenomenon of flow separation and consequently reduce the reversed flow in the separation region, enhancing the lift force while decreasing the drag force and thus improving the aircraft's efficiency. Flow control methods can be divided into active and passive types. The synthetic jet actuator (SJA) used in this study for the NACA GA(W)-2 airfoil is one of the active flow control methods for preventing the stall phenomenon on the airfoil. In this research, the airfoil at different angles of attack, with and without jets, has been compared using OpenFOAM. Also, after identifying the proper SJA position on the airfoil suction surface, the simultaneous effect of two SJAs has been discussed. The SJA was found to have the best effect at 12% chord (C), close to the airfoil’s leading edge (LE). At 12% chord, the SJA decreases the drag significantly while increasing lift, and the average lift increase was higher than in the other configurations, at 10.4%. The highest drag reduction was about 5% at SJA = 0.25C. Then, due to the positive effects of the SJA in the 12% and 25% chord regions, these regions were considered for applying dual jets at two post-stall angles of attack, i.e., 16° and 22°.
Keywords: active and passive flow control methods, computational fluid dynamics, flow separation, synthetic jet
Procedia PDF Downloads 83
15689 Effect of Women's Autonomy on Unmet Need for Contraception and Family Size in India
Authors: Anshita Sharma
Abstract:
India is one of the countries that initiated family planning with the intention of controlling its growing population by reducing fertility. To this end, India introduced the National Family Planning Programme in 1952. The level of unmet need in India shows a declining trend with the increasing effectiveness of family planning services: in NFHS-1 the unmet need for limiting, spacing and in total was 46 percent, 14 percent and 9 percent, respectively. The demand for spacing had reduced to 8 percent, with 8 percent for limiting, and the total unmet need was 16 percent in NFHS-2. The total unmet need reduced further to 13 percent in NFHS-3 for all currently married women, with the demand for limiting and spacing at 7 percent and 6 percent, respectively. Despite this progress, there is a section of women who are deprived of the means to control unintended and unwanted pregnancies. The present paper examines the socio-cultural, economic and demographic correlates of unmet need for contraception in India. It also examines the effect of women's autonomy and unmet need for contraception on family size among different socio-economic groups of the population. It uses data from the National Family Health Survey-3 carried out in 2005-06 and employs bivariate and multivariate techniques for analysis. A multiple regression analysis was done to determine the level and direction of the relationships among various socio-economic and demographic factors. The results reveal that women with a higher level of education and economic status have a low level of unmet need for family planning. Women living in non-nuclear families have a high unmet need for spacing, women living in nuclear families have a high unmet need for limiting, and the family size of women in nuclear families is slightly higher. In India, the level of autonomy varies at different points in life; usually, women of higher age enjoy greater autonomy than the junior female members of the family. The findings show that women with higher autonomy have a larger family size, in contrast to women with low autonomy, who have a smaller family size. Unmet need for family planning decreases with women's increasing exposure to mass media. Demographic factors such as the experience of child loss are directly related to family size. Women who experience higher child loss have a low unmet need for spacing and limiting. Thus, it is established that women's autonomy plays a substantial role in fulfilling the demand for contraception for limiting and spacing, which affects family size.
Keywords: family size, socio-economic correlates, unmet need for limiting, unmet need for spacing, women's autonomy
Procedia PDF Downloads 267
15688 Analog Voltage Inverter Drive for Capacitive Load with Adaptive Gain Control
Authors: Sun-Ki Hong, Yong-Ho Cho, Ki-Seok Kim, Tae-Sam Kang
Abstract:
A piezoelectric actuator is treated as an RC load when it is modeled electrically. For some piezoelectric actuator applications, an arbitrary voltage is required for actuation. Especially for unidirectional arbitrary voltage driving, such as a sine wave, a special inverter with a circuit that can charge and discharge the capacitive energy can be used. In this case, the difference between the power supply level and the target voltage level for the RC load varies. Because the control gain is constant, the controlled output is not uniform across this voltage difference. In this paper, for a charge and discharge circuit for unidirectional arbitrary voltage driving of a piezoelectric actuator, the controller gain is adjusted according to the voltage difference. With the proposed simple idea, the load voltage can be controlled smoothly even though the voltage difference varies. The appropriateness of the approach is demonstrated through simulation of the proposed circuit.
Keywords: analog voltage inverter, capacitive load, gain control, dc-dc converter, piezoelectric, voltage waveform
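The gain-adaptation idea described above can be sketched as follows: a proportional gain is scaled by the available supply headroom while an RC load is driven toward a unidirectional sine reference. The averaged converter model, the adaptation law and all component values are assumptions for illustration and do not reproduce the paper's circuit.

```python
import math

v_supply, r, c, dt = 200.0, 50.0, 2e-6, 1e-5      # supply, series R, load C, time step
k_nominal, v_load = 5.0, 0.0

for n in range(20_000):                            # 0.2 s of simulated time
    t = n * dt
    v_target = 80.0 * (1 + math.sin(2 * math.pi * 10 * t)) / 2   # unidirectional sine
    error = v_target - v_load
    headroom = max(v_supply - v_load, 1.0)
    duty = min(max(k_nominal * error / headroom, 0.0), 1.0)      # gain scaled by headroom
    # Averaged converter model: charge through R when switched on, discharge path
    # (assumed resistive) when off.
    i_charge = duty * (v_supply - v_load) / r
    i_discharge = (1 - duty) * v_load / r
    v_load += (i_charge - i_discharge) * dt / c
print(f"load voltage at end of run: {v_load:.1f} V (target {v_target:.1f} V)")
```

Dividing the nominal gain by the headroom is what keeps the effective loop behaviour roughly uniform as the difference between the supply and the load voltage changes.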
Procedia PDF Downloads 655
15687 Influence of Chelators, Zn Sulphate and Silicic Acid on Productivity and Meat Quality of Fattening Pigs
Authors: A. Raceviciute-Stupeliene, V. Sasyte, V. Viliene, V. Slausgalvis, J. Al-Saifi, R. Gruzauskas
Abstract:
The objective of this study was to investigate the influence of special additives, namely chelators, zinc sulphate and silicic acid, on the productivity parameters, carcass characteristics and meat quality of fattening pigs. The test started with 40-day-old fattening pigs (mongrel (mother) and Yorkshire (father)) and lasted up to 156 days of age. During the fattening period, 32 pigs were divided into 2 groups (control and experimental) with 4 replicates (a total of 8 pens). The pigs were fed ad libitum for 16 weeks with a standard wheat-barley-soybean meal compound (control group) supplemented with chelators, zinc sulphate and silicic acid (dosage 2 kg/t of feed, experimental group). Meat traits in live pigs were measured with the ultrasonic equipment Piglog 105. The results obtained throughout the experimental period suggest that supplementation with chelators, zinc sulphate and silicic acid tended to positively affect the average daily gain and feed conversion ratio of pigs for fattening (p < 0.05). Evaluation of the pigs with the Piglog 105 showed that the thickness of fat at the first and second points was 4% and 3% higher, respectively, in comparison to the control group (p < 0.05). Carcass weight, yield, and length, as well as fat thickness, showed no significant difference among the groups. The water holding capacity of the meat in the experimental group was lower by 5.28%, and tenderness was lower by 12%, compared with that of the pigs in the control group (p < 0.05). Regarding the chemical composition of the meat of the pigs in the experimental group, no statistically significant difference from the control group was determined. The cholesterol concentration in the muscles of pigs fed diets supplemented with chelators, zinc sulphate and silicic acid was lower by 7.93 mg/100 g of muscle in comparison to that of the control group. These results suggest that supplementation with chelators, zinc sulphate and silicic acid in the feed for fattening pigs had a significant effect on the pigs' growth performance and meat quality.
Keywords: silicic acid, chelators, meat quality, pigs, zinc sulphate
Procedia PDF Downloads 180
15686 Research on the Function Optimization of China-Hungary Economic and Trade Cooperation Zone
Authors: Wenjuan Lu
Abstract:
China and Hungary have risen from a friendly and comprehensive cooperative relationship to a comprehensive strategic partnership in recent years, and the economic and trade relations between the two countries have developed smoothly. As an important country along the ‘Belt and Road’, Hungary and China have strong economic complementarities and have unique advantages in carrying China's industrial transfer and economic transformation and development. The construction of the China-Hungary Economic and Trade Cooperation Zone, which was initiated by the ‘Sino-Hungarian Borsod Industrial Zone’ and the ‘Hungarian Central European Trade and Logistics Cooperation Park’ has promoted infrastructure construction, optimized production capacity, promoted industrial restructuring, and formed brand and agglomeration effects. Enhancing the influence of Chinese companies in the European market has also promoted economic development in Hungary and even in Central and Eastern Europe. However, as the China-Hungary Economic and Trade Cooperation Zone is still in its infancy, there are still shortcomings such as small scale, single function, and no prominent platform. In the future, based on the needs of China's cooperation with ‘17+1’ and China-Hungary cooperation, on the basis of appropriately expanding the scale of economic and trade cooperation zones and appropriately increasing the number of economic and trade cooperation zones, it is better to focus on optimizing and adjusting its functions and highlighting different economic and trade cooperation. The differentiated function of the trade zones strengthens the multi-faceted cooperation of economic and trade cooperation zones and highlights its role as a platform for cooperation in information, capital, and services.Keywords: ‘One Belt, One Road’ Initiative, China-Hungary economic and trade cooperation zone, function optimization, Central and Eastern Europe
Procedia PDF Downloads 180
15685 A User-Directed Approach to Optimization via Metaprogramming
Authors: Eashan Hatti
Abstract:
In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance-high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant of an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.Keywords: optimization, metaprogramming, logic programming, abstraction
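The idea of optimizations written as metaprograms over program syntax can be illustrated, very loosely, with a few rewrite rules over a toy expression AST. The Python below only conveys the general concept; it says nothing about Peridot's actual logic-programming syntax, unification machinery or higher-order abstract syntax.

```python
# Illustration of "optimizations as metaprograms": library-level rewrite rules
# applied bottom-up to a small expression AST. A hypothetical example, not Peridot code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:  name: str
@dataclass(frozen=True)
class Lit:  value: int
@dataclass(frozen=True)
class BinOp: op: str; left: object; right: object

def simplify(e):
    """Bottom-up application of algebraic rewrite rules."""
    if isinstance(e, BinOp):
        l, r = simplify(e.left), simplify(e.right)
        if isinstance(l, Lit) and isinstance(r, Lit):          # constant folding
            return Lit(l.value + r.value if e.op == "+" else l.value * r.value)
        if e.op == "+" and r == Lit(0):  return l              # x + 0 -> x
        if e.op == "*" and r == Lit(1):  return l              # x * 1 -> x
        if e.op == "*" and r == Lit(0):  return Lit(0)         # x * 0 -> 0
        return BinOp(e.op, l, r)
    return e

expr = BinOp("+", BinOp("*", Var("x"), Lit(1)), BinOp("*", Lit(2), Lit(3)))
print(simplify(expr))   # BinOp(op='+', left=Var(name='x'), right=Lit(value=6))
```

In a language like the one described above, rules of this kind would live in libraries alongside the abstractions they target, rather than inside the compiler.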
Procedia PDF Downloads 88
15684 VR-GIS and AR-GIS in Education: A Case Study
Authors: Ilario Gabriele Gerloni, Vincenza Carchiolo, Alessandro Longheu, Ugo Becciani, Eva Sciacca, Fabio Vitello
Abstract:
ICT tools and platforms increasingly support the educational process. Many models and techniques exist for educating and training people on specific topics and skills, such as classroom lectures with textbooks, computers, handheld devices and others. The choice of the extent to which ICT is applied within learning contexts is related to personal access to technologies as well as to the surrounding infrastructure. Among recent techniques, the adoption of Virtual Reality (VR) and Augmented Reality (AR) provides a significant impulse by fully engaging users' senses. In this paper, an application of AR/VR within a Geographic Information Systems (GIS) context is presented. It aims to provide immersive environment experiences for educational and training purposes (e.g., for civil protection personnel), which are useful especially for situations where real scenarios are not easily accessible by humans. First results are promising for building an effective tool that helps in training civil protection personnel for risk reduction.
Keywords: education, virtual reality, augmented reality, GIS, civil protection
Procedia PDF Downloads 178
15683 A Comparative Assessment Method for Map Alignment Techniques
Authors: Rema Daher, Theodor Chakhachiro, Daniel Asmar
Abstract:
In the era of autonomous robot mapping, assessing the goodness of the generated maps is important and is usually performed by aligning them to ground truth. Map alignment is difficult for two reasons: first, the query maps can be significantly distorted from the ground truth, and second, establishing what constitutes ground truth for different settings is challenging. Most map alignment techniques to date have addressed the first problem while paying too little attention to the second. In this paper, we propose a benchmark dataset, which consists of synthetically transformed maps with their corresponding displacement fields. Furthermore, we propose a new system for comparison, in which the displacement field of any map alignment technique can be computed and compared to the ground truth using statistical measures. The local information in the displacement fields renders the evaluation system applicable to any alignment technique, whether it is linear or not. In our experiments, the proposed method was applied to different alignment methods from the literature, allowing for a comparative assessment between them all.
Keywords: assessment methods, benchmark, image deformation, map alignment, robot mapping, robot motion
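A small sketch of the displacement-field comparison described above is given below: per-cell errors between a method's recovered field and the synthetic ground truth are summarized with simple statistics. The field shapes, the noise model standing in for a real alignment method, and the chosen error measures are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
h, w = 120, 160
# Ground-truth displacement field of a synthetically transformed map (2 components).
gt_field = np.stack(np.meshgrid(np.linspace(0, 3, w), np.linspace(0, 2, h)), axis=-1)
est_field = gt_field + 0.4 * rng.standard_normal((h, w, 2))     # an alignment method's output

endpoint_error = np.linalg.norm(est_field - gt_field, axis=-1)  # per-cell error magnitude
print(f"mean endpoint error   : {endpoint_error.mean():.3f} px")
print(f"median endpoint error : {np.median(endpoint_error):.3f} px")
print(f"95th percentile       : {np.percentile(endpoint_error, 95):.3f} px")
print(f"fraction under 0.5 px : {(endpoint_error < 0.5).mean():.2%}")
```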
Procedia PDF Downloads 119
15682 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set
Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users, their social and even practical lives have become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during live events. This situation can result in huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter; the determined factors are then applied using different classification techniques, a comparison of the results of these techniques is performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area, and this comparison has proved the accuracy of the proposed study. We claim that this study can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, the study can be applied to different social network sites, such as Facebook, with minor changes according to the nature of the social network, which are discussed in this paper.
Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques
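The workflow described above (reduce to a minimal feature set, then compare classifiers and keep the most accurate) can be sketched as follows. The synthetic data, the chosen feature selector and the three estimators are placeholders; they are not the study's actual Twitter account features or algorithms.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for account-level features with a fake/real label.
X, y = make_classification(n_samples=2000, n_features=22, n_informative=6,
                           random_state=0)

selector = SelectKBest(mutual_info_classif, k=7).fit(X, y)   # minimized feature set
X_small = selector.transform(X)

for name, clf in [("naive bayes", GaussianNB()),
                  ("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X_small, y, cv=5).mean()
    print(f"{name:14s} accuracy with 7 features: {acc:.3f}")
```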
Procedia PDF Downloads 417
15681 Detection of Extrusion Blow Molding Defects by Airflow Analysis
Authors: Eva Savy, Anthony Ruiz
Abstract:
In extrusion blow molding, there is great variability in product quality due to the sensitivity of the machine settings. These variations lead to unnecessary rejects and loss of time. Yet production control is a major challenge for companies in this sector to remain competitive within their market. Current quality control methods only apply to finished products (vision control, leak test...). It has been shown that material melt temperature, blowing pressure, and ambient temperature have a significant impact on the variability of product quality. Since blowing is a key step in the process, we have studied this parameter in this paper. The objective is to determine if airflow analysis allows the identification of quality problems before the full completion of the manufacturing process. We conducted tests to determine if it was possible to identify a leakage defect and an obstructed defect, two common defects on products. The results showed that it was possible to identify a leakage defect by airflow analysis.Keywords: extrusion blow molding, signal, sensor, defects, detection
Procedia PDF Downloads 151
15680 Aspects of Tone in the Educated Nigerian Accent of English
Authors: Nkereke Essien
Abstract:
The study seeks to analyze tone in the Educated Nigerian Accent of English (ENAE) using three tones: Low (L), High (H) and Low-High (LH). The aim is to find out whether there are any differences or similarities between the performance of the experimental group and the control. To achieve this, twenty educated Nigerian speakers of English, who were educated in the language, were selected by a Stratified Random Sampling (SRS) technique from two federal universities in Nigeria. They were given a passage to read, and their intonation patterns were compared with those of a native speaker (the control). The data were analyzed using Pierrehumbert’s (1980) intonation system of analysis. Three different approaches were employed in the analysis of the Intonation Phrase (IP) as used by Pierrehumbert: perceptual, statistical and acoustic. We first analyzed our data from the passage and utterances using the Wilcoxon Matched-Pairs Signed-Ranks Test to establish the differences between the performance of the experimental group and the control. Then, the one-way Analysis of Variance (ANOVA) and Tukey-Kramer post hoc tests were used to test for any significant difference in the performances of the twenty subjects. The acoustic data are presented to corroborate both the perceptual and statistical findings. Finally, the tonal patterns of the selected subjects in the three categories, A, B and C, were compared with those of the control. Our findings revealed that the tonal pattern of the Educated Nigerian Accent of English (ENAE) is significantly different from the tonal pattern of the Standard British Accent of English (SBAE), as represented by the control. A high preference for unidirectional tones, especially high tones, was observed in the performance of the experimental group. Also, high tones do not necessarily correspond to stressed syllables, nor low tones to unstressed syllables.
Keywords: accent, intonation phrase (IP), tonal patterns, tone
Procedia PDF Downloads 230
15679 Parameter Estimation with Uncertainty and Sensitivity Analysis for the SARS Outbreak in Hong Kong
Authors: Afia Naheed, Manmohan Singh, David Lucy
Abstract:
This work is based on a mathematical and statistical study of an SEIJTR deterministic model for the interpretation of the transmission of severe acute respiratory syndrome (SARS). Based on the SARS epidemic of 2003, the parameters are estimated using Runge-Kutta (Dormand-Prince pairs) and least squares methods. Possible graphical and numerical techniques are used to validate the estimates. Then the effect of the model parameters on the dynamics of the disease is examined using sensitivity and uncertainty analysis. Sensitivity and uncertainty analysis techniques are used in order to analyze the effect of the uncertainty in the obtained parameter estimates and to determine which parameters have the largest impact on controlling the disease dynamics.
Keywords: infectious disease, severe acute respiratory syndrome (SARS), parameter estimation, sensitivity analysis, uncertainty analysis, Runge-Kutta methods, Levenberg-Marquardt method
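The estimation technique described above can be illustrated on a simplified SEIR-type stand-in for the full SEIJTR model, integrating with SciPy's RK45 solver (a Dormand-Prince pair) and fitting by least squares. The data, initial conditions and parameter values below are synthetic and are not the Hong Kong case data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seir(t, x, beta, sigma, gamma):
    s, e, i, r = x
    n = s + e + i + r
    return [-beta * s * i / n, beta * s * i / n - sigma * e,
            sigma * e - gamma * i, gamma * i]

t_obs = np.arange(0, 60, 1.0)
y0 = [6_800_000, 20, 10, 0]                       # rough Hong Kong-sized population

def simulate(params):
    sol = solve_ivp(seir, (0, t_obs[-1]), y0, t_eval=t_obs,
                    args=tuple(params), method="RK45", rtol=1e-8)
    return sol.y[2]                               # infectious compartment over time

true_params = [0.45, 0.2, 0.1]
observed = simulate(true_params) * (1 + 0.05 * np.random.default_rng(3).standard_normal(len(t_obs)))

fit = least_squares(lambda p: simulate(p) - observed, x0=[0.3, 0.3, 0.2],
                    bounds=([0.01] * 3, [2.0] * 3))
print("estimated (beta, sigma, gamma):", np.round(fit.x, 3))
```

The same fit-then-perturb setup is the natural entry point for the sensitivity and uncertainty analysis mentioned in the abstract, e.g., by resampling the residuals or varying one parameter at a time.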
Procedia PDF Downloads 361