Search results for: product optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6538

268 A Critical Analysis of the Current Concept of Healthy Eating and Its Impact on Food Traditions

Authors: Carolina Gheller Miguens

Abstract:

Eating is, and should be, pleasurable for living beings, so that they desire to nourish themselves and thereby preserve the continuity of the species. Social rites usually revolve around the table and are closely linked to the cultural traditions of each region and social group. Food has always been closely linked to the products each region provides and to their respective seasons of production. With globalization and the conveniences of modern life, we can find an ever-increasing variety of products on supermarket shelves at any time of the year. These lifestyle changes end up directly influencing food traditions. Now that the era of uncontrolled obesity, driven by the dazzling supply of cheap, ultra-processed industrial products, has passed, we are living in a time when people set aside the pleasure of eating to consume only the foods the media declares healthy. Recently, the medicalization of food has become so present in daily life that, almost without realizing it, we make food choices conditioned by studies of the properties of these foods. That people are more attentive to their health is welcome. However, when this care becomes an obsessive disorder, which imposes itself on the pleasure of eating and extinguishes traditional customs, it becomes dangerous for our recognition as citizens belonging to a culture and society. This new way of living generates a rupture with the social environment of origin, possibly exposing old traditions to oblivion after two or three generations. Based on these facts, the present study analyzes the social transformations that triggered the current medicalization of food. To clarify what a healthy diet actually is, this research proposes a critical analysis of the subject, aiming to understand nutritional rationality and how it operates in the medicalization of food.
A wide bibliographic review of the subject was carried out, followed by exploratory research in online (especially social) media, a relevant source in this context given the perceived influence of such media on contemporary eating habits. Finally, these data were cross-referenced to critically analyze the current state of the concept of healthy eating and the medicalization of food. Throughout this research, it was noticed that people increasingly seek information about the nutritional properties of food, but instead of seeking the benefits of the products they traditionally eat in their social environment, they incorporate external elements that often bring benefits similar to those of the foods already consumed. This is because access to information is directed by the media, which exalts the exotic, since this arouses more interest in the general population. Efforts must be made to clarify that traditional products are also healthy foods, rich in history, memory and tradition, and that they cannot be replaced by a standardized diet little concerned with the construction of taste and pleasure, one that treats food as if it were a medicinal product.

Keywords: food traditions, food transformations, healthy eating, medicalization of food

Procedia PDF Downloads 298
267 Comparative Review of Models for Forecasting Permanent Deformation in Unbound Granular Materials

Authors: Shamsulhaq Amin

Abstract:

Unbound granular materials (UGMs) are pivotal to long-term quality, especially in the layers beneath the surface of flexible pavements and other constructions. This study seeks to better understand the behavior of UGMs by examining popular models for predicting permanent deformation under various stress levels and load cycles. These models focus on variables such as the number of load cycles, stress levels, and material-specific features, and they were evaluated on their ability to accurately predict outcomes. The study showed that these factors play a crucial role in how well the models work. The research therefore highlights the need to consider a wide range of stress situations in order to predict more accurately how much UGMs deform. The research examined important factors, such as how permanent deformation relates to the number of load applications, how quickly deformation accumulates, and the shakedown effect, in two different types of UGMs: granite and limestone. A detailed study was carried out over 100,000 load cycles, providing deep insights into how these materials behave. In this study, the applied stress level, the number of load cycles, material density, and moisture content were identified as the main factors affecting permanent deformation. Fully understanding these elements is vital for designing pavements that last long and withstand wear. A series of laboratory tests was performed to evaluate the mechanical properties of the materials and to acquire model parameters. The testing included gradation tests, CBR tests, and repeated load triaxial tests. The repeated load triaxial tests were crucial for studying the significant components that affect deformation; they involved applying various stress levels to estimate model parameters. In addition, certain model parameters were established by regression analysis, and optimization was conducted to improve the outcomes.
Afterward, the acquired material parameters were used to construct graphs for each model, which were then compared with the outcomes of the repeated load triaxial testing. Additionally, the models were evaluated to determine whether they captured the two inherent phases of deformation behavior under repeated loading: an initial post-compaction phase and a second phase of gradual volumetric change. In this study, the use of log-log graphs was key to making the complex data easier to understand; this made the analysis clearer and the findings easier to interpret, adding both precision and depth to the research. This research provides important insight into selecting the right models for predicting how these materials will act under expected stress and load conditions. Moreover, it offers crucial information regarding the effects of load cycles, permanent deformation, and the shakedown effect on granite and limestone UGMs.
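Many of the models compared in studies of this kind reduce, in their simplest form, to a power law in the number of load cycles, εp = a·N^b (the form usually attributed to Sweere), whose parameters are obtained by linear regression on log-log transformed data, which is also why log-log graphs make such comparisons easier to read. The sketch below illustrates that calibration step only; the model choice and the synthetic strain values are illustrative assumptions, not data from this study.

```python
import math

def fit_power_law(cycles, strains):
    """Fit e_p = a * N**b by ordinary least squares on log10-log10 data,
    the standard way such permanent-deformation models are calibrated."""
    xs = [math.log10(n) for n in cycles]
    ys = [math.log10(e) for e in strains]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = 10 ** (my - b * mx)   # back-transform the intercept
    return a, b

# Hypothetical strain readings that follow e_p = 0.1 * N**0.2 exactly
cycles = [10, 100, 1_000, 10_000, 100_000]
strains = [0.1 * n ** 0.2 for n in cycles]
a, b = fit_power_law(cycles, strains)
```

Because the synthetic data are exactly log-linear, the regression recovers the generating parameters; with laboratory triaxial data the residuals on the log-log plot indicate how well the power-law form holds.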

Keywords: permanent deformation, unbound granular materials, load cycles, stress level

Procedia PDF Downloads 17
266 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system intended primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis imports thermal data from the CFD solver into the FEA solver. A two-way coupled analysis continuously imports the updated thermal data, reflecting the fire's behavior, into the FEA solver throughout the simulation; likewise, mechanical changes are passed back to the CFD solver so that geometric changes are included in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen because its numerical scheme is adapted to focus solely on fire problems. The applicability of FDS has been validated in past benchmark cases.
In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire, because its crushable foam plasticity model can accurately capture the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and the mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
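In essence, the one-way coupling described above samples the CFD thermal history at the FEA solver's time points, while the two-way scheme would additionally feed mechanical changes back. The sketch below shows only that thermal-sampling half, with plain linear interpolation; the data shapes are hypothetical stand-ins and not the FDS-2-ABAQUS interface.

```python
def sample_gas_temp(cfd_times, cfd_temps, t):
    """Linearly interpolate a CFD gas-temperature history at time t
    (one-way coupling passes such values to the FEA solver as loads)."""
    if t <= cfd_times[0]:
        return cfd_temps[0]
    if t >= cfd_times[-1]:
        return cfd_temps[-1]
    for i in range(1, len(cfd_times)):
        if t <= cfd_times[i]:
            frac = (t - cfd_times[i - 1]) / (cfd_times[i] - cfd_times[i - 1])
            return cfd_temps[i - 1] + frac * (cfd_temps[i] - cfd_temps[i - 1])

# Hypothetical CFD output: time (s) vs. gas temperature (deg C)
times = [0.0, 60.0, 120.0]
temps = [20.0, 620.0, 820.0]
fea_loads = [sample_gas_temp(times, temps, t) for t in (0.0, 30.0, 90.0)]
```

A two-way scheme would wrap this sampling in a loop that also returns the deformed geometry to the CFD solver after each exchange interval, which is what makes it substantially more expensive.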

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 66
265 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes the fingerspelling of a percentage of the population harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is posited as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix, whitened, and split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
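A drastically simplified, single-cluster version of the corrector idea can be sketched as follows: one hyperplane is placed between the centroids of the correct and erroneous measurements, and any new state projected beyond it is flagged. The Kaiser-rule regularization, whitening, and pairwise clustering of the full method are omitted here, and all data are hypothetical.

```python
def build_corrector(correct, errors):
    """Return (w, theta) for a hyperplane w.x = theta placed midway
    between the centroids of correct and erroneous measurements."""
    def centroid(pts):
        return [sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0]))]
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    cm, ce = centroid(correct), centroid(errors)
    w = [e - c for e, c in zip(ce, cm)]       # normal pointing toward errors
    theta = (dot(w, cm) + dot(w, ce)) / 2.0   # threshold midway between projections
    return w, theta

def flags_error(w, theta, x):
    """Report x as a likely recognition error if it lies on the error side."""
    return sum(a * b for a, b in zip(w, x)) >= theta

# Hypothetical 2-D internal states: M (correct) vs. Y (errors)
M = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Y = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]
w, theta = build_corrector(M, Y)
```

The full method builds one such hyperplane per positively correlated error cluster, so the flagged region is an intersection of half-spaces rather than a single one.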

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 83
264 Web-Based Instructional Program to Improve Professional Development: Recommendations and Standards for Radioactive Facilities in Brazil

Authors: Denise Levy, Gian M. A. A. Sordi

Abstract:

This web-based project focuses on continuing corporate education and improving workers' skills in Brazilian radioactive facilities throughout the country. The potential of Information and Communication Technologies (ICTs) can contribute to improving communication across this very large country, where ensuring high-quality professional information for as many people as possible is a strong challenge. The main objective of the system is to provide Brazilian radioactive facilities with a complete web-based repository, in Portuguese, for research, consultation and information, offering conditions for learning and for improving professional and personal skills. UNIPRORAD is a web-based system offering unified programs and inter-related information about radiological protection programs. The content includes best practices for radioactive facilities, designed to meet both national standards and the international recommendations published by different organizations over the past decades: the International Commission on Radiological Protection (ICRP), the International Atomic Energy Agency (IAEA) and the National Nuclear Energy Commission (CNEN). The website presents concepts, definitions and theory on optimization and ionizing radiation monitoring procedures. Moreover, the content offers further discussion of some national and international recommendations, such as potential exposure, which is currently one of the most important research fields in radiological protection. Only two ICRP publications develop the issue at length, and there is still a lack of knowledge of failure probabilities, for uncertainties remain in finding effective paths to quantify probabilistically the occurrence of potential exposures and the probability of reaching a certain dose level. To respond to this challenge, this project discusses and introduces potential exposures in a more quantitative way than national and international recommendations do.
Articulating valid ICRP and IAEA recommendations and official reports, in addition to scientific papers published at major international congresses, the website discusses and suggests a number of effective actions towards safety that can be incorporated into labor practice. The web platform was created according to corporate public needs, taking into account the development of a robust but flexible system that can be easily adapted to future demands. ICTs provide a vast array of new communication capabilities and make it possible to spread information to as many people as possible, at low cost and with high communication quality. This initiative shall provide opportunities for employees to increase their professional skills, stimulating development in this large country, where ensuring effective and updated information for geographically distant facilities is an enormous challenge, while minimizing costs and optimizing results.

Keywords: distance learning, information and communication technology, nuclear science, radioactive facilities

Procedia PDF Downloads 174
263 A Geoprocessing Tool for Early Civil Work Notification to Optimize Fiber Optic Cable Installation Cost

Authors: Hussain Adnan Alsalman, Khalid Alhajri, Humoud Alrashidi, Abdulkareem Almakrami, Badie Alguwaisem, Said Alshahrani, Abdullah Alrowaished

Abstract:

Most of the cost of installing a new fiber optic cable is attributed to the civil work (trenching) cost. In many cases, information technology departments receive project proposals in their eReview system, but not all projects are visible to everyone. Additionally, if a proposed project contains no IT scope, it is not likely to be visible to IT, and it is sometimes too late to add IT scope after project budgets have been finalized. Finally, the eReview system is a repository of PDF files for each project, which commits the reviewer to manual work and limits the potential for automation. This paper details a solution that addresses the late notification in the eReview system by integrating IT sites GIS data (site locations) with land use permit (LUP) data (civil work activity); an LUP request is the first step before securing the required land usage authorizations, meaning no detailed designs exist for any relevant project before an approved LUP request. To address the manual nature of the eReview system, both the LUP system and the IT data are maintained in ArcGIS Desktop, which enables the creation of a geoprocessing tool, with either Python or Model Builder, to automatically find and evaluate potentially usable LUP requests and so reduce trenching between two sites in need of a new FOC. To achieve this, a weekly dump was taken from the LUP system production data and loaded manually into ArcMap Desktop. A custom tool was then developed in Model Builder, built around a two-column table containing all the pairs of sites in need of new fiber connectivity. The tool iterates over all rows of this table, taking one pair of sites at a time and finding potential LUPs between them that satisfy the provided search radius. If a group of LUPs is found, an iterator goes through each LUP to find the required civil work between the two sites, the LUP polyline feature, and the distance along the line, which is counted as cost avoidance if an IT scope is added.
Finally, the tool exports an Excel file named after the site pair, containing one row per LUP that met the search radius, with trenching and pulling information and cost. As a result, multiple projects have been identified: historical, missed-opportunity, and proposed projects. For the proposed project, the savings were about 75% ($750,000) compared with installing a new fiber along the Euclidean distance between the Abqaiq GOSP2 and GOSP3 DCOs. In conclusion, the current tool setup identifies opportunities to bundle civil work onto a single project at a time, between two sites. More work is needed to allow the bundling of multiple projects between two sites, to achieve even greater avoidance of both capital cost and carbon footprint.
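The core geometric test the tool performs, keeping an LUP only if it lies within the search radius of the corridor between a pair of sites and then totalling its trench length as cost avoidance, can be sketched without ArcGIS as below. The coordinates, identifiers, and the "every vertex within radius" criterion are simplifying assumptions, not the Model Builder implementation.

```python
import math

def dist_to_segment(a, b, p):
    """Distance from point p to the segment a-b (the site-to-site corridor)."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def polyline_length(line):
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(line, line[1:]))

def usable_lups(site_a, site_b, lups, radius):
    """Keep LUP polylines whose vertices all fall within `radius`
    of the corridor; report each with its trench length."""
    return [(lup_id, polyline_length(line))
            for lup_id, line in lups.items()
            if all(dist_to_segment(site_a, site_b, v) <= radius for v in line)]

# Hypothetical site pair and LUP polylines (map units)
result = usable_lups((0, 0), (10, 0),
                     {"LUP-1": [(1, 0.5), (4, 0.5)],
                      "LUP-2": [(2, 9.0), (3, 9.0)]},
                     radius=2.0)
```

In the production tool this filtering runs once per row of the site-pair table, with the surviving LUPs written to the per-pair Excel report.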

Keywords: GIS, fiber optic cable installation optimization, eliminate redundant civil work, reduce carbon footprint for fiber optic cable installation

Procedia PDF Downloads 199
262 The Use of Flipped Classroom as a Teaching Method in a Professional Master's Program in Network, in Brazil

Authors: Carla Teixeira, Diana Azevedo, Jonatas Bessa, Maria Guilam

Abstract:

The flipped classroom is a blended learning modality that combines face-to-face activities with virtual self-learning activities mediated by digital information and communication technologies; it reverses traditional teaching approaches and presupposes that students study the contents beforehand. In the subsequent face-to-face activities, the contents are discussed, producing active learning. This work aims to describe the systematization process of using the flipped classroom as a method to develop complementary national activities in PROFSAÚDE, a professional master's program in public health, offered as a distance learning course, in a network, in Brazil. The complementary national activities were organized with the objective of strengthening and qualifying the students' learning process. The network gathers twenty-two public institutions of higher education in the country. Its national coordination conducted a survey to detect complementary educational needs expected to improve the formative process and to align important content for the program nationally. The activities were organized both asynchronously, making study materials available in Google Classroom, and synchronously, in live online sessions organized on virtual platforms to reach the largest number of students in the country. The asynchronous activities allowed each student to study at their own pace, while the synchronous activities were intended for deepening and reflecting on the themes. The national team identified some professors' areas of expertise; these professors were contacted to produce audiovisual content such as video classes and podcasts, to provide guidance on supporting bibliographic materials, and to conduct synchronous activities together with the technical team.
The contents posted in the virtual classroom were organized in modules and made available before the synchronous meeting; these modules, in turn, contain "pills of experience" that correspond to reports of teachers' experiences with the different themes. In addition, an activity was proposed, with questions aimed at surfacing doubts about the contents, and a learning challenge as a practical exercise. Synchronous activities are built with different invited teachers, based on the participants' discussions, and are the forum where teachers can answer students' questions, providing feedback on the learning process. At the end of each complementary activity, an evaluation questionnaire is made available. Analysis of the responses shows that this institutional network experience, as a pedagogical innovation, provides important tools to support teaching and research due to its potential for the participatory construction of learning, the optimization of resources, the democratization of knowledge, and the sharing and strengthening of practical experiences across the network. One of its most relevant aspects was the thematic diversity addressed through this method.

Keywords: active learning, flipped classroom, network education experience, pedagogic innovation

Procedia PDF Downloads 138
261 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. False initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95%CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). 
Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity equivalent to that of the Abbott ARCHITECT HBsAg Qualitative II assay, with an average bleed difference from the first reactive bleed of 0.13. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
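As a consistency check on the figures above, the reported specificities are what one obtains with three false positives in each cohort (3 + 3 matching the six false positives noted); the 3/3 split is an inference from the rounded rates, not a number stated in the study. A minimal sketch:

```python
def specificity_pct(n_total, n_false_pos):
    """Clinical specificity = true negatives / all negatives, in percent."""
    return 100.0 * (n_total - n_false_pos) / n_total

# Inferred split: 3 false positives among 6047 blood donors,
# 3 among 1023 hospitalized patients (an assumption, see above)
donor_spec = specificity_pct(6047, 3)
hosp_spec = specificity_pct(1023, 3)
```

Rounded to two decimals these reproduce the reported 99.95% and 99.71%; the confidence intervals quoted in the abstract would additionally require an exact binomial (Clopper-Pearson) calculation.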

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HBsAg, HBV, hepatitis B surface antigen, hepatitis B virus, immunoassay

Procedia PDF Downloads 63
260 Nonviolent Communication and Disciplinary Area of Social Communication: Case Study on the International Circulation of Ideas from a Brazilian Perspective

Authors: Luiza Toschi

Abstract:

This work presents part of an empirical and theoretical master's degree meta-research project concerned with the relationship between the disciplinary area of Social Communication, investigated here through the characteristics of the Bourdieusian scientific field, and the emergence of public interest in Nonviolent Communication (NVC) in Brazil and worldwide. To this end, the state of the art of this conceptual and practical relationship is investigated on the basis of scientific productions available in spaces of academic credibility, such as conferences and scientific journals renowned in the field. From there, the agents, and the sociological aspects that lead them to contribute or not to scientific production, are mapped in Brazil and worldwide. This work presents a brief dive into the international context to understand whether, and how, nonviolent communication permeates scientific production in communication in a systematic way. Using three accessible articles published between 2013 and 2022 in the 117 journals classified in quartile Q1 of the Journal Ranking of Communication, the international production on the subject is compared with the Brazilian production in its context. The social conditions of the international circulation of ideas are thus discussed. Science is a product of its social environment, arising from relations of interest and power that compete in the political dimension as much as in the epistemological dimension. In this way, scientific choices are linked to the resources mobilized from, or through, the prestige and recognition of peers. In this sense, an object stands out to a scientist for its academic value, but also, inseparably, for its social interest within the collective, the scientist's social stratification, and the context of legitimacy created in their surroundings, influenced by cultural universalism. In Brazil, three published articles that mention NVC in their abstract or keywords were found in congresses and journals.
All were written by Public Relations undergraduate students. Among the more experienced researchers who supervised or validated the publications, it is possible to find two professionals interested in the Culture of Peace and dialogy. Likewise, internationally, only three of the articles found mention the term in their abstract or title. Two analyze journalistic coverage based on the principles of NVC and Journalism for Peace. The third is by one of the Brazilian researchers identified as interested in dialogic practices, who analyzes audiovisual material and promotes epistemological reflections. If, on the one hand, some characteristics inside and outside Brazil are similar (small samples, a relationship with peace studies, and female researchers, two of whom are Brazilian), on the other hand, differences are obvious. Within the country, the subject is mostly framed within Organizational Communication; outside, this intersection is not presented explicitly. Furthermore, internationally there is an interest in analysis from the perspective of NVC, which has so far not been found in publications in Brazil. Up to the present moment, it is possible to presume that, universally, the legitimacy of the topic is sought through its association with research on conflict conciliation and communication for peace.

Keywords: academic field sociology, international circulation of ideas, meta research in communication, nonviolent communication

Procedia PDF Downloads 18
259 Techno-Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants

Authors: Coriolano Salvini

Abstract:

The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand over time. Although many countries are pursuing a "decarbonized" electrical system as a final goal, in the coming decades traditional fossil-fuel-fired power plants will still play a relevant role in fulfilling the electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting their output power to grid requirements. An interesting option is the possibility of using traditional plants to improve grid storage capabilities. The present paper addresses small to medium size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) system integrated into a Gas-Steam Combined Cycle (GSCC) or a gas turbine based CHP plant. The system can be incorporated into a newly built plant or added to an existing one. To avoid geological restrictions related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electrically driven intercooled/aftercooled compressor. During the discharge phase, the stored compressed air is sent to a heat transfer device fed by hot gas taken upstream of the Heat Recovery Steam Generator (HRSG) and is subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production per kilogram per second of exhaust gas used to heat the stored air is two to three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operation, without additional use of fuel.
The excess output power can therefore be considered "fuel free", and the storage system can be compared to "pure" ESSs such as electrochemical, pumped hydro or adiabatic CAES systems. Representative cases featuring different power absorption, production capability, and storage capacity have been considered. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, the number of compression and expansion stages, the air heater arrangement, and the process quantities found for each case, a cost estimate of the storage systems has been produced. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar to, or lower than, those of alternative storage technologies.
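The figures of merit quoted above (storage efficiency, and capital cost per kW and per kWh) reduce to simple ratios. The sketch below evaluates them for one hypothetical case chosen to fall inside the reported ranges; the input numbers are illustrative, not results from the paper.

```python
def storage_metrics(e_in_kwh, e_out_kwh, capital_cost_eur, rated_power_kw):
    """Storage efficiency and specific capital costs of an ESS."""
    efficiency = e_out_kwh / e_in_kwh             # energy out / energy in
    cost_per_kw = capital_cost_eur / rated_power_kw
    cost_per_kwh = capital_cost_eur / e_out_kwh   # per unit of deliverable energy
    return efficiency, cost_per_kw, cost_per_kwh

# Hypothetical case: 1000 kWh absorbed, 650 kWh returned,
# 650 kW rated discharge power, 390,000 EUR capital cost
eta, c_kw, c_kwh = storage_metrics(1000.0, 650.0, 390000.0, 650.0)
```

With these assumed inputs the case lands at a 0.65 storage efficiency and 600 €/kW and 600 €/kWh, i.e. within the 0.6-0.7 and 400-800 €/kW, 500-1000 €/kWh windows assessed in the study.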

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis

Procedia PDF Downloads 190
258 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has seen continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (increased fuel consumption for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system’s predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. 
According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted and measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the average relative fuel flow prediction error from 12% to 0.3%. Similarly, the maximum error in the FCOM engine fan speed prediction was reduced from 5.0% to 0.2% after only ten flights.
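The adaptive-lookup-table idea described above can be sketched as a table of fuel-flow predictions that is nudged toward each in-flight measurement, so the model error shrinks over successive samples. The table axes, adaptation gain, and fuel-flow values below are illustrative assumptions, not the study’s actual model:

```python
# Minimal adaptive-lookup-table sketch: a fuel-flow table indexed by
# flight condition is corrected a fraction of the way toward each
# measured value (a simple first-order adaptive update).

ALTITUDES = [30000, 35000, 40000]   # ft (illustrative grid)
MACHS = [0.70, 0.75, 0.80]

# Initial FCOM-style table: predicted fuel flow (lb/h), hypothetical values.
table = {(a, m): 1500.0 for a in ALTITUDES for m in MACHS}

def nearest_cell(alt, mach):
    a = min(ALTITUDES, key=lambda x: abs(x - alt))
    m = min(MACHS, key=lambda x: abs(x - mach))
    return (a, m)

def update(alt, mach, measured, gain=0.3):
    """Move the nearest table entry toward the measured value and
    return the absolute prediction error before the correction."""
    cell = nearest_cell(alt, mach)
    error = measured - table[cell]
    table[cell] += gain * error
    return abs(error)

# Repeated samples at one cruise condition: the prediction error decays
# geometrically (factor 1 - gain per sample).
errors = [update(35000, 0.75, measured=1680.0) for _ in range(10)]
print(round(errors[0], 1), round(errors[-1], 1))
```

The real method interpolates over a full multidimensional grid and handles noisy sensors, but the correct-toward-measurement loop is the core of the technique.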

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 240
257 Integrative-Cyclical Approach to the Study of Quality Control of Resource Saving by the Use of Innovation Factors

Authors: Anatoliy A. Alabugin, Nikolay K. Topuzov, Sergei V. Aliukov

Abstract:

It is well known that when quantitatively evaluating the quality control of economic processes (in particular, resource saving) with the help of innovation factors, three groups of problems arise: the high uncertainty of quality-management indicators, their considerable ambiguity, and the high cost of large-scale research. These problems stem from the contradictory objectives of enhancing quality control in accordance with innovation factors while preserving the economic stability of the enterprise. Such factors are felt most acutely in countries lagging behind the developed economies of the world by criteria of innovativeness and effectiveness of resource-saving management. In our opinion, the following two methods most effectively reconcile the above-mentioned objectives and reduce the conflict between the problems: 1) the use of paradigms and concepts of evolutionary improvement of the quality of resource-saving management over the cycle “from the design of an innovative product (technology) to its commercialization and the updating of its customer-value parameters”; 2) the application of a so-called integrative-cyclical approach, consistent with the complexity and type of the concept, allowing a quantitative assessment of the stages of achieving consistency between these objectives (from a baseline of imbalance, through compromise, to positive synergy). 
For implementation, the following mathematical tools are included in the integrative-cyclical approach: index-factor analysis (to identify the most relevant factors); regression analysis of the relationship between quality control and the factors; the use of the analysis results in a fuzzy-set model (to adjust the feature space); and methods of non-parametric statistics (to decide on the completion or repetition of a cycle of the approach, depending on the focus and the closeness of the connection between the indicator ranks of the disbalance of objectives). The repetition is performed after partial substitution of technical and technological (“hard”) factors by management (“soft”) factors, in accordance with our proposed methodology. Testing of the proposed approach has shown that, in comparison with world practice, there are opportunities to improve the quality of resource-saving management using innovation factors. We believe this research is promising for producing consistent management decisions that reduce the severity of the above-mentioned contradictions and increase the validity of the choice of resource-development strategies in terms of quality-management parameters and enterprise sustainability. Our experience in the field of quality resource-saving management and the achieved level of scientific competence of the authors allow us to hope that the use of the integrative-cyclical approach to the study and evaluation of the resulting and factor indicators will help raise resource-saving characteristics to the levels found in developed post-industrial economies.
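As one possible reading of the non-parametric step (deciding on repetition of a cycle from the closeness of indicator ranks), a Spearman rank-correlation sketch is given below. The indicator values, the 0.8 threshold, and the repeat-if-ranks-agree rule are illustrative assumptions, not the authors’ actual decision criterion:

```python
# Spearman rank correlation between the rank order of goal-disbalance
# indicators in two successive cycles; high agreement is taken here
# (as an assumed rule) to mean the same factors still dominate, so
# another cycle of "hard" -> "soft" factor substitution is run.

def ranks(values):
    """Rank positions (1 = smallest); assumes no ties for brevity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical disbalance indicators for five factors, before/after a cycle.
before = [0.42, 0.35, 0.58, 0.21, 0.49]
after  = [0.30, 0.28, 0.40, 0.15, 0.33]

rho = spearman(before, after)
repeat_cycle = rho >= 0.8   # illustrative threshold
print(round(rho, 2), repeat_cycle)
```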

Keywords: integrative-cyclical approach, quality control, evaluation, innovation factors, economic sustainability, innovation cycle of management, disbalance of goals of development

Procedia PDF Downloads 227
256 From Forked Tongues to Tinkerbell Ears: Rethinking the Criminalization of Alternative Body Modification in the UK

Authors: Luci V. Hyett

Abstract:

The criminal law of England and Wales currently deems that a person cannot consent to the infliction of injury upon their own body where the level of harm is considered Actual or Grievous. This renders the defence of consent of the victim unavailable to those persons carrying out an Alternative Body Modification procedure. However, the criminalization of consensual injury is more appropriately categorized as an offence against public morality and not one against the person, which renders the State’s involvement in the autonomous choices of a consenting adult, when determining what can be done to one’s own body, an arbitrary one. Furthermore, to recognise in law that a person is capable of giving a valid consent to socially acceptable cosmetic interventions, largely consisting of procedures designed to aesthetically please men, and not to procedures sought by people who want to modify their bodies for other reasons, means that patriarchal attitudes continue to underpin public repulsion and inhibit social acceptance of such practices. Theoretical analysis will begin with a juridical examination of R v M(B) [2019] QB 1, where the High Court determined that Alternative Body Modification was not a special category exempting a person so performing from liability for Grievous Bodily Harm using the defence of consent. It will draw from the Court’s reasoning, which considered that ‘the removal of body parts were medical procedures being carried out for no medical reason by someone not qualified to carry them out’, which will form the basis of this enquiry. 
It will consider the philosophical work of Giorgio Agamben when analysing whether the biopolitical climate in the UK, which places the optimization of the perfect, healthy body at the centre of political concern, can explain why those persons who wish to engage in Alternative Body Modification are treated as the ‘Exception’ to that which is normal, using the ‘no medical reason’ canon to justify criminalisation rather than legitimising the industry through regulation. It will consider, through a feminist lens, the current conflict in law between traditional cosmetic interventions which alter one’s physical appearance for socially accepted aesthetic purposes, such as those to the breast, lip and buttock, and modifications described as more outlandish, such as earlobe stretching, tooth filing and transdermal implants to create horns and spikes under the skin. It will assert that ethical principles relating to the psychological impact of body modification described as ‘alternative’ are used as a means to exclude persons seeking such a procedure from receiving safe and competent treatment via a registered cosmetic surgeon, which leads to these increasingly popular surgeries being performed in tattoo parlours throughout the UK as an extension of other socially acceptable forms of self-modification such as piercings. It will contend that only by ‘inclusive exclusion’ will those ‘othered’ through ostracisation be welcomed into the fold of normality, and that this can only be achieved through recognition of alternative body modification as a legitimate cosmetic intervention, subject to the same regulatory framework as existing practice. This would assist in refocusing the political landscape by erring on the side of liberty rather than that of biology.

Keywords: biopolitics, body modification, consent, criminal law

Procedia PDF Downloads 90
255 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use cloud services on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so is its security threat. Among the various services provided by the cloud is storage, for which security is a vital factor both for authenticating legitimate users and for protecting information. This paper brings in efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor, multi-dimensional authentication system with multi-level security. User-behaviour-based biometrics provides more reliable identification than conventional password authentication. With biometric systems, accounts are accessed only by a legitimate user and not by an impostor. The biometric templates employed here do not include a single trait but multiple ones, viz., iris and fingerprints. The coordinating stage of the authentication system relies on an Ensemble Support Vector Machine (SVM), optimized by assembling the weights of the base SVMs after each individual SVM of the ensemble is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect records from hackers by preventing the cipher text from being broken back into the original text. The proposed double cryptographic key scheme thus provides better user authentication and better security, distinguishing between genuine and fake users. 
Thus, there are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The feature and texture properties are first extracted from the respective fingerprint and iris images. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique is developed. As the proposed approach is based on neural networks, it has the advantage of not being decrypted by a hacker even if the data has already been intercepted. The results show that the authentication process is optimal and the stored information is secured.
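The SVM-ensemble/FNN key-generation pipeline itself is beyond a short example, but its overall shape — quantize two biometric feature vectors, fuse them, derive a fixed-length symmetric key — can be sketched as follows. The quantization step and the feature values are invented stand-ins for the paper’s trained models, not the actual method:

```python
# Deliberately simplified stand-in for multimodal biometric key generation:
# coarse quantization absorbs small acquisition noise, the fused vector is
# hashed into a 256-bit symmetric key.
import hashlib

def quantize(features, step=0.1):
    """Coarse quantization so small acquisition noise maps to the same bins."""
    return tuple(round(f / step) for f in features)

def fuse_and_derive_key(fingerprint_feats, iris_feats):
    fused = quantize(fingerprint_feats) + quantize(iris_feats)
    blob = ",".join(str(v) for v in fused).encode()
    return hashlib.sha256(blob).hexdigest()   # 256-bit key as 64 hex chars

# Two acquisitions of the same user differ by small noise but quantize alike,
# so they yield the same key (feature values are fabricated).
key1 = fuse_and_derive_key([0.51, 0.32, 0.87], [0.12, 0.44])
key2 = fuse_and_derive_key([0.52, 0.33, 0.88], [0.11, 0.45])
print(key1 == key2, len(key1))
```

Real schemes use error-correcting constructions (fuzzy extractors) rather than naive quantization, since noise near a bin boundary would change the key.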

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 236
254 Supermarket Shoppers' Perceptions of Genetically Modified Foods in Trinidad and Tobago: Focus on Health Risks and Benefits

Authors: Safia Hasan Varachhia, Neela Badrie, Marsha Singh

Abstract:

Genetic modification of food is an innovative technology that offers a host of benefits and advantages to consumers. Consumer attitudes towards GM foods and GM technologies can be identified as a major determinant in conditioning market forces and in encouraging policy makers and regulators to recognize the significance of consumer influence on the market. This study aimed to investigate and evaluate the extent of consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks and benefits in Trinidad and Tobago, West Indies. The specific objectives of this study were to determine consumer awareness of GM foods, ascertain consumers' perspectives on the health and safety risks and ethical issues associated with GM foods, and determine whether labeling of GM foods and ingredients would influence consumers' willingness to purchase GM foods. A survey comprising a questionnaire of 40 questions, both open-ended and close-ended, was administered to 240 shoppers in small, medium and large-scale supermarkets throughout Trinidad between April and May 2015 using convenience sampling. This survey investigated consumer awareness, knowledge, perception and acceptance of GM foods and their associated health risks/benefits. The data were analyzed using SPSS 19.0 and Minitab 16.0. One-way ANOVA investigated the effects of supermarket category and knowledge scores on shoppers' awareness, knowledge, perception and acceptance of GM foods. Linear regression tested whether demographic variables (category of supermarket, age of consumer, level of education) were useful predictors of consumers' knowledge of GM foods. More than half of respondents (64.3%) were aware of GM foods and GM technologies, 28.3% of consumers indicated the presence of GM foods in local supermarkets, and 47.1% claimed to be knowledgeable about GM foods. 
Furthermore, significant associations (P < 0.05) were observed between demographic variables (age, income, and education) and consumer knowledge of GM foods, and significant differences (P < 0.05) were observed across demographic variables (education, gender, and income) in consumer knowledge of GM foods. In addition, age, education, gender and income (P < 0.05) were useful predictors of consumer knowledge of GM foods. There was a contradiction: whilst 35% of consumers considered GM foods safe for consumption, 70% of consumers were wary of the unknown health risks of GM foods. About two-thirds of respondents (67.5%) considered the creation of GM foods morally wrong and unethical. Regarding GM food labeling preferences, 88% of consumers preferred mandatory labeling of GM foods, and 67% of consumers specified that any food product containing a trace of GM food ingredients should require mandatory GM labeling. Also, despite the declaration of GM food ingredients on food labels and the reassurance of their safety for consumption by food safety and regulatory institutions, the majority of consumers (76.1%) still preferred conventionally produced foods over GM foods. The study revealed the need to inform shoppers of the presence of GM foods and technologies, to present the scientific evidence on benefits and risks, and to establish a labeling policy so that informed choices can be made.
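The one-way ANOVA used in the analysis reduces to a ratio of between-group to within-group variance; a minimal sketch with fabricated knowledge scores for the three supermarket categories (the data below are invented, not the survey’s):

```python
# One-way ANOVA F statistic computed by hand, mirroring the comparison of
# knowledge scores across supermarket categories described above.

def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    ss_within = sum(
        (v - sum(g) / len(g)) ** 2 for g in groups for v in g
    )
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical knowledge scores (0-10) for shoppers at small, medium,
# and large supermarkets.
small = [4, 5, 6, 5]
medium = [6, 7, 6, 7]
large = [8, 7, 9, 8]
f_stat = one_way_anova_f([small, medium, large])
print(round(f_stat, 2))
```

In practice the F statistic is then compared against the F distribution with (k-1, n-k) degrees of freedom for the P < 0.05 test, which the study performed in SPSS/Minitab.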

Keywords: genetically modified foods, income, labeling, consumer awareness, ingredients, morality and ethics, policy

Procedia PDF Downloads 308
253 Production of Functional Crackers Enriched with Olive (Olea europaea L.) Leaf Extract

Authors: Rosa Palmeri, Julieta I. Monteleone, Antonio C. Barbera, Carmelo Maucieri, Aldo Todaro, Virgilio Giannone, Giovanni Spagna

Abstract:

In recent years, considerable interest has been shown in the functional properties of foods, and a relevant role has been played by phenolic compounds, which are able to scavenge free radicals. A more sustainable agriculture has to emerge to guarantee food supply over the coming years. Wheat, corn, and rice are the most commonly cultivated cereals, but other cereal species, such as barley, can also be appreciated for their peculiarities. Barley (Hordeum vulgare L.) is a C3 winter cereal that shows high resistance to drought and salt stresses. There is growing interest in barley as an ingredient for the production of functional foods due to its high content of phenolic compounds and beta-glucans. In this respect, the possibility of separating specific functional fractions from food industry by-products looks very promising. Olive leaves represent a quantitatively significant by-product of olive grove farming and are an interesting source of phenolic compounds. In particular, oleuropein, which provides important nutritional benefits, is the main phenolic compound in olive leaves and ranges from 17% to 23%, depending upon the cultivar and growing season. Together with oleuropein and its derivatives (e.g. dimethyloleuropein, oleuropein diglucoside), olive leaves further contain tyrosol, hydroxytyrosol, and a series of secondary metabolites structurally related to them: verbascoside, ligstroside, hydroxytyrosol glucoside, tyrosol glucoside, oleuroside, oleoside-11-methyl ester, and nuzhenide. Several flavonoids, flavonoid glycosides, and phenolic acids have also been described in olive leaves. The aim of this work was the production of functional foods with a higher content of polyphenols and the evaluation of their shelf life. Organic durum wheat and barley grains, which contain higher levels of phenolic compounds, were used for the production of crackers. Olive leaf extract (OLE) was obtained from cv. ‘Biancolilla’ by an aqueous extraction method. 
Two baking trials were performed with both organic durum wheat and barley flours, adding olive leaf extract. Control crackers, made for comparison, were produced with the same formulation, replacing OLE with water. Total phenolic content, moisture content, water activity, and textural properties at different storage times were determined to evaluate the shelf life of the products. Our preliminary results showed that the enriched crackers had a higher phenolic content and antioxidant activity than the control. Olive leaf extracts could be a good candidate as a functional ingredient for cracker production, because bakery items are consumed daily and have a long shelf life.

Keywords: barley, functional foods, olive leaf, polyphenols, shelf life

Procedia PDF Downloads 281
252 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking

Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz

Abstract:

Anatomy classes are crucial to the general education of medical students, but learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in many important structures being forgotten after several years. However, more interactive teaching methods like virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since such methods ease the way we learn and keep the information in mind for longer terms. During our study, we designed a virtual reality based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomical structures, with thousands of tiny, unique substructures, which makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support so as to integrate the power of VR and gamification. The project was developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model was selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and its marking in the 3D world coordinate system. 3D model marking requires access to each unique structure in the aforementioned subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic, which is why we worked on dividing such parts into subparts, a very time-consuming task. In order to subdivide monolithic parts, one must use an external modeling tool. 
However, such tools generally come with steep learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outer colliders that cover inner trigger colliders cause overlapping, and these colliders repel each other. The third option was raycasting. However, due to its view-based nature, raycasting has some inherent problems: as the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why we finally settled on the local coordinate system. Taking the pivot point of the model (the back of the nose) into consideration, each substructure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure’s local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures.
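The pivot-local marking approach described above can be sketched outside Unity as follows. The structure names, coordinates, and distance threshold are illustrative, and the rotation part of the world-to-local transform is omitted for brevity (a full version would also un-rotate the point into the model’s frame):

```python
# Pivot-local marking sketch: each substructure stores a coordinate local
# to the model pivot; a click in world space is translated into that local
# frame and matched against the nearest stored structure.
import math

# Hypothetical local coordinates of a few head structures (pivot-relative).
STRUCTURES = {
    "nasal_bone": (0.0, 0.5, 1.0),
    "left_zygomatic": (-2.0, 0.0, 0.5),
    "right_zygomatic": (2.0, 0.0, 0.5),
}

def world_to_local(world_point, pivot_world):
    """Translate a world-space point into the pivot-local frame."""
    return tuple(w - p for w, p in zip(world_point, pivot_world))

def mark(world_point, pivot_world, max_dist=0.5):
    """Return the nearest structure within max_dist of the click, if any."""
    local = world_to_local(world_point, pivot_world)
    name, best = min(
        ((n, math.dist(local, c)) for n, c in STRUCTURES.items()),
        key=lambda t: t[1],
    )
    return name if best <= max_dist else None

# A click near the right zygomatic bone, with the pivot at world (10,10,10).
print(mark((12.1, 10.0, 10.4), pivot_world=(10.0, 10.0, 10.0)))
```

In Unity itself this translation-plus-rotation step corresponds to converting the point into the model transform’s local space before the nearest-structure lookup.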

Keywords: anatomy, e-learning, virtual reality, 3D model marking

Procedia PDF Downloads 68
251 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton Exchange Membrane Fuel Cells (PEMFCs) are promising devices for converting hydrogen into electricity. A PEMFC is made of a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performances are targeted in order to ensure the long-term expansion of this technology. The polymers currently used (perfluorinated, such as Nafion®) are unsuitable for the high-temperature range (loss of mechanical properties). To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative, since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to the H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFCs. The idea is based on the improvement of a commercial polymer membrane via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic charges/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performance and durability. The latter was made by self-condensing a SG phase with (3-mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation reactions of the MPTMS were demonstrated by measurements of mass uptake, FTIR spectroscopy (presence of aliphatic C-H), and solid-state 29Si NMR (T2 and T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to the thiol functions) was then proved with H2O2 accelerated aging tests and FC operating tests. 
A 2nd generation made of thiourea-functionalized SG precursors (named HTU and TTU) was developed next. By analysing in depth the morphologies of these different hybrids in direct space (AFM/SEM/TEM) and reciprocal space (SANS/SAXS/WAXS), we highlighted that both the SG phase morphology and its localisation in the host have a huge impact on the observed PEM functional properties. This relationship also depends on the embedded chemical function. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. However, the chemical function used is considered “sacrificial” and cannot react indefinitely with H2O2. Thus, we are now working on a 3rd generation made of both sacrificial and regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive approach to the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 49
250 Managing Human-Wildlife Conflicts Compensation Claims Data Collection and Payments Using a Scheme Administrator

Authors: Eric Mwenda, Shadrack Ngene

Abstract:

Human-wildlife conflicts (HWCs) are a main threat to conservation in Africa, because wildlife’s needs overlap with those of humans. In Kenya, about 70% of wildlife occurs outside protected areas. As a result, wildlife and human ranges overlap, causing HWCs. HWCs in Kenya occur in the drylands adjacent to protected areas. The top five counties with the highest incidence of HWC are Taita Taveta, Narok, Lamu, Kajiado, and Laikipia. The wildlife species commonly responsible for HWCs are elephants, buffaloes, hyenas, hippos, leopards, baboons, monkeys, snakes, and crocodiles. To ensure that individuals affected by conflicts are compensated, Kenya has developed a model for HWC compensation claims data collection and payment. We collected data on HWC from all eight Kenya Wildlife Service (KWS) Conservation Areas from 2009 to 2019. Additional data were collected from stakeholders’ consultative workshops held in the Conservation Areas and from a literature review regarding payment for injuries and insurance schemes already being practiced. This was followed by a description of the claims administration process and calculation of the pricing of the compensation claims. We further developed a digital platform for data capture and processing of all reported conflict cases and payments. Our product recognized four categories of HWC (i.e., human death and injury, property damage, crop destruction, and livestock predation). Compensation for personal bodily injury and human death was based on the Continental Scale of Benefits. We proposed a maximum of Kenya Shillings (KES) 3,000,000 for death. Medical, pharmaceutical, and hospital expenses were capped at a maximum of KES 150,000, and funeral costs at KES 50,000. Pain and suffering were proposed to be paid for 12 months at the rate of KES 13,500 per month. Crop damage compensation was to be based on farm input costs, at a maximum of KES 150,000 per claim. 
Livestock predation leading to death was based on the Tropical Livestock Unit (TLU), equivalent to KES 30,000: cattle (1 TLU = KES 30,000), camel (1.4 TLU = KES 42,000), goat (0.15 TLU = KES 4,500), sheep (0.15 TLU = KES 4,500), and donkey (0.5 TLU = KES 15,000). Property destruction (buildings, outside structures, and harvested crops) was capped at KES 150,000 per claim. We conclude that it is possible to use an administrator to collect data on HWC compensation claims and make payments using technology. The success of the new approach will depend on a piloting program. We recommend that a pilot scheme be initiated for eight months in Taita Taveta, Kajiado, Baringo, Laikipia, Narok, and Meru Counties. This will test the claims administration process as well as harmonize data collection methods. The results of this pilot will be crucial in adjusting the scheme before country-wide roll-out.
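The predation figures above reduce to a lookup and a multiplication; a minimal sketch of the payout arithmetic using the TLU values quoted in the text (the multi-animal claim is an illustrative extension):

```python
# Livestock predation payout per the TLU scale quoted above:
# 1 TLU = KES 30,000, with per-species TLU weights.

TLU_VALUE_KES = 30_000
TLU_PER_ANIMAL = {
    "cattle": 1.0,
    "camel": 1.4,
    "goat": 0.15,
    "sheep": 0.15,
    "donkey": 0.5,
}

def predation_payout(animal, count=1):
    """Payout in KES for `count` animals of the given species."""
    return round(TLU_PER_ANIMAL[animal] * TLU_VALUE_KES * count)

print(predation_payout("camel"))     # 42000
print(predation_payout("goat", 3))   # 13500
```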

Keywords: human-wildlife conflicts, compensation, human death and injury, crop destruction, predation, property destruction

Procedia PDF Downloads 30
249 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a relatively low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness along the direction of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at room temperature, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. 
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from a collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened, with a FWHM of 28 meV (33 nm), and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which is potentially much faster than any inorganic scintillator currently in use.
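As a rough consistency check, the quoted collection efficiency can be reproduced from the quoted light yield and alpha energy. The sketch below is back-of-envelope arithmetic only, assuming the full 5.5 MeV is deposited and the ~240,000 photons/MeV potential yield applies.

```python
# Back-of-envelope check of the reported ~7% collection efficiency.
# Assumes the quoted light yield applies and the alpha deposits all its energy.
LIGHT_YIELD = 240_000        # photons/MeV, quoted potential yield
E_ALPHA = 5.5                # MeV, alpha-particle energy
collected_electrons = 8.4e4  # photoelectrons reported at the integrated PD

emitted_photons = LIGHT_YIELD * E_ALPHA            # ~1.3e6 photons
efficiency = collected_electrons / emitted_photons
print(f"collection efficiency ~{efficiency:.1%}")  # ~6.4%, consistent with "about 7%"
```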

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 235
248 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use, and it has been developed in dependence on abundant, easy-to-transform, polluting fossil fuels. On one hand, growth in population and income levels, combined with increased per capita energy consumption, requires energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have produced this paradox, so it is important to decouple economic growth from environmental degradation. Hence, the search for green energy involving affordable, low-carbon, and renewable sources has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies but also changes in user behaviour, policy, and regulation. Two scenarios are developed: a baseline business-as-usual (BAU) scenario and a green-energy (GE) scenario. The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in the future. India’s population is projected to grow by 23% during 2010–2030, reaching 1.47 billion. The real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Due to the increase in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036.
Carbon emissions are projected to increase 2.5-fold from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum; however, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. Under the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new ‘pathways out of poverty’ by creating more than 10 million jobs, thus raising the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies; the challenges to this path lie in the socio-economic-political domain. However, attaining a green economy requires an appropriate policy package, which will be critical in determining the kinds of investments needed and the incidence of costs and benefits. These results provide a basis for policy discussions on the investments, policies, and incentives to be put in place by national and local governments.
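The "30% saving" and "23% reduction" quoted above look inconsistent at first glance; one reading (an assumption on our part, not stated in the abstract) is that both describe the same 318 MTOE gap measured against different bases. The arithmetic below checks that reading, along with the per-capita emissions figure.

```python
# Scenario arithmetic using the figures quoted above: the 23% and ~30%
# may be the same 318 MTOE gap, expressed against different bases.
bau = 1397   # MTOE, baseline (BAU) primary energy demand in 2030
ge = 1079    # MTOE, green-energy (GE) scenario

reduction_vs_bau = (bau - ge) / bau   # ~23%: demand cut relative to BAU
excess_of_bau = (bau - ge) / ge       # ~29%: BAU's excess relative to GE
print(f"{reduction_vs_bau:.0%} vs {excess_of_bau:.0%}")

per_capita = 3440e6 / 1.47e9          # BAU emissions / projected population
print(f"per-capita emissions: {per_capita:.1f} t/yr")  # ~2.3 t, near the reported 2.2
```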

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 225
247 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning

Authors: Chia Wei Lim, Ning Yan

Abstract:

The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans on software for the design and daily monitoring of chemical plants. As such, a learning gap has been widening as chemical engineering graduates transition from university to industry, since they are not exposed to effective platforms that relate the fundamental concepts taught during lectures to industrial applications. While the success of technology-enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TEL in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configurations for different types of reactions. This presents an opportunity to utilize the readily available, easy-to-use MATLAB App platform to create an educational tool that aids the learning of the fundamental concepts of reactor design and links these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors involving complex design equations for industrial applications without becoming mired in tedious mathematical steps. With the aid of extensive visualization features, the concepts covered during lectures are explicitly utilized, allowing students to understand how these fundamental concepts are applied in the industrial context and equipping them for their careers.
In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing concepts taught, complementing homework assignments, and aiding exam revision, allowing students to identify lapses in understanding and address them. In terms of topics, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey conducted to assess its effectiveness received an overwhelmingly positive response, with 89% of respondents agreeing or strongly agreeing that the app “helped [them] with understanding the unit” and 87% agreeing or strongly agreeing that the app “offers learning flexibility” compared to the conventional lecture-tutorial framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and to demonstrate the industrial applications of the taught design concepts.
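The kind of design calculation the app automates can be illustrated with the standard sizing equations for an isothermal CSTR and PFR. The first-order rate law and all parameter values below are illustrative assumptions, not taken from the app itself.

```python
import math

def cstr_volume(v0, k, X):
    """CSTR sizing for first-order A -> B: V = v0 * X / (k * (1 - X))."""
    return v0 * X / (k * (1 - X))

def pfr_volume(v0, k, X):
    """PFR sizing for first-order A -> B: V = (v0 / k) * ln(1 / (1 - X))."""
    return (v0 / k) * math.log(1 / (1 - X))

# Illustrative values: feed rate (m3/h), rate constant (1/h), target conversion.
v0, k, X = 10.0, 0.5, 0.9
print(f"CSTR: {cstr_volume(v0, k, X):.0f} m3, PFR: {pfr_volume(v0, k, X):.0f} m3")
```

At high conversion the CSTR needs roughly four times the PFR volume here, the sort of configuration comparison the app visualizes for students.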

Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning

Procedia PDF Downloads 73
246 Analysis of Electric Mobility in the European Union: Forecasting 2035

Authors: Domenico Carmelo Mongelli

Abstract:

The context is one of great uncertainty in the 27 countries of the European Union, which has adopted an epochal measure: the elimination of internal combustion engines for road vehicle traction from 2035, with complete replacement by electric vehicles. While there is great concern at various levels about unpreparedness for this change, the scientific community has produced few comprehensive studies of the problem: the literature deals with single aspects of the issue, and typically at the level of individual countries, losing sight of its global implications for the EU as a whole. The aim of the research is to fill these gaps: the technological, plant-engineering, environmental, economic, and employment aspects of this energy transition are addressed and connected to each other, comparing the current situation with the different scenarios that could exist in 2035 and in the following years, up to the total disposal of the internal combustion vehicle fleet, for the entire EU. The methodologies adopted consist of life cycle analysis of electric vehicles and batteries, using specific databases, and dynamic simulation, using specific calculation codes, of the application of the results of this analysis to the entire EU electric vehicle fleet from 2035 onwards.
Energy balances will be drawn up (to evaluate the net energy saved); plant balances (to determine the surplus demand for electrical power and energy and the sizing of new renewable-source plants to cover electricity needs); economic balances (to determine the investment costs of this transition, the savings during the operation phase, and the payback times of the initial investments); environmental balances (determining, under the different energy-mix scenarios foreseen for 2035, the reductions in CO2eq and the environmental effects resulting from the increase in lithium production for batteries); and employment balances (estimating how many jobs will be lost and recovered in the conversion of the automotive industry, related industries, and the refining, distribution, and sale of petroleum products, and how many will be created by technological innovation, the increased demand for electricity, and the construction and management of public charging stations). New algorithms for forecast optimization are developed, tested, and validated. Compared to other published material, the research adds an overall picture of the energy transition, capturing the advantages and disadvantages of its different aspects and evaluating magnitudes and improvement solutions within an organic overall view of the topic. The results identify the strengths and weaknesses of the energy transition, determine possible solutions to mitigate those weaknesses, and simulate and evaluate their effects, establishing the most suitable solutions to make this transition feasible.

Keywords: engines, Europe, mobility, transition

Procedia PDF Downloads 39
245 Preliminary Study of Water-Oil Separation Process in Three-Phase Separators Using Factorial Experimental Designs and Simulation

Authors: Caroline M. B. De Araujo, Helenise A. Do Nascimento, Claudia J. Da S. Cavalcanti, Mauricio A. Da Motta Sobrinho, Maria F. Pimentel

Abstract:

Oil production is often accompanied by the joint production of water and gas. During the journey up to the surface, due to severe conditions of temperature and pressure, these three components normally mix. Thus, three-phase separation must be one of the first steps performed after crude oil extraction, and the water-oil separation is its most complex and important step, since the presence of water in the process line can increase corrosion and hydrate formation. A wide range of methods can be applied to oil-water separation, the most commonly used being flotation, hydrocyclones, and three-phase separator vessels. In this context, the aim of this paper is to study a system consisting of a three-phase separator, evaluating the influence of three variables (temperature, working pressure, and separator type) for two types of oil (light and heavy), by performing two 2³ factorial designs, in order to find the best operating condition. The purpose is to obtain the greatest oil flow rate in the product stream (m³/h) as well as the lowest percentage of water in the oil stream. The simulation of the three-phase separator was performed using Aspen Hysys® 2006 simulation software in stationary mode, and the evaluation of the factorial designs was performed using the Statistica® software. From the analysis of the four normal probability plots of effects obtained, it was observed that two- and three-factor interaction effects showed no statistical significance at 95% confidence, since all the values were very close to zero. Similarly, the main effect of separator type showed no significant statistical influence in any situation. As the volumetric flows of water, oil, and gas were assumed equal in the inlet stream, the separator type may, in fact, not be significant for the proposed system.
Nevertheless, the main effect of temperature was significant for both responses (oil flow rate and mass fraction of water in the oil stream) for both light and heavy oil, and the best operating condition occurs with the temperature at its lowest level (30 °C): the higher the temperature, the more of the lighter oil components pass into the vapor phase and leave with the gas stream. Furthermore, the higher the temperature, the more water vapor forms, which ends up in the lighter (oil) stream, making the separation more difficult. The working pressure effect was significant only for the oil flow rate, and the best operating condition occurs with the pressure at its highest level (9 bar), since a higher operating pressure, in this case, meant a lower pressure drop inside the vessel and therefore less turbulence inside the separator. In conclusion, the best operating condition obtained for the proposed system, over the studied range, occurs when the temperature is at its lowest level and the working pressure at its highest.
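The effect estimates behind such an analysis come from a simple contrast of factor-level means. Below is a minimal sketch of main-effect estimation for a 2³ full factorial; the coded runs and response values are invented for illustration, not the study's data.

```python
import itertools

# Coded runs of a 2^3 full factorial: factors are (temperature, pressure,
# separator type), each at -1/+1. Responses are hypothetical oil flow rates.
runs = list(itertools.product([-1, 1], repeat=3))
responses = [60, 59, 62, 61, 50, 49, 52, 51]

def main_effect(factor):
    """Mean response at the +1 level minus mean response at the -1 level."""
    hi = [y for r, y in zip(runs, responses) if r[factor] == 1]
    lo = [y for r, y in zip(runs, responses) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

for name, i in [("temperature", 0), ("pressure", 1), ("separator type", 2)]:
    print(f"{name:15s} effect: {main_effect(i):+.1f}")
```

In this toy data, temperature dominates and separator type is negligible, mirroring the qualitative pattern reported above.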

Keywords: factorial experimental design, oil production, simulation, three-phase separator

Procedia PDF Downloads 252
244 Intermodal Strategies for Redistribution of Agrifood Products in the EU: The Case of Vegetable Supply Chain from Southeast of Spain

Authors: Juan C. Pérez-Mesa, Emilio Galdeano-Gómez, Jerónimo De Burgos-Jiménez, José F. Bienvenido-Bárcena, José F. Jiménez-Guerrero

Abstract:

The environmental cost and road congestion resulting from product distribution in Europe have led to the creation of various programs and studies seeking to reduce these negative impacts. In this regard, apart from other institutions, the European Commission (EC) has in recent years designed plans promoting a more sustainable transportation model, attempting ultimately to shift traffic from road to sea by using intermodality to rebalance the modal split. This issue proves especially relevant in supply chains from peripheral areas of the continent, where the supply of certain agrifood products is high and the most difficult challenge is managing perishable goods. This study focuses on new approaches that strengthen the modal shift as well as reduce externalities. The problem is analyzed by attempting to promote an intermodal system (truck plus short sea shipping) for transport, taking as a point of reference highly perishable products (vegetables) exported from southeast Spain, the leading supplier to Europe. Methodologically, this paper seeks to contribute to the literature by proposing a different and complementary approach to comparing the intermodal and road-only alternatives. For this purpose, multicriteria decision-making is incorporated into a p-median model (P-M) adapted to the transport of perishables and to a shipping-mode selection problem, which must consider several variables: transit cost (including externalities), time, and frequency (including agile response time). This scheme avoids bias in the decision-making process. The results show that the influence of externalities as drivers of the modal shift is reduced when transit time is introduced as a decision variable. These findings confirm that general strategies, such as those of the EC, based on environmental benefits lose their capacity for implementation when applied to complex circumstances.
In general, the different estimations reveal that, in the case of perishables, intermodality would be a secondary, viable option only for very specific destinations (for example, Hamburg and nearby locations, the area of influence of London, Paris, and the Netherlands). Based on this framework, the general outlook on the subject should be modified. Governments should perhaps promote specific business strategies based on new trends in the supply chain, not only on the reduction of externalities, and find new approaches that strengthen the modal shift. One possible option is to redefine ports, conceptualizing them as digitalized redistribution and coordination centers rather than mere areas of cargo exchange.
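The mode-choice logic can be sketched as a weighted scoring of each alternative on cost (externalities included), transit time, and frequency. All routes, figures, and weights below are invented for illustration; this is a toy in the spirit of the adapted p-median comparison, not the paper's calibrated model.

```python
# Toy mode-choice scoring: lower score wins.
# destination: {mode: (cost_eur, transit_time_h, weekly_frequency)}
ROUTES = {
    "Hamburg": {"road": (2400, 38, 7), "intermodal": (1600, 55, 3)},
    "Munich":  {"road": (2000, 30, 7), "intermodal": (2300, 70, 2)},
}
W_COST, W_TIME, W_FREQ = 1.0, 20.0, -50.0  # weights; frequency reduces the score

def score(cost, time, freq):
    return W_COST * cost + W_TIME * time + W_FREQ * freq

best = {dest: min(modes, key=lambda m: score(*modes[m]))
        for dest, modes in ROUTES.items()}
print(best)
```

With these invented weights, intermodal wins only where its cost edge outweighs its slower, less frequent service, which is how introducing transit time as a decision variable erodes the externality-driven case for the modal shift.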

Keywords: environmental externalities, intermodal transport, perishable food, transit time

Procedia PDF Downloads 76
243 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication

Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons

Abstract:

The recent increase in the application of Additive Manufacturing (AM) to products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in 3D printing. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separate build stations with up to 5 axes of motion replace traditional horizontally sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine ME with digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymer (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop-watching system. Spectroscopic analysis of these 3D-printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system, to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperatures (Tg), stability, and degradation behavior, in order to find the optimum post-printing annealing conditions.
Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, track resistance values remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. The developed AM printer can fabricate fully functional objects, including complex electronics, in a single build. It enables product designers and manufacturers to produce functional, saleable electronic products from a small-format modular platform, making 3D printing better, faster, and stronger.

Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture

Procedia PDF Downloads 146
242 Co-Smoldered Digestate Ash as Additive for Anaerobic Digestion of Berry Fruit Waste: Stability and Enhanced Production Rate

Authors: Arinze Ezieke, Antonio Serrano, William Clarke, Denys Villa-Gomez

Abstract:

Berry cultivation discharges putrescible solid waste of high organic strength, which can contribute to environmental degradation, making it imperative to assess options for its complete management. Anaerobic digestion (AD) could be an ideal option when the target is energy generation; however, given the berry fruit's characteristically high carbohydrate content, the technology could be limited by its high alkalinity requirement, which suggests dosing additives such as buffers and trace-element supplements. Overcoming this limitation in an economically viable way could entail replacing synthetic additives with recycled by-products. Consequently, ash from co-smoldering of high-COD AD digestate and coconut coir could be a promising material for enhancing the AD of berry fruit waste, given its characteristically high pH, alkalinity, and metal concentrations, typical of synthetic additives. Therefore, the aim of the research was to evaluate the stability and process performance of the AD of berry fruit waste (BFW) when ash from co-smoldered digestate and coir is supplemented as an alkalinity and trace-element (TE) source. A series of batch experiments was performed to ascertain the necessity of alkalinity addition and to determine whether the alkalinity and metals in the co-smoldered digestate ash can provide the necessary buffering and TEs for AD of berry fruit waste. Triplicate assays were performed in batch systems at an inoculum-to-substrate (I/S) ratio of 2 (on a VS basis), using serum bottles (160 mL) sealed and placed in a heated room (35±0.5 °C) after creating anaerobic conditions. Controls contained inoculum and substrate only; for the alkalinity and TE assays, respectively, assays additionally contained NaHCO3 at the optimal total alkalinity concentration. Total alkalinity concentration refers to the alkalinity of the inoculum plus the additives.
The alkalinity and TE potential of the ash were evaluated by supplementing ash (22.574 g/kg) at a total alkalinity concentration equivalent to the pre-determined optimum from NaHCO3, and by dosing ash (0.012–7.574 g/kg) at varying concentrations of specific essential TEs (Co, Fe, Ni, Se), respectively. The results showed a stable process under all examined conditions. Supplementation of NaHCO3 at 745 mg/L as CaCO3 resulted in an optimum total alkalinity concentration of 2000 mg/L as CaCO3. An equivalent ash supplementation of 22.574 g/kg achieved this pre-determined optimum, resulting in a stable process with a 92% increase in the methane production rate (323 versus 168 mL CH4/(gVS·d)) but a 36% reduction in the cumulative methane production (103 versus 161 mL CH4/gVS). Adding ash at incremental dosages as a TE source reduced the cumulative methane production, with the highest dosage of 7.574 g/kg having the largest effect (-23.5%); however, the seemingly immediate bioavailability of TEs at this high dosage allowed a 15% increase in the methane production rate. Given the increased methane production rate, the results demonstrate that the ash at high dosages could be an effective supplementary material for buffered or non-buffered berry fruit waste AD systems.
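The reported rate and yield changes follow directly from the quoted methane figures; the sketch below simply verifies that arithmetic, using only values from the abstract.

```python
# Checking the reported changes from the quoted methane figures.
rate_ash, rate_ctrl = 323, 168      # mL CH4 / (gVS*d), ash vs. NaHCO3 control
yield_ash, yield_ctrl = 103, 161    # mL CH4 / gVS, cumulative production

rate_change = rate_ash / rate_ctrl - 1      # ~+92% production rate
yield_change = yield_ash / yield_ctrl - 1   # ~-36% cumulative yield
print(f"rate: {rate_change:+.0%}, yield: {yield_change:+.0%}")
```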

Keywords: anaerobic digestion, alkalinity, co-smoldered digestate ash, trace elements

Procedia PDF Downloads 100
241 The Antioxidant Activity of Grape Chkhaveri and Its Wine Cultivated in West Georgia (Adjaria)

Authors: Maia Kharadze, Indira Djaparidze, Maia Vanidze, Aleko Kalandia

Abstract:

The modern scientific community studies the chemical components and antioxidant activity of different vine varieties according to their breed purity and location. To our knowledge, this kind of research has not yet been conducted in Georgia. The object of our research was the Chkhaveri vine, one of the oldest varieties of the Black Sea basin. We studied Chkhaveri grapes, juice, and wine (half-dry, rose-colored, produced with European technologies) from different altitudes, together with their technical markers, the qualitative and quantitative composition of their biologically active compounds, and their antioxidant activity. Total phenols were determined using the Folin-Ciocalteu reagent; flavonoids, catechins, and anthocyanins by spectral methods; and antioxidant activity by the DPPH method. Several compounds were identified using HPLC-UV-Vis and UPLC-MS methods. Six samples of the Chkhaveri variety, from altitudes of 5, 300, 360, 380, 400, and 780 meters, were taken and analyzed. The sample from 360 m is distinguished by its cluster mass (383.6 grams) and high sugar content (20.1%); the sample from 5 m by its high acidity (0.95%). Unlike in other grape varieties, this concentration of sugar and relatively low level of citric acid ultimately gives Chkhaveri wine its individuality. Biologically active compounds of Chkhaveri were studied in 2014, 2015, and 2016. In the 2016 fruit samples, total phenols vary from 976.7 to 1767.0 mg/kg, anthocyanins from 721.2 to 1630.2 mg/kg, and flavonoids from 300.6 to 825.5 mg/kg. A relatively high amount of anthocyanins was found in the Chkhaveri at 780 m altitude (1630.2 mg/kg); accordingly, its total phenols (1767.9 mg/kg) and flavonoids (825.5 mg/kg) are also high. These values are lowest in samples gathered 5 m above sea level: anthocyanins 721.2 mg/kg, total phenols 976.7 mg/kg, and flavonoids 300.6 mg/kg.
The highest amounts of bioactive compounds are found in the Chkhaveri samples from high altitudes: as altitude rises the environment becomes harsher, and the plant develops a stronger immune system using phenolic compounds. The technology used for wine production also plays a large role in the composition of the final product, so optimal techniques of maceration and ageing were worked out. When Chkhaveri is pressed, there are no anthocyanins in the juice; the anthocyanin content rises during maceration. After fermentation on the dregs, the anthocyanin content reaches 55% (521.3 mg/L), total phenols 80% (1057.7 mg/L), and flavonoids 23.5 mg/L. The antioxidant activity of the samples was also determined, using the 50% inhibition effect. All samples have high antioxidant activity; for instance, in samples from 780 m above sea level the antioxidant activity was 53.5%, relatively high compared to the 30.5% of the sample from 5 m. Thus, there is a correlation between anthocyanin content and antioxidant activity. This project was fulfilled with the financial support of the Georgia National Science Foundation (Grant AP/96/13, Grant 216816); any idea in this publication is the author's and may not represent the opinion of the Georgia National Science Foundation.
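The DPPH result is a percent inhibition computed from absorbance readings. The formula below is the standard one for the assay; the absorbance values are hypothetical, chosen only to reproduce the 53.5% figure quoted above.

```python
def dpph_inhibition(a_control, a_sample):
    """Percent inhibition = (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100

# Hypothetical absorbances of the DPPH radical solution; only the ratio matters.
print(f"{dpph_inhibition(0.800, 0.372):.1f}% inhibition")  # 53.5%
```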

Keywords: antioxidants, bioactive content, wine, chkhaveri

Procedia PDF Downloads 204
240 Liquid Waste Management in Cluster Development

Authors: Abheyjit Singh, Kulwant Singh

Abstract:

The water table in the earth's crust is gradually depleting, and it is necessary to conserve water and reduce its scarcity. This can be achieved through rainwater harvesting, water recycling, judicious consumption and utilization of water, and the adoption of appropriate treatment measures. Domestic waste is generated in residential areas, commercial settings, and institutions. Waste in general is unwanted and undesirable, yet an inevitable and inherent product of social, economic, and cultural life. In a cluster, a need-based system is formed in which the project is designed for systematic analysis, collection of sewage from the cluster, treating it, and then recycling it for multifarious uses. Liquid waste may consist of sanitary sewage/domestic waste, industrial waste, storm waste, or mixed waste. Sewage contains both suspended and dissolved particles, and the total amount of organic material is related to the strength of the sewage. Untreated domestic sanitary sewage has a BOD (biochemical oxygen demand) of about 200 mg/L and TSS (total suspended solids) of about 240 mg/L; industrial waste may have BOD and TSS values much higher than those of sanitary sewage. Another type of wastewater impurity is plant nutrients, especially compounds of nitrogen (N) and phosphorus (P): raw sanitary sewage contains approximately 35 mg/L of nitrogen and 10 mg/L of phosphorus. Finally, the pathogen load of the waste is expected to be proportional to the concentration of fecal coliform bacteria; the coliform concentration in raw sanitary sewage is roughly 1 billion per liter. The choice of sewage disposal technique universally depends on the following conditions: the nature of the soil formation, the availability of land, the quantity of sewage to be disposed of, the degree of treatment required, and the relative cost of the disposal technique.
The adopted Thappar Model (India) has the following design parameters, consisting of a screen chamber, a digestion tank, a skimming tank, a stabilization tank, an oxidation pond, and a water storage pond. The screen chamber removes plastic and other solids. The digestion tank is designed as an anaerobic tank with a retention period of 8 hours. The skimming tank has an outlet kept 1 meter below the surface, maintaining anaerobic conditions at the bottom and also helping remove organic solids. The stabilization tank is designed as a primary settling tank. The oxidation pond is a facultative pond with a depth of 1.5 meters, and the storage pond is sized as per requirement. The cost of the Thappar model is Rs. 185 lakh per 3,000 to 4,000 population, and the area required is 1.5 acres. The complete structure is lined as per requirement, and annual maintenance is Rs. 5 lakh per year. The project is useful for water conservation and for reusing treated sewage water for irrigation; it decreases BOD, and there will be no further damage to community assets or economic loss to the farming community from inundation. The result is a healthy and clean environment in the community.
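The digestion tank's size follows from the stated 8-hour retention period: volume equals design flow times hydraulic retention time. The per-capita sewage flow below is an assumption for illustration, not a figure from the Thappar model specification.

```python
# Sizing sketch for the digestion tank: volume = design flow x retention time.
population = 4000            # upper end of the stated design population
per_capita_flow_l = 120      # L/person/day, assumed sewage generation rate
retention_h = 8              # hours, as stated for the digestion tank

daily_flow_m3 = population * per_capita_flow_l / 1000   # m3/day
tank_volume_m3 = daily_flow_m3 * retention_h / 24
print(f"design flow {daily_flow_m3:.0f} m3/d -> tank volume {tank_volume_m3:.0f} m3")
```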

Keywords: collection, treatment, utilization, economic

Procedia PDF Downloads 56
239 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve high quality in assembled car body structures, tolerancing is used to ensure the geometric accuracy of the individual car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value; the second is tolerance synthesis, which determines the allocation of individual tolerances needed to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacture of forming tools as operating equipment, or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured and the results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data; they offer potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and determine possible correlations. A database suitable for developing machine learning models is then created. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated, and the best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the part overall, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
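One simple way historical deviations could drive a local tolerance range is a process-capability-style band around the observed scatter at a measurement point. The sketch below is illustrative only: the mean-plus-minus-k-sigma rule and the deviation data are our assumptions, not the paper's models.

```python
import statistics

def tolerance_band(deviations, k=3.0):
    """Suggest a (lower, upper) tolerance band as mean +/- k * std deviation."""
    mu = statistics.mean(deviations)
    sigma = statistics.stdev(deviations)
    return (mu - k * sigma, mu + k * sigma)

# Hypothetical historical deviations (mm) at one measurement point.
history = [0.02, -0.05, 0.01, 0.04, -0.03, 0.00, 0.03, -0.02]
lo, hi = tolerance_band(history)
print(f"suggested local tolerance: [{lo:+.3f}, {hi:+.3f}] mm")
```

A learned model would replace this fixed rule with point-specific predictions, but the band illustrates how stored deviations translate into a data-driven tolerance range.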

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 92