Search results for: models.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2526

66 Scope, Relevance and Sustainability of Decentralized Renewable Energy Systems in Developing Economies: Imperatives from Indian Case Studies

Authors: Harshit Vallecha, Prabha Bhola

Abstract:

‘Energy for all’ has been a global concern for many years. Despite numerous technological advancements and innovations, a significant number of people around the world still live without access to electricity. India, an emerging economy, tops the list of nations with the largest number of residents living off the grid, and has therefore drawn global attention in recent years to provide clean and sustainable energy access to all of its residents. It is evident from developed economies that centralized planning and electrification alone are not sufficient to ensure energy security. The implementation of off-grid, consumer-driven energy models such as Decentralized Renewable Energy (DRE) systems has played a significant role in meeting national energy demand in developed nations. Cases of DRE systems have been reported in developing countries like India in recent years. This paper attempts to profile the status of DRE projects in the Indian context, along with their scope and relevance for ensuring universal electrification. Diversified cases of DRE projects, particularly solar, biomass and micro hydro, are identified in different Indian states. Critical factors affecting the sustainability of DRE projects are extracted, along with their interlinkages, in the context of the developers, beneficiaries and promoters involved in such projects. Socio-techno-economic indicators are identified through similar cases in the context of DRE projects. Exploratory factor analysis is performed to evaluate the critical sustainability factors, followed by regression analysis to establish the relationship between the dependent and independent factors. The resulting EFA-regression model provides a basis for developing a sustainability and replicability framework for broader coverage of DRE projects in developing nations, in order to attain the goal of universal electrification with the least carbon emissions.
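As a rough illustration of the EFA-followed-by-regression workflow described above, the sketch below reduces a set of indicator responses to a few latent factors and regresses a sustainability score on them. The file name, column names and number of factors are hypothetical placeholders, not the authors' actual data or model.

```python
# Minimal sketch of the EFA-then-regression step; all names below are assumed for illustration.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# survey.csv is assumed to hold socio-techno-economic indicator responses per DRE project,
# plus a dependent "sustainability" rating.
df = pd.read_csv("survey.csv")
indicators = df.drop(columns=["sustainability"])

# Exploratory factor analysis: reduce the indicators to a few latent factors.
efa = FactorAnalysis(n_components=4, random_state=0)
factor_scores = efa.fit_transform(indicators)

# Regression of the sustainability measure on the extracted factor scores.
reg = LinearRegression().fit(factor_scores, df["sustainability"])
print("factor loadings:\n", efa.components_)
print("coefficients:", reg.coef_, "R^2:", reg.score(factor_scores, df["sustainability"]))
```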

Keywords: Climate change, decentralized generation, electricity access, renewable energy.

65 Comparison of Multivariate Adaptive Regression Splines and Random Forest Regression in Predicting Forced Expiratory Volume in One Second

Authors: P. V. Pramila, V. Mahesh

Abstract:

Pulmonary Function Tests are important non-invasive diagnostic tests for assessing respiratory impairments and provide quantifiable measures of lung function. Spirometry is the most frequently used measure of lung function and plays an essential role in the diagnosis and management of pulmonary diseases. However, the test requires considerable patient effort and cooperation, which is markedly related to the age of the patient, resulting in incomplete data sets. This paper presents nonlinear models built using Multivariate Adaptive Regression Splines (MARS) and Random Forest regression to predict the missing spirometric features. Random Forest based feature selection is used to enhance both the generalization capability and the interpretability of the model. In the present study, flow-volume data are recorded for N = 198 subjects. The ranked order of the feature importance index calculated by the Random Forest model shows that the spirometric features FVC, FEF25, PEF, FEF25-75, FEF50 and the demographic parameter height are the important descriptors. A comparison of the performance of both models shows that the prediction ability of MARS with the top two ranked features, namely FVC and FEF25, is higher, yielding a model fit of R2 = 0.96 and R2 = 0.99 for normal and abnormal subjects, respectively. The Root Mean Square Error analysis of the RF model and the MARS model also shows that the latter is capable of predicting the missing values of FEV1 with notably lower error values of 0.0191 (normal subjects) and 0.0106 (abnormal subjects) with the aforementioned input features. It is concluded that combining feature selection with a prediction model provides a minimum subset of predominant features to train the model, as well as better prediction performance. This analysis can provide clinicians with an intelligent decision support system for medical diagnosis and the improvement of clinical care.
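The sketch below illustrates the feature-ranking step described above using Random Forest importances and a refit on the top two ranked features; the data file and column names are hypothetical, and the MARS model could be fitted analogously (e.g. with the py-earth package) but is omitted here.

```python
# Illustrative Random Forest feature ranking for FEV1 prediction; dataset and split are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

df = pd.read_csv("spirometry.csv")          # hypothetical file with the features named below
features = ["FVC", "FEF25", "PEF", "FEF25_75", "FEF50", "height"]
X_train, X_test, y_train, y_test = train_test_split(df[features], df["FEV1"], random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
ranking = sorted(zip(rf.feature_importances_, features), reverse=True)
print("feature ranking:", ranking)

# Refit using only the top two ranked features (the abstract reports FVC and FEF25).
top2 = [name for _, name in ranking[:2]]
rf_top2 = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train[top2], y_train)
rmse = mean_squared_error(y_test, rf_top2.predict(X_test[top2])) ** 0.5
print("RMSE with", top2, "=", round(rmse, 4))
```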

Keywords: FEV1, Multivariate Adaptive Regression Splines, Pulmonary Function Test, Random Forest.

64 Conceptual Design of the TransAtlantic as a Research Platform for the Development of “Green” Aircraft Technologies

Authors: Victor Maldonado

Abstract:

Recent concerns over the growing impact of aviation on climate change have prompted the emergence of a field referred to as Sustainable or “Green” Aviation, dedicated to mitigating the harmful impact of aviation-related CO2 emissions and noise pollution on the environment. In the current paper, a unique “green” business jet aircraft called the TransAtlantic was designed (using analytical formulations common in conceptual design) in order to show the feasibility of transatlantic passenger air travel with an aircraft of less than 10,000 pounds takeoff weight. Such an advance in fuel efficiency will require the development and integration of advanced and emerging aerospace technologies. The TransAtlantic design is intended to serve as a research platform for the development of technologies such as active flow control. Recent advances in the field of active flow control and how this technology can be integrated on a sub-scale flight demonstrator are discussed in this paper. Flow control is a technique to modify the behavior of coherent structures in wall-bounded flows (over aerodynamic surfaces such as wings and turbine nozzles), resulting in improved aerodynamic cruise and flight control efficiency. One of the key challenges to application in manned aircraft is the development of a robust high-momentum actuator that can penetrate the boundary layer flowing over aerodynamic surfaces. These deficiencies may be overcome in the current development and testing of a novel electromagnetic synthetic jet actuator, which replaces piezoelectric materials as the driving diaphragm. One of the overarching goals of the TransAtlantic research platform is to foster national and international collaboration to demonstrate (in numerical and experimental models) reduced CO2/noise pollution via the development and integration of technologies and methodologies in design optimization, fluid dynamics, structures/composites, propulsion, and controls.

Keywords: Aircraft Design, Sustainable “Green” Aviation, Active Flow Control, Aerodynamics.

63 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing the ‘BIM based Energy Management Support System’ (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments’ facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time ‘standard energy consumption’ data against which ‘actual energy consumption’ data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) view a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances are turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
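A minimal sketch of the anomaly check described in component (c) is given below: a room is flagged when appliances draw power while the occupancy sensors report the room as empty, or when the logged consumption deviates strongly from the simulated ‘standard’ profile. The record structure and the thresholds are assumptions for illustration, not the system's actual rules.

```python
# Toy rule-based anomaly check over one room record; field names and thresholds are assumed.
def check_room(record):
    """record: dict with 'occupied' (bool), 'appliance_power_w' (float),
    'actual_kwh' (float) and 'standard_kwh' (float, from the eQuest-style simulation)."""
    warnings = []
    if not record["occupied"] and record["appliance_power_w"] > 10.0:
        warnings.append("appliances running in an unoccupied room")
    if record["standard_kwh"] > 0 and record["actual_kwh"] > 1.2 * record["standard_kwh"]:
        warnings.append("actual consumption exceeds the standard profile by more than 20%")
    return warnings

print(check_room({"occupied": False, "appliance_power_w": 150.0,
                  "actual_kwh": 3.4, "standard_kwh": 2.1}))
```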

Keywords: Sensor, electricity sub-meters, database, energy anomaly detection.

62 A Six-Year Case Study Evaluating the Stakeholders’ Requirements and Satisfaction in Higher Educational Establishments

Authors: Ioannis I. Angeli

Abstract:

Worldwide, and mainly in the European Union, many standards, regulations, models and systems exist for evaluating and identifying the stakeholders’ requirements of individual universities and of higher education (HE) in general. All of these systems aim to measure or evaluate the universities’ quality assurance systems and the services offered to the recipients of HE, mainly the students. Numerous surveys have been conducted in the past, either by individual universities or by organized bodies, to identify students’ satisfaction or to evaluate to what extent these requirements are fulfilled. In this paper, the main results of an ongoing six-year joint research project are presented very briefly. This research deals with an in-depth investigation of students’ satisfaction and students’ personal requirements, a gap analysis between these two parameters, and a comparison of different universities. Through this research, an attempt is made to address four very important questions in higher education establishments (HEE): (1) Are there any common requirements, parameters, good practices or questions that apply to a large number of universities and that will assure that students’ requirements are fulfilled? (2) To what extent do the individual programs of HEE fulfil the requirements of the stakeholders? (3) Are there any similarities in specific programs among European HEE? (4) To what extent is the knowledge acquired in a specific course program utilized in a specific country? For the execution of the research, internationally accepted questionnaires were used to evaluate to what extent the students’ requirements and satisfaction were fulfilled in 2012 and five years later (2017). Samples of students and/or universities were taken from many European universities. The questionnaires used, the sampling method and methodology adopted, as well as the comparison tables and results, will be very valuable to any university that is willing to follow the same route and methodology or to compare the results with its own HEE. Apart from the unique methodology, valuable results are demonstrated from the four case studies. There is a great difference between what students expect or consider important and what they actually receive from their universities (on all parameters they receive less). When there is a crisis or budget cut in HEE, there is a direct impact on students. There are many differences in the subjects taught in European universities.

Keywords: Quality in higher education, students’ requirements, education standards, student’s survey, stakeholder’s requirements, Mechanical Engineering courses.

61 Polymer Mediated Interaction between Grafted Nanosheets

Authors: Supriya Gupta, Paresh Chokshi

Abstract:

Polymer-particle interactions can be effectively utilized to produce composites that possess physicochemical properties superior to those of the neat polymer. The incorporation of fillers with dimensions comparable to the polymer chain size produces composites with extraordinary properties owing to their very high surface-to-volume ratio. The dispersion of nanoparticles is achieved by inducing steric repulsion, realized by grafting the particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With the focus on the incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer-mediated interparticle potential for two nanosheets grafted with polymeric chains. Self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field under equilibrium. Unlike continuum models, SCFT is built from a microscopic description, taking into account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in a matrix of polymeric chains of dissimilar chemistry to that of the grafted chains. The interaction potential is repulsive at short separation and shows depletion attraction at moderate separations, induced by high grafting density. It is found that the strength of the attraction well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers in which one of the blocks is chemically identical to the free polymeric chains. The interplay between the enthalpic interaction between the dissimilar species and the entropy of the free chains gives rise to rich behavior in the interaction potential curve, obtained for two separate cases of the free chains being chemically similar to either the grafted block or the free block of the grafted diblock chains.

Keywords: Clay nanosheets, polymer brush, polymer nanocomposites, self-consistent field theory.

60 An AI-Generated Semantic Communication Platform in Human-Computer Interaction Course

Authors: Yi Yang, Jiasong Sun

Abstract:

Almost every aspect of our daily lives is now intertwined with some degree of Human-Computer Interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology and more. The HCI course in the Department of Electronics at Tsinghua University, known as the Media and Cognition course, is constantly updated to reflect the most advanced technological developments, such as virtual reality, augmented reality and artificial intelligence-based interaction. For more than a decade, this course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. The advancements in AI-generated technology, which has gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of the HCI course incorporates a semantic communication platform based on AI generative techniques. We explored a student-centered model and proposed an interest-based teaching method. Students are no longer just recipients of knowledge, but become active participants in a learning process driven by personal interests, thereby encouraging them to take responsibility for their own education. One of the latest results of this teaching approach in the Media and Cognition course is a student proposal to develop a semantic communication platform rooted in artificial intelligence generative technologies. The platform addresses a key challenge in communications technology: the ability to preserve visual signals. The interest-based approach emphasizes personal curiosity and active participation, and the proposal of an AI-generated semantic communication platform is an example and successful result of how students can exert greater creativity when they have the power to control their own learning.

Keywords: Human-computer interaction, media and cognition course, semantic communication, retain ability, prompts.

59 Client Satisfaction: Does Private or Public Health Sector Make a Difference? Results from Secondary Data Analysis in Sindh, Pakistan

Authors: Wajiha Javed, Arsalan Jabbar, Nelofer Mehboob, Muhammad Tafseer, Zahid Memon

Abstract:

Introduction: Researchers globally have strived to explore the diverse factors that augment the continuation and uptake of family planning methods. Clients’ satisfaction is one of the core determinants facilitating the continuation of family planning methods. There is a major debate yet scanty evidence to contrast the public and private sectors with respect to client satisfaction. The objective of this study is to compare the quality of care provided by the public and private sectors of Pakistan through a client satisfaction lens. Methods: We used the Pakistan Demographic and Health Survey 2012-13 dataset on 3133 women. Ten different multivariate models were constructed to explore the relationship between client satisfaction and the dependent outcome after adjusting for all known confounding factors, and results are presented as OR and AOR (95% CI). Results: Multivariate analyses showed that clients were less satisfied with contraceptive provision from the private sector as compared to the public sector (AOR 0.92, 95% CI 0.63-1.68), even though the result was not statistically significant. Clients were more satisfied with the private sector as compared to the public sector with respect to other determinants of quality of care: follow-up care (AOR 3.29, 95% CI 1.95-5.55), infection prevention (AOR 2.41, 95% CI 1.60-3.62), counseling services (AOR 2.01, 95% CI 1.27-3.18), timely treatment (AOR 3.37, 95% CI 2.20-5.15), attitude of staff (AOR 2.23, 95% CI 1.50-3.33), punctuality of staff (AOR 2.28, 95% CI 1.92-4.13), timely referral (AOR 2.34, 95% CI 1.63-3.35), staff cooperation (AOR 1.75, 95% CI 1.22-2.51) and handling of complications (AOR 2.27, 95% CI 1.56-3.29). Discussion: The public sector has successfully attained substantial satisfaction levels with respect to the provision of contraceptives, which contrasts with previous literature from multi-country studies. Our study is, however, in concordance with a study from Tanzania where the public sector was more likely to offer family planning services to clients as compared to private facilities. Conclusion: In the majority of developing countries, the public sector is more involved in FP service provision; however, in Pakistan clients’ satisfaction with the private sector is higher, which opens the door for public-private partnerships and collaboration in the near future.
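For readers unfamiliar with how the reported AORs and 95% CIs are typically produced, the sketch below shows one common way to obtain them with logistic regression; the file name, variables and confounders are placeholders, not the authors' actual models.

```python
# Illustrative logistic regression producing adjusted odds ratios with 95% CIs;
# all column names and the data file are assumed for the example.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pdhs_2012_13.csv")                             # hypothetical extract
y = df["satisfied"]                                              # 1 = satisfied, 0 = not
X = sm.add_constant(df[["private_sector", "age", "education", "wealth_index"]])

fit = sm.Logit(y, X).fit(disp=False)
aor = np.exp(fit.params)                                         # adjusted odds ratios
ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```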

Keywords: Client satisfaction, Family Planning, Public private partnership, Quality of care

58 On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining

Authors: Alexandru Epureanu, Virgil Teodor

Abstract:

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that keeps the uncut chip area constant and, consequently, keeps the main cutting force constant. There are two main ways to optimize the feedrate. The first consists of monitoring the cutting force, which requires complex equipment for force measurement, and then setting the feedrate according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant for the given cutting conditions. In this paper, a new approach is proposed that uses an extended database to replace the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool, with the feed value set according to the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that allows the piece target profile to be obtained. The graphic representation environment models the tool and blank regions, and the tool model is then positioned relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area and the contact length between tool and blank are calculated. Each of these parameters is compared with its admissible value and, according to the result, the feed value is established. This approach has the following advantages: for complex cutting processes, prediction of the cutting force is possible; the real cutting profile, which deviates from the theoretical profile, is considered; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow feedrate scheduling such that the uncut chip area, and as a result the cutting force, remains constant, which allows the machine tool to be used more efficiently and reduces the machining time.
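The per-position feed selection loop described above can be summarized by the toy sketch below: for each programmed tool position the uncut chip area, tool-blank contact length and geometrical roughness are computed, compared with admissible values, and the feed is reduced until all three constraints are satisfied. The geometry models and limit values are crude placeholders; in the paper these quantities come from the virtual machining environment.

```python
# Toy feed scheduling loop; the geometry functions below are simplistic stand-ins.
def uncut_chip_area(depth, feed):                 # placeholder: chip area grows with feed
    return depth * feed

def contact_length(depth, feed):                  # placeholder contact-length model
    return depth + 0.5 * feed

def geometric_roughness(feed, tool_radius=4.0):   # classical f^2 / (8 r) approximation
    return feed ** 2 / (8.0 * tool_radius)

def schedule_feed(depths, limits, f_max=0.5, f_min=0.02, step=0.02):
    feeds = []
    for depth in depths:                          # one entry per programmed tool position
        f = f_max
        while f > f_min and (uncut_chip_area(depth, f) > limits["chip_area"]
                             or contact_length(depth, f) > limits["contact"]
                             or geometric_roughness(f) > limits["roughness"]):
            f -= step                             # reduce feed and re-check the constraints
        feeds.append(round(max(f, f_min), 3))
    return feeds

limits = {"chip_area": 0.4, "contact": 2.2, "roughness": 0.002}
print(schedule_feed([1.0, 1.5, 2.0, 1.2], limits))
```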

Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.

57 Development and Analysis of a Machine to Equally Apply Mineral Fertilizer to Soil on Slopes

Authors: Qurbanov Huseyn Nuraddin

Abstract:

A reliable food supply for the population of a country is one of the main directions of the state's economic policy. Grain growing, which is the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizer under the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from providing the country with the necessary quality of cereals. Experience in the operation of modern technical means has shown that, at present, there is a need to apply an equal amount of fertilizer under the soil on slopes while fully meeting the agro-technical requirements. No fundamental changes have been made to the industrial machines that apply fertilizer under the soil, and fertilizer continues to be applied unequally under the soil on slopes. This technological process leads to the destruction of new seedlings and reduced productivity, because plants sown in the fall cannot tolerate frost during the winter. In specific climatic conditions, there is an optimal fertilization rate for each agricultural product. The manner in which fertilizer is applied to the soil is one of the conditions that increase its efficiency in the field. As can be seen, it is very important to develop a new technical solution for applying equal amounts of fertilizer and plowing on slopes, improving the technological and design parameters while taking into account the physical and mechanical properties of fertilizers. Taking the above-mentioned issues into account, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in cereal cultivation, providing a smooth, equal amount of mineral fertilizer under the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizer in the field have been developed. Diagrams and graphs of the distribution over the eight partitions of the smooth spreader were constructed for various slope inclination angles. The percentage of equal distribution in the field and the productivity were determined by practical and theoretical analysis.

Keywords: Combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer.

56 Understanding Help Seeking among Black Women with Clinically Significant Posttraumatic Stress Symptoms

Authors: Glenda Wrenn, Juliet Muzere, Meldra Hall, Allyson Belton, Kisha Holden, Chanita Hughes-Halbert, Martha Kent, Bekh Bradley

Abstract:

Understanding the help-seeking decision-making process and the experiences of health disparity populations with posttraumatic stress disorder (PTSD) is central to the development of trauma-informed, culturally centered, and patient-focused services. Yet, little is known about the decision-making process among adult Black women who are non-treatment seekers, as they are, by definition, not engaged in services. Methods: Audiotaped interviews were conducted with 30 African American adult women with clinically significant PTSD symptoms who were engaged in primary care, but not in treatment for PTSD despite symptom burden. A qualitative interview guide was used to elucidate key themes. Independent coding of themes mapped to theory and identification of emergent themes were conducted using qualitative methods. An existing quantitative dataset was analyzed to contextualize responses and provide a descriptive summary of the sample. Results: Emergent themes revealed active mental avoidance, the intermittent nature of distress, ambivalence, and self-identified resilience as factors undermining help-seeking decisions. Participants were stuck within the ‘recognition of illness’ phase of help seeking and retained a sense of “it is my decision” despite endorsing significant negative social and environmental influences. Participants distinguished ‘help acceptance’ from ‘help seeking’, with greater willingness to accept help and importance placed on being of help to others. Conclusions: Elucidation of the decision-making process from the perspective of non-treatment seekers has implications for outreach and treatment within models of integrated and specialty systems of care. The salience of responses to trauma symptoms and the stagnation in the recognition phase of help seeking are findings relevant to integrated care service design and community engagement.

Keywords: Culture, help-seeking, integrated care, PTSD.

55 Stochastic Simulation of Reaction-Diffusion Systems

Authors: Paola Lecca, Lorenzo Dematte

Abstract:

Reaction-diffusion systems are mathematical models that describe how the concentration of one or more substances distributed in space changes under the influence of local chemical reactions, in which the substances are converted into each other, and diffusion, which causes the substances to spread out in space. The classical representation of a reaction-diffusion system is given by semi-linear parabolic partial differential equations, whose general form is ∂tX(x, t) = DΔX(x, t), where X(x, t) is the state vector, D is the matrix of the diffusion coefficients and Δ is the Laplace operator. If the solutes move in a homogeneous system in thermal equilibrium, the diffusion coefficients are constants that do not depend on the local concentrations of solvent and solutes or on the local temperature of the medium. In this paper, a new stochastic reaction-diffusion model is presented in which the diffusion coefficients are functions of the local concentration, viscosity and frictional forces of solvent and solute. Such a model provides a more realistic description of the molecular kinetics in non-homogeneous and highly structured media such as the intra- and inter-cellular spaces. The movement of a molecule A from a region i to a region j of the space is described as a first-order reaction Ai → Aj with rate constant k, where k depends on the diffusion coefficient. Representing the diffusional motion as a chemical reaction allows a reaction-diffusion system to be assimilated to a pure reaction system and simulated with Gillespie-inspired stochastic simulation algorithms. The stochastic time evolution of the system is given by the occurrence of diffusion events and chemical reaction events. At each time step, an event (reaction or diffusion) is selected from a probability distribution of waiting times determined by the specific rates of the reaction and diffusion events. Redi is the software tool developed to implement this model of reaction-diffusion kinetics and dynamics. It is free software that can be downloaded from http://www.cosbi.eu. To demonstrate the validity of the new reaction-diffusion model, simulation results for chaperone-assisted protein folding in the cytoplasm obtained with Redi are reported. This case study is drawing renewed attention from the scientific community due to current interest in protein aggregation as a potential cause of neurodegenerative diseases.
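A minimal Gillespie-style sketch of this idea is given below: diffusion between two compartments is treated as a first-order reaction A1 → A2 (and back) alongside a chemical decay reaction, and events are drawn from exponentially distributed waiting times. The rate constants and initial counts are arbitrary illustration values, not those used in Redi.

```python
# Two-compartment stochastic simulation where diffusion is modelled as a first-order reaction.
import random

def ssa_two_compartments(a1=100, a2=0, k_diff=0.5, k_decay=0.05, t_end=20.0):
    t = 0.0
    history = [(t, a1, a2)]
    while t < t_end:
        # Propensities: A1 -> A2 and A2 -> A1 (diffusion events), A1 -> 0 (reaction event)
        props = [k_diff * a1, k_diff * a2, k_decay * a1]
        total = sum(props)
        if total == 0:
            break
        t += random.expovariate(total)           # exponentially distributed waiting time
        r = random.uniform(0.0, total)           # select which event fires
        if r < props[0]:
            a1, a2 = a1 - 1, a2 + 1
        elif r < props[0] + props[1]:
            a1, a2 = a1 + 1, a2 - 1
        else:
            a1 -= 1
        history.append((t, a1, a2))
    return history

print(ssa_two_compartments()[-1])                # final (time, A in region 1, A in region 2)
```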

Keywords: Reaction-diffusion systems, Fick's law, stochastic simulation algorithm.

54 Expert Witness Testimony in the Battered Woman Syndrome

Authors: Ana Pauna

Abstract:

Expert witness testimony (EWT) is information given by an expert specialized in the field (here, the battered woman syndrome, BWS) to the jury in order to help the court better understand the case. EWT does not always work in favor of battered women. Two main decision-making models are discussed in the paper: the mathematical model and the explanation model. In the first model, the jurors calculate “the importance and strength of each piece of evidence”, whereas in the second model they try to integrate the EWT with the evidence and create a coherent story that would describe the crime. The jury often misunderstands and misjudges battered women for their action (or, in this case, inaction). They assume that these women are masochists who accept being mistreated, since if a man abuses a woman constantly, she could and should divorce him or simply leave at any time. Research in the domain has found that expert witness testimony indeed has a powerful influence on jurors’ decisions; thus, its quality needs to be further explored. One of the important factors that needs further study is a bias called the dispositionist worldview (a belief that what happens to people is of their own doing). This kind of attributional bias represents a tendency to think that a person’s behavior is due to his or her disposition, even when the behavior is clearly attributable to the situation. Hypothesis: The hypothesis of this paper is that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. The juror would therefore commit the fundamental attribution error and believe that the victim’s disposition caused the rape and not the situation she was in. Methods: The subjects in the study were 500 randomly sampled undergraduate students from McGill, Concordia, Université de Montréal and UQAM. Dispositionist worldview was scored on the Dispositionist Worldview Questionnaire. After reading the rape scenarios, each student was asked to play the role of a juror and answer a questionnaire consisting of 7 questions about the responsibility, causality and fault of the victim. Results: The results confirm the hypothesis that if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault. By doing so, the juror commits the fundamental attribution error, believing that the victim’s disposition, and not the constraints or opportunities of the situation, caused the rape scenario.

Keywords: Bias, expert witness testimony, attribution error, jury, rape myth.

53 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. This is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption in both material and energy. Recent research activities have achieved higher resource efficiency through radical mass reduction based on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. This paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. It starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art of pose-dependent dynamic machine tool models and the modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper concludes with an outlook on an approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: Dynamic behavior, lightweight, machine tool, pose-dependency.

52 Formulation and in vitro Evaluation of Ondansetron Hydrochloride Matrix Transdermal Systems Using Ethyl Cellulose/Polyvinyl Pyrrolidone Polymer Blends

Authors: Rajan Rajabalaya, Li-Qun Tor, Sheba David

Abstract:

Transdermal delivery of ondansetron hydrochloride (OdHCl) can prevent the problems encountered with oral ondansetron. In previously conducted studies, the effects of the amount of polyvinyl pyrrolidone, permeation enhancer and casting solvent on the physicochemical properties of OdHCl were investigated. It is feasible to develop an ondansetron transdermal patch using ethyl cellulose and polyvinyl pyrrolidone with dibutyl phthalate as plasticizer; however, the desired flux was not achieved. The primary aim of this study is to use dimethyl succinate (DMS) and propylene glycol, which were not incorporated in previous studies, to determine their effect on the physicochemical properties of an OdHCl transdermal patch using ethyl cellulose and polyvinyl pyrrolidone. This study also investigates the effect of permeation enhancers (eugenol and phosphatidylcholine) on the release of OdHCl. The results showed that propylene glycol is a more suitable plasticizer than DMS in the fabrication of an OdHCl transdermal patch using ethyl cellulose and polyvinyl pyrrolidone as polymers. The propylene glycol-containing patch has optimum drug content, thickness, moisture content and water absorption, and tensile strength, and a better release profile than the DMS patch. Eugenol and phosphatidylcholine can increase the release of OdHCl from the patches. From the physicochemical results and permeation profiles, a combination of 350 mg of ethyl cellulose, 150 mg of polyvinyl pyrrolidone, 3% of total polymer weight of eugenol, and 40% of total polymer weight of propylene glycol is the most suitable formulation for developing an OdHCl patch. OdHCl release did not increase with increasing percentage of plasticiser. DMS 4, PG 4, DMS 9, PG 9, DMS 14, and PG 14 gave better release profiles when using 300 mg:0 mg, 300 mg:100 mg, and 350 mg:150 mg of EC:PVP. Thus, 40% PG or DMS appeared to be the optimum amount of plasticiser when the above EC:PVP combinations were used. It was concluded from the study that a patch formulation containing 350 mg EC, 150 mg PVP, 40% PG and 3% eugenol is the best transdermal matrix patch composition for the uniform and continuous release/permeation of OdHCl over an extended period. This patch design can be used for further pharmacokinetic and pharmacodynamic studies in suitable animal models.

Keywords: Ondansetron hydrochloride, dimethyl succinate, eugenol.

51 Parametric Non-Linear Analysis of Reinforced Concrete Frames with Supplemental Damping Systems

Authors: Daniele Losanno, Giorgio Serino

Abstract:

This paper focuses on the parametric analysis of reinforced concrete structures equipped with supplemental damping braces. Practitioners still lack sufficient data for the routine design of damper-added structures and often reduce the real model to a pure damper-braced structure, even though this assumption is neither realistic nor conservative. In the present study, the damping brace is modelled as a linear supporting brace connected in series with the viscous/hysteretic damper. The deformation capacity of existing structures is usually not adequate to withstand the design earthquake. In spite of this, additional dampers can be introduced, strongly limiting structural damage to acceptable values or, in some cases, reducing the frame response to elastic behavior. This work is aimed at providing useful considerations for the retrofit of existing buildings by means of supplemental damping braces. The study explicitly takes into consideration the variability of (a) the relative frame-to-supporting-brace stiffness, (b) the dampers’ coefficient (viscous coefficient or yielding force) and (c) the non-linear frame behavior. Non-linear time history analyses have been run to account for both the dampers’ behavior and non-linear plastic hinges modelled with the Pivot hysteretic type. Parametric analysis based on previous studies on SDOF or MDOF linear frames provides reference values for nearly optimal damping system design. With respect to the bare frame configuration, the seismic response of the damper-added frame is strongly improved, limiting deformations to acceptable values far below the ultimate capacity. The results of the analysis also demonstrate the beneficial effect of stiffer supporting braces, thus highlighting the inadequacy of simplified pure damper models. At the same time, the effect of variable damping coefficient and yielding force has to be treated as an optimization problem.
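As a very simplified illustration of the modelling idea, the sketch below integrates a linear SDOF frame with a supplemental damper-brace modelled as a supporting brace (stiffness kb) in series with a viscous dashpot (coefficient cd), i.e. a Maxwell element, under a short base-acceleration pulse. All numbers are arbitrary illustration values, the frame is kept linear (no plastic hinges), and this is not the authors' model or software.

```python
# Explicit time integration of an SDOF frame with a Maxwell-type damper-brace (illustrative only).
import math

def peak_drift(m=1.0e4, k=4.0e6, zeta=0.02, kb=8.0e6, cd=2.0e5, dt=1.0e-4, t_end=5.0):
    c0 = 2.0 * zeta * math.sqrt(k * m)              # inherent frame damping
    u = v = fd = 0.0                                # displacement, velocity, damper force
    peak = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        ag = 2.0 * math.sin(4.0 * math.pi * t) if t < 1.0 else 0.0   # base-acceleration pulse
        a = (-m * ag - c0 * v - k * u - fd) / m     # equation of motion
        v += a * dt
        u += v * dt
        fd += kb * (v - fd / cd) * dt               # Maxwell element: brace spring + dashpot
        peak = max(peak, abs(u))
    return peak

print("peak drift with damper brace:", round(peak_drift(), 5))
print("peak drift, bare frame      :", round(peak_drift(kb=0.0), 5))
```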

Keywords: Brace stiffness, dissipative braces, non-linear analysis, plastic hinges, reinforced concrete.

50 The Role of User Participation on Social Sustainability: A Case Study on Four Residential Areas

Authors: Hasan Taştan, Ayşen Ciravoğlu

Abstract:

The rapid growth of the human population and the environmental degradation associated with increased consumption of resources raise concerns about sustainability. Social sustainability constitutes one of the three dimensions of sustainability, together with the environmental and economic dimensions. Even though there is no agreement on what social sustainability consists of, it is a well-known fact that it necessitates user participation. Therefore, this study aims to observe and analyze the role of user participation in social sustainability. In this paper, the links between user participation and indicators of social sustainability have been explored. In order to achieve this, first a literature review on social sustainability was conducted; the information obtained from this research was then used to evaluate projects conducted in developing countries that involved user participation. These examples are taken as role models, with their pros and cons, for the development of the checklist used to evaluate the case studies. Furthermore, a case study of post-earthquake residential settlements in Turkey was conducted. The case study projects were selected considering different building scales (differing numbers of residential units), the scale of the problem (post-earthquake settlements, rehabilitation of shanty dwellings) and the variety of users (differing socio-economic dimensions). The decision-making, design, building and usage processes of the selected projects and the actors in these processes have been investigated in the context of social sustainability. The cases investigated include New Gourna Village by Hassan Fathy, the Quinta Monroy dwelling units in Chile by Alejandro Aravena, and the Beyköy and Beriköy projects in Turkey, which aimed to solve the housing problem that arose after the 1999 earthquake. The results of the study reveal possible links between social sustainability indicators and user participation, and between user participation and the peculiarities of place. The results are compared and discussed in order to find possible ways of fostering social sustainability through user participation. They show that social sustainability issues depend on communities’ characteristics, socio-economic conditions and user profile, but that user participation has positive effects on some social sustainability indicators such as user satisfaction, a sense of belonging and social stability.

Keywords: Housing projects, Residential areas, Social sustainability, User participation.

49 A New Method for Extracting Ocean Wave Energy Utilizing the Wave Shoaling Phenomenon

Authors: Shafiq R. Qureshi, Syed Noman Danish, Muhammad Saeed Khalid

Abstract:

Fossil fuels are the major source for meeting the world’s energy requirements, but their rapid depletion and adverse effects on our ecological system are of major concern. Renewable energy utilization is the need of the time to meet future challenges, and ocean energy is one of these promising energy resources. Three-fourths of the earth’s surface is covered by the oceans. This enormous energy resource is contained in the oceans’ waters, the air above the oceans, and the land beneath them. The ocean’s renewable energy is mainly contained in waves, ocean currents and offshore solar energy. Very few efforts have been made to harness this reliable and predictable resource. Harnessing ocean energy requires detailed knowledge of the underlying mathematical governing equations and their analysis. With the advent of extraordinary computational resources, it is now possible to predict the wave climatology in lab simulation. Several techniques have been developed, mostly stemming from the numerical analysis of the Navier-Stokes equations. This paper presents a brief overview of such mathematical models and tools for understanding and analyzing the wave climatology. Models of the 1st, 2nd and 3rd generations have been developed to estimate the wave characteristics needed to assess the power potential. A brief overview of available wave energy technologies is also given. A novel on-shore wave energy extraction concept is presented at the end. The concept is based upon total energy conservation, where the energy of the wave is transferred to a flexible converter to increase its kinetic energy. The squeezing action of the external pressure on the converter body results in increased velocities at the discharge section. The high velocity head can then be used for energy storage or directly for power generation. This converter utilizes both the potential and kinetic energy of the waves and is designed for on-shore or near-shore application. The increased wave height at the shore due to shoaling effects increases the potential energy of the waves, which is converted to renewable energy. This approach will result in an economical wave energy converter due to near-shore installation and denser waves due to shoaling, and the method will be more efficient because it taps both the potential and kinetic energy of the waves.

Keywords: Energy Utilizing, Wave Shoaling Phenomenon

48 The Impact of Information and Communication Technology in Education: Opportunities and Challenges

Authors: M. Nadeem, S. Nasir, K. A. Moazzam, R. Kashif

Abstract:

The remarkable growth and evolution of information and communication technology (ICT) in the past few decades has transformed modern society in almost every aspect of life. The impact and application of ICT have been observed in almost all walks of life, including science, arts, business, health, management, engineering, sports, and education. ICT in education is used extensively for student learning, creativity, interaction, and knowledge sharing, and as a valuable teaching instrument. Apart from the student’s perspective, it plays a vital role in teacher education, instructional methods and curriculum development. There is a significant difference in the growth of ICT-enabled education in developing countries compared to developed nations and, according to research, this gap is widening. ICT has gradually infiltrated almost every aspect of life. It has a deep and profound impact on our social, economic, health, environment, development, work, learning, and education environments. ICT provides very effective and powerful tools for information and knowledge processing. It is firmly believed that the coming generation should be proficient and confident in the use of ICT to cope with existing international standards. This is only possible if schools can provide a basic ICT infrastructure to students and develop an ICT-integrated curriculum that covers all aspects of learning and creativity in students. However, there is a digital divide, and steps must be taken to reduce it considerably so that the profound impact of ICT on education can be realized around the globe. This study is based on a theoretical approach, and an extensive literature review was conducted to examine successful implementations of ICT integration in education and to identify technologies and models that have been used in education in developed countries. This paper deals with the modern applications of ICT in schools for both teachers and students to enhance learning and creativity among students. A brief history of technology in education is presented, and some important ICT tools from both the student’s and teacher’s perspectives are discussed. A basic ICT-based infrastructure for academic institutions is presented. The overall conclusion points to the positive impact of ICT in education by providing an interactive, collaborative and challenging environment for students and teachers for knowledge sharing, learning and critical thinking.

Keywords: Information and communication technology, ICT, education, ICT infrastructure, teacher education.

47 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make them active energy market participants. A centralized control system for building heating and cooling, managed by economically optimal model predictive control, shows promising results with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable more complex building subsystems to be taken into consideration in order to further increase energy efficiency.
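To give a flavor of price-optimal climate MPC of the kind described above, the sketch below solves one receding-horizon problem for a single zone with a first-order thermal model using the cvxpy modelling library. The dynamics coefficients, price profile, comfort band and horizon are made-up values and do not correspond to the paper's building model.

```python
# One horizon of a price-optimal heating schedule for a single zone (illustrative values only).
import cvxpy as cp
import numpy as np

N = 24                                   # horizon length (hours)
a, b = 0.9, 0.5                          # toy zone dynamics: T_{k+1} = a*T_k + b*u_k + d_k
d = 0.1 * np.ones(N)                     # assumed-known disturbance (ambient/solar gains)
price = 0.1 + 0.05 * np.sin(np.linspace(0, 2 * np.pi, N))   # hourly energy price
T0 = 18.0

u = cp.Variable(N)                       # heating power per hour
T = cp.Variable(N + 1)                   # zone temperature trajectory

constraints = [T[0] == T0, u >= 0, u <= 20]
for k in range(N):
    constraints += [T[k + 1] == a * T[k] + b * u[k] + d[k],
                    T[k + 1] >= 20, T[k + 1] <= 24]          # comfort band

problem = cp.Problem(cp.Minimize(price @ u), constraints)    # minimize energy cost
problem.solve()
print("optimal heating profile:", np.round(u.value, 2))
```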

Keywords: Energy-efficient buildings, Hierarchical model predictive control, Microgrid power flow optimization, Price-optimal building climate control.

46 Per Flow Packet Scheduling Scheme to Improve the End-to-End Fairness in Mobile Ad Hoc Wireless Network

Authors: K. Sasikala, R. S. D Wahidabanu

Abstract:

Various fairness models and criteria proposed by academia and industry for wired networks can be applied to ad hoc wireless networks. Achieving end-to-end fairness in an ad hoc wireless network is a challenging task compared to wired networks and has not been addressed effectively. Most of the traffic in an ad hoc network consists of transport layer flows, and thus the fairness of transport layer flows has attracted the interest of researchers. Factors such as the MAC protocol, the routing protocol, the length of a route, the buffer size, the active queue management algorithm and the congestion control algorithm affect the fairness of transport layer flows. In this paper, we consider the rate of data transmission, queue management and the packet scheduling technique. The ad hoc network is dynamic in nature; parameters such as the transmission of control packets, the multihop nature of packet forwarding, changes in source and destination nodes, and changes in the routing path all influence the throughput and fairness among concurrent flows. In addition, the interaction between the data link and transport layer protocols also plays a role in determining the rate of data transmission. We maintain a queue for each flow, and the delay information of each flow is maintained accordingly. The pre-processing of a flow is done up to the network layer only: the source and destination address information is used to separate flows, and the transport layer information is not used, which minimizes delay in the network. Each flow is attached to a timer that is updated dynamically. A Finite State Machine (FSM) is proposed for the queue and transmission control mechanism. The performance of the proposed approach is evaluated in the ns-2 simulation environment, with throughput and fairness under mobility for different flows used as performance metrics. We have compared the performance of the proposed approach with ATP and MC-MLAS, and the performance of the proposed approach is encouraging.
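The per-flow queuing idea can be pictured with the toy sketch below: flows are separated by (source, destination) only, each flow keeps its own FIFO queue and a dynamically updated timer, and the scheduler serves the flow whose timer expired earliest. The state names and timer policy are illustrative and do not reproduce the paper's exact FSM.

```python
# Toy per-flow scheduler keyed by (src, dst) only, with a per-flow timer and a two-state FSM.
import collections
import time

class FlowQueue:
    def __init__(self):
        self.packets = collections.deque()
        self.deadline = time.monotonic()         # per-flow timer, updated on every dequeue
        self.state = "IDLE"                      # simple FSM: IDLE <-> BACKLOGGED

class PerFlowScheduler:
    def __init__(self, interval=0.01):
        self.flows = {}                          # keyed by (src, dst); no transport-layer info
        self.interval = interval

    def enqueue(self, src, dst, packet):
        q = self.flows.setdefault((src, dst), FlowQueue())
        q.packets.append(packet)
        q.state = "BACKLOGGED"

    def dequeue(self):
        now = time.monotonic()
        ready = [(key, q) for key, q in self.flows.items() if q.packets and q.deadline <= now]
        if not ready:
            return None
        key, q = min(ready, key=lambda item: item[1].deadline)   # earliest-deadline flow first
        pkt = q.packets.popleft()
        q.deadline = now + self.interval         # reschedule the flow's timer
        if not q.packets:
            q.state = "IDLE"
        return key, pkt

s = PerFlowScheduler()
s.enqueue("A", "B", "pkt1"); s.enqueue("C", "D", "pkt2")
print(s.dequeue(), s.dequeue())
```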

Keywords: ATP, End-to-End fairness, FSM, MAC, QoS.

45 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members

Authors: Sami W. Tabsh

Abstract:

The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparities in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, are assessed. The considered limit states are flexure, shear and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on these outcomes, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the target reliability embedded in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
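For readers unfamiliar with the reliability index, the sketch below shows the textbook calculation for a linear limit state g = R - Q with independent normal resistance R and load effect Q, beta = (muR - muQ) / sqrt(sigmaR^2 + sigmaQ^2), together with a Monte Carlo check of the corresponding failure probability. All statistics are made-up illustration values, not the paper's probabilistic models.

```python
# Reliability index for g = R - Q with normal R and Q, plus a Monte Carlo check (toy values).
import math
import random

mu_R, cov_R = 500.0, 0.12            # mean resistance and coefficient of variation
mu_Q, cov_Q = 300.0, 0.18            # mean load effect and coefficient of variation
sig_R, sig_Q = mu_R * cov_R, mu_Q * cov_Q

beta = (mu_R - mu_Q) / math.sqrt(sig_R**2 + sig_Q**2)
print("reliability index beta =", round(beta, 2))

# Monte Carlo estimate of the failure probability P(R - Q < 0)
n, fails = 200_000, 0
for _ in range(n):
    if random.gauss(mu_R, sig_R) - random.gauss(mu_Q, sig_Q) < 0:
        fails += 1
print("Monte Carlo failure probability ~", fails / n)
```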

Keywords: Code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety.

44 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, which opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development, specifically those related to gaming the system, also raise ambiguity about a learner’s knowledge mastery. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing rather than going over the learning material. These behaviours were detected in learners who repeated the test at least four times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modification (certain sections of learning material may be shown not to help many learners master the intended learning outcomes) or the course design may need to change (such as the type and duration of feedback).
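A rough sketch of the kind of rule used to flag the first behaviour (trying each option of a multiple-choice question) from xAPI statements is shown below. The statement layout loosely follows the xAPI 'answered' pattern; the exact verbs and fields used in the study are not given in the abstract, so this structure is assumed.

```python
# Toy rule over simplified xAPI-like statements: flag learners who tried every option of a question.
from collections import defaultdict

def flag_option_cycling(statements, n_options=4):
    """statements: list of dicts with 'actor', 'question_id' and 'response' (chosen option)."""
    tried = defaultdict(set)
    for s in statements:
        tried[(s["actor"], s["question_id"])].add(s["response"])
    # Flag learner/question pairs where every available option was attempted.
    return [key for key, options in tried.items() if len(options) >= n_options]

log = [
    {"actor": "mailto:jo@corp.com", "question_id": "q7", "response": "A"},
    {"actor": "mailto:jo@corp.com", "question_id": "q7", "response": "B"},
    {"actor": "mailto:jo@corp.com", "question_id": "q7", "response": "C"},
    {"actor": "mailto:jo@corp.com", "question_id": "q7", "response": "D"},
]
print(flag_option_cycling(log))   # -> [('mailto:jo@corp.com', 'q7')]
```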

Keywords: Compliance Course, Corporate Training, Learner Behaviours, xAPI.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 564
43 Gassing Tendency of Natural Ester Based Transformer Oils: Low Ethane Generation in Stray Gassing Behavior

Authors: Banti Sidhiwala, T. C. S. M. Gupta

Abstract:

Mineral oils of naphthenic and paraffinic type are used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer/cooling. The performance of these types of oils has been proven in the field over many decades, and transformer condition has been successfully monitored and diagnosed through oil properties and dissolved gas analysis methods. Different types of gases can indicate various types of faults that may occur due to faulty components or unfavorable operating conditions. A large database has been generated in the industry for dissolved gas analysis in mineral oil-based transformer oils, and various models have been developed to predict faults and analyze data. Additionally, oil specifications and standards have been updated to include stray gassing limits that cover low-temperature faults. This modification has become an effective preventive maintenance tool that can help greatly in understanding the reasons for breakdowns of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Their benefits include biodegradability, a higher fire point, improved transformer load capability and longer solid insulation life than mineral oils. However, stray gassing tests show that hydrogen and hydrocarbons like methane (CH4) and ethane (C2H6) reach very high values, much higher than the limits in mineral oil standards. Though standards for these types of esters are yet to evolve, the high hydrocarbon gas values of the natural esters available in the market are of concern, as they might be interpreted as a fault in transformer operation. The current paper focuses on developing a class of natural esters with low levels of stray gassing, measured by American Society for Testing and Materials (ASTM) and International Electrotechnical Commission (IEC) methods, with values much lower than those of the natural ester-based products reported in the literature. The experimental results of these products are discussed.
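
As an illustration of how stray gassing results might be screened against limits, the sketch below compares measured gas concentrations with thresholds. The gas list follows the abstract, but the numerical limits and sample values are hypothetical placeholders, since standards for natural esters are still evolving and no values are given in the paper.

```python
# Measured dissolved gas concentrations in ppm (hypothetical sample).
sample_ppm = {"H2": 35.0, "CH4": 12.0, "C2H6": 8.0}

# Placeholder stray gassing limits; real limits come from the relevant
# ASTM/IEC methods and are not taken from the paper.
limits_ppm = {"H2": 50.0, "CH4": 30.0, "C2H6": 20.0}

def screen_stray_gassing(sample, limits):
    """Return the gases whose measured concentration exceeds its limit."""
    return {gas: ppm for gas, ppm in sample.items()
            if ppm > limits.get(gas, float("inf"))}

exceedances = screen_stray_gassing(sample_ppm, limits_ppm)
print(exceedances or "all gases within the assumed stray gassing limits")
```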

Keywords: Biodegradability, fire point, dissolved gas analysis, natural ester, stray gassing.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 200
42 A Framework for the Development of a Suitable Method to Find Shoot Length at Maturity of Mustard Plant Using Soft Computing Model

Authors: Satyendra Nath Mandal, J. Pal Choudhury, Dilip De, S. R. Bhadra Chaudhuri

Abstract:

The production of a plant can be measured in terms of seeds. The generation of seeds plays a critical role in our social and daily life. Fruit production, which generates seeds, depends on various parameters of the plant, such as shoot length, leaf number, root length, root number, etc. When the plant is growing, some leaves may be lost and some new leaves may appear, so it is very difficult to use the number of leaves to calculate the growth of the plant. It is also cumbersome to measure the number of roots and the root length repeatedly after the initial period, because roots grow deeper and deeper underground over time. In contrast, the shoot length increases over time and can be measured at different time instances. So the growth of the plant can be measured using shoot length data recorded at different time instances after planting. Environmental parameters like temperature, rainfall, humidity and pollution also play a role in yield. Soil, crop and spacing management are taken care of to maximize the yield of the plant. Shoot length growth data for some mustard plants at the initial stage (7, 14, 21 and 28 days after planting) are available from a statistical survey by a group of scientists under the supervision of Prof. Dilip De. In this paper, the initial shoot length of Ken (one type of mustard plant) has been used as the initial data. Statistical models, fuzzy logic methods and a neural network have been tested on this mustard plant, and based on error analysis (calculation of the average error), the model with minimum error has been selected and can be used for the assessment of shoot length at maturity. Finally, all these methods have been tested with other types of mustard plants, and the soft computing model with the minimum error across all types has been selected for calculating the predicted shoot length growth data. The shoot length at maturity of all types of mustard plants has then been calculated using the statistical method on the predicted shoot length data.
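
The model selection step described above, picking the candidate model with the minimum average error on the early shoot-length measurements, can be sketched as follows. The measurement values and the stand-in "models" are placeholders, not the survey data or the study's fitted statistical, fuzzy time series and neural network models.

```python
# Shoot length (cm) measured 7, 14, 21 and 28 days after planting
# (hypothetical values, not the survey data).
days     = [7, 14, 21, 28]
observed = [4.2, 9.8, 16.5, 22.1]

# Stand-ins for the fitted models: each maps day -> predicted shoot length.
candidate_models = {
    "statistical":       lambda d: 0.80 * d - 1.0,
    "fuzzy_time_series": lambda d: 0.78 * d - 0.5,
    "neural_network":    lambda d: 0.82 * d - 1.4,
}

def average_error(model, days, observed):
    """Mean absolute error between model predictions and observations."""
    return sum(abs(model(d) - y) for d, y in zip(days, observed)) / len(days)

errors = {name: average_error(m, days, observed)
          for name, m in candidate_models.items()}
best = min(errors, key=errors.get)
print(errors)
print("selected model:", best)  # this model then predicts shoot length at maturity
```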

Keywords: Fuzzy time series, neural network, forecasting error, average error.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1594
41 Low Energy Technology for Leachate Valorisation

Authors: Jesús M. Martín, Francisco Corona, Dolores Hidalgo

Abstract:

Landfills present long-term threats to soil, air, groundwater and surface water due to the formation of greenhouse gases (methane and carbon dioxide) and leachate from decomposing garbage. The composition of leachate differs from site to site and also within the landfill. Leachates also change over time (from weeks to years), since the landfilled waste is biologically highly active and its composition varies. Mainly, the composition of the leachate depends on factors such as the characteristics of the waste, the moisture content, climatic conditions, degree of compaction and the age of the landfill. Therefore, the leachate composition cannot be generalized, and the traditional treatment models should be adapted in each case. Although leachate composition is highly variable, what different leachates have in common is their hazardous constituents and their potential eco-toxicological effects on human health and on terrestrial ecosystems. Since leachates have distinct compositions, each landfill or dumping site represents a different type of risk to its environment. Nevertheless, leachates always contain high organic concentrations, high conductivity, heavy metals and ammonia nitrogen. Leachate can affect the current and future quality of water bodies due to uncontrolled infiltrations. Therefore, the control and treatment of leachate is one of the biggest issues in the design and management of urban solid waste treatment plants and landfills. This work presents a treatment model that will be carried out "in-situ" using a cost-effective novel technology that combines solar evaporation/condensation with forward osmosis. The plant is powered by renewable energies (solar energy, biomass and residual heat), which minimizes the carbon footprint of the process. The final effluent quality is very high, allowing reuse (preferred) or discharge into watercourses. In the particular case of this work, the final effluents will be reused for cleaning and gardening purposes. A minor semi-solid residual stream is also generated in the process. Due to its special composition (rich in metals and inorganic elements), this stream will be valorized in ceramic industries to improve the characteristics of the final products.

Keywords: Forward osmosis, landfills, leachate valorization, solar evaporation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 960
40 Exploring the Applicability of a Rapid Health Assessment in India

Authors: Claudia Carbajal, Jija Dutt, Smriti Pahwa, Sumukhi Vaid, Karishma Vats

Abstract:

ASER Centre, the research and assessment arm of Pratham Education Foundation, sees measurement as the first stage of action. ASER uses primary research to push policy discussions at a multitude of levels and to give them empirical foundations. At the household level, common citizens use a simple assessment (a floor-level test) to measure learning across rural India. This paper presents evidence on the applicability of the ASER approach to the health sector. A citizen-led assessment was designed and executed to collect information from young mothers with children up to a year of age. The pilot assessments were rolled out in two different models: paid surveyors and student volunteers. The survey covered three geographic areas: 1,239 children in the Jaipur District of Rajasthan, 2,086 in the Rae Bareli District of Uttar Pradesh, and 593 children in the Bhuj Block in Gujarat. The survey tool was designed to study knowledge of health-related issues, daily practices followed by young mothers, and access to relevant services and programs. It provides insights on behaviors related to infant and young child feeding practices, child and maternal nutrition and supplementation, water and sanitation, and health services. Moreover, the survey studies the reasons behind behaviors, giving policy-makers actionable pathways to improve the implementation of social sector programs. Although data on health outcomes are available, this approach could provide a rapid annual assessment of health issues with indicators that are easy to understand and act upon, so that measurements do not become an exclusive domain of experts. The results give many insights into early childhood health behaviors and challenges. Around 98% of children are breastfed, and approximately half are not exclusively breastfed for the first 6 months. Government-established diet diversity guidelines are met for less than 1 out of 10 children. Although most households are satisfied with the quality of their drinking water, most tested households had contaminated water.

Keywords: Citizen-led assessment, infant and young child feeding, maternal nutrition, rapid health assessment, supplementation, water and sanitation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1806
39 Numerical Evaluation of Lateral Bearing Capacity of Piles in Cement-Treated Soils

Authors: Reza Ziaie Moayed, Saeideh Mohammadi

Abstract:

Soft soil is present in many civil engineering projects, such as coastal, marine and road projects. Because of the low shear strength and stiffness of soft soils, large settlements and low bearing capacity occur under superstructure loads. This makes civil engineering activities more difficult and costly. In the case of soft soils, ground improvement is a suitable method to increase the shear strength and stiffness for engineering purposes. In recent years, the artificial cementation of soil by cement and lime has been extensively used for soft soil improvement. Cement stabilization is a well-established technique for improving soft soils. Artificial cementation increases the shear strength and hardness of the natural soil. On the other hand, in soft soils, the use of piles to transfer loads to greater depths is common. By using cement-treated soil around the piles, high bearing capacity and low settlement of the piles can be achieved. In the present study, the lateral bearing capacity of short piles in cemented soils is investigated by a numerical approach. For this purpose, the three-dimensional (3D) finite difference software FLAC 3D is used. Cement-treated soil has a strain hardening-softening behavior because of the breaking of bonds between the cementing agent and the soil particles. To simulate such behavior, a strain hardening-softening constitutive model is used for the cement-treated soft soil. Additionally, the conventional elastic-plastic Mohr-Coulomb constitutive model and a linear elastic model are used for the stress-strain behavior of the natural soil and the pile, respectively. To determine the parameters of the constitutive models, and also to verify the numerical model, the results of available triaxial laboratory tests and in-situ pile loading tests in cement-treated soft soil are used. A parametric study is carried out to determine the parameters that govern the lateral bearing capacity of piles in cement-treated soils. In the present paper, the effects of the length and height of the artificially cemented zone, the diameter and length of the pile, and the material properties are studied. Also, the effect of the choice of constitutive model for the cement-treated soil on the computed bearing capacity of the pile is investigated.
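
Strain hardening-softening models of the kind used here are typically parameterized with tables of cohesion and friction angle versus accumulated plastic shear strain. The sketch below shows such a table and its interpolation in plain Python as an illustration only; the values are hypothetical and are not the parameters calibrated from the triaxial or pile loading tests.

```python
import bisect

# Hypothetical hardening-softening table for a cement-treated soil:
# accumulated plastic shear strain -> (cohesion in kPa, friction angle in deg).
# Cohesion first rises slightly (hardening), then degrades as cementation
# bonds break (softening).
table = [
    (0.000, (120.0, 32.0)),
    (0.005, (130.0, 33.0)),
    (0.020, ( 80.0, 30.0)),
    (0.050, ( 40.0, 28.0)),
]

def strength_at(plastic_strain):
    """Piecewise-linear interpolation of (cohesion, friction angle) at a
    given plastic shear strain, as a softening table would be evaluated."""
    strains = [s for s, _ in table]
    i = bisect.bisect_right(strains, plastic_strain) - 1
    if i >= len(table) - 1:
        return table[-1][1]                     # residual strength
    (s0, (c0, f0)), (s1, (c1, f1)) = table[i], table[i + 1]
    t = (plastic_strain - s0) / (s1 - s0)
    return (c0 + t * (c1 - c0), f0 + t * (f1 - f0))

for eps in (0.0, 0.01, 0.03, 0.10):
    c, phi = strength_at(eps)
    print(f"plastic strain {eps:.3f}: cohesion {c:.1f} kPa, friction {phi:.1f} deg")
```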

Keywords: Cement-treated soils, pile, lateral capacity, FLAC 3D.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 797
38 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Geryes Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be considered as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to some processing center (e.g. fusion servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components that are used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g. military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity and the time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that the design activity of complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via models and a simulation approach to consolidate the system design.
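
As an illustration of the kind of design-time constraint a tool like ArchiMO could enforce, the sketch below validates a toy component model against a simple rule (every smart sensor must be connected to a data fusion server). The model structure and the constraint are invented for illustration and are not taken from the authors' meta-model.

```python
# Toy marine observatory model: components and their connections
# (purely illustrative; not the ArchiMO meta-model).
components = {
    "hydrophone_1": {"type": "SmartSensor", "connected_to": ["fusion_1"]},
    "hydrophone_2": {"type": "SmartSensor", "connected_to": []},
    "fusion_1":     {"type": "DataFusionServer", "connected_to": []},
}

def check_sensor_connectivity(model):
    """Design-time constraint: every smart sensor must be connected to
    at least one data fusion server."""
    errors = []
    for name, comp in model.items():
        if comp["type"] != "SmartSensor":
            continue
        targets = [model[t]["type"] for t in comp["connected_to"] if t in model]
        if "DataFusionServer" not in targets:
            errors.append(f"{name}: no connection to a data fusion server")
    return errors

for msg in check_sensor_connectivity(components) or ["model satisfies the constraint"]:
    print(msg)
```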

Keywords: Smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 275
37 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training Deep Neural Networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life data sets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
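
A minimal sketch of the k-mer embedding and attention-based multiple instance learning ideas is given below. It uses a toy vocabulary, random (untrained) embeddings and a single attention layer in plain Python/NumPy, omits the genome-identification step (iii), and is only an illustration of the concepts, not the metagenome2vec implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, DIM = 4, 8                          # k-mer size and embedding dimension

def kmers(read, k=K):
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# (i) toy k-mer vocabulary with random embeddings (learned in the real model)
reads = ["ATGCGTAC", "GGCATTGC", "TTACGGAT"]          # one toy sample
vocab = sorted({km for r in reads for km in kmers(r)})
emb = {km: rng.normal(size=DIM) for km in vocab}

# (ii) read embedding = mean of its k-mer embeddings
read_vecs = np.stack([np.mean([emb[km] for km in kmers(r)], axis=0) for r in reads])

# (iv) attention-based multiple instance learning pooling over the reads
W_att = rng.normal(size=(DIM, 1))
scores = read_vecs @ W_att                             # one score per read
weights = np.exp(scores) / np.exp(scores).sum()        # softmax attention
sample_vec = (weights * read_vecs).sum(axis=0)         # sample representation

# linear classifier head giving a disease probability for the sample
w, b = rng.normal(size=DIM), 0.0
prob = 1.0 / (1.0 + np.exp(-(sample_vec @ w + b)))
print("attention weights per read:", weights.ravel().round(3))
print("predicted disease probability:", round(float(prob), 3))
```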

Keywords: Metagenomics, phenotype prediction, deep learning, embeddings, multiple instance learning.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 922