Search results for: real time kernel preemption

17811 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth

Authors: Hemant Upadhyay, Tarun Kumar Kundu

Abstract:

It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for stable and efficient blast furnace operation. Abnormal drainage behavior may lead to high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit at which the hearth coke and deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and stable BF performance. Direct measurement of the above is not possible due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal/slag accumulation and temperature during tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed to be thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake in the hearth. A small drainage outlet (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag, and the heat loss to the cooling system through the refractories. Based on all of this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping, gives the drainage rate and its variation, predicts critical event timings during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL India BF-II, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.
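
To make the accumulation-drainage balance described above concrete, the sketch below integrates a toy liquid-level equation (constant production inflow, orifice-type tap-hole outflow driven by liquid head and furnace gas pressure) with explicit Euler steps; all parameter values are hypothetical placeholders, not those of the JSPL model.

```python
import numpy as np

# Minimal sketch of hearth liquid accumulation/drainage (illustrative values only).
# Inflow: constant hot-metal production rate; outflow: orifice-type tap-hole drainage
# driven by liquid head plus furnace gas overpressure (all parameters are hypothetical).
A_hearth = 80.0        # effective hearth cross-section occupied by liquid, m^2
porosity = 0.35        # void fraction of the coke bed
rho = 7000.0           # hot metal density, kg/m^3
g = 9.81
q_in = 0.08            # liquid metal production rate, m^3/s
C_tap = 0.02           # lumped tap-hole discharge coefficient * area, m^2
p_gas = 3.0e5          # furnace gas overpressure, Pa

def drainage_rate(level, tapping):
    """Orifice-style outflow from the tap hole while the furnace is tapping."""
    if not tapping or level <= 0.0:
        return 0.0
    head = rho * g * level + p_gas
    return C_tap * np.sqrt(2.0 * head / rho)   # m^3/s

dt, t_end = 1.0, 2 * 3600.0                    # 1 s steps over 2 hours
level = 0.5                                    # initial liquid level, m
levels = []
for step in range(int(t_end / dt)):
    t = step * dt
    tapping = t > 1800.0                       # tap hole opened after 30 min
    q_out = drainage_rate(level, tapping)
    # mass balance on the liquid stored in the coke-bed voids
    level += (q_in - q_out) * dt / (A_hearth * porosity)
    levels.append(level)

print(f"peak level {max(levels):.2f} m, final level {levels[-1]:.2f} m")
```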

Keywords: blast furnace, hearth, deadman, hot metal

Procedia PDF Downloads 184
17810 Damping and Stability Evaluation for the Dynamical Hunting Motion of the Bullet Train Wheel Axle Equipped with Cylindrical Wheel Treads

Authors: Barenten Suciu

Abstract:

Classical matrix calculus and Routh-Hurwitz stability conditions, applied to the snake-like motion of the conical wheel axle, lead to the conclusion that the hunting mode is inherently unstable and that its natural frequency is a complex number. In order to analytically solve such a complicated vibration model, either the inertia terms were neglected, in the model designated as geometrical, or restrictions on the creep coefficients and yawing diameter were imposed, in the so-called dynamical model. Here, an alternative solution is proposed for the hunting mode, based on the observation that the bullet train wheel axle is equipped with cylindrical wheels. One argues that for such wheel treads, the geometrical hunting is irrelevant, since its natural frequency becomes nil, but the dynamical hunting is significant, since its natural frequency reduces to a real number. Moreover, one illustrates that the geometrical simplification of the wheel causes the stabilization of the hunting mode, since the characteristic quartic equation, derived for conical wheels, reduces to a quadratic equation with positive coefficients for cylindrical wheels. Quite simple analytical expressions for the damping ratio and natural frequency are obtained, without applying restrictions to the contact model. Graphs of the time-dependent hunting lateral perturbation, including the maximal and inflexion points, are presented both for the critically damped and the over-damped wheel axles.
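
To illustrate the quadratic case described above, the sketch below computes the damping ratio and natural frequency from a characteristic equation with positive coefficients; the coefficient values are hypothetical and not taken from the paper.

```python
import numpy as np

# Sketch: damping ratio and natural frequency from a quadratic characteristic
# equation a*s^2 + b*s + c = 0 with positive coefficients, as obtained for
# cylindrical wheel treads. The coefficient values below are hypothetical.
a, b, c = 1200.0, 9.5e4, 2.1e6   # placeholder inertia-, damping- and stiffness-like terms

omega_n = np.sqrt(c / a)                 # undamped natural frequency, rad/s
zeta = b / (2.0 * np.sqrt(a * c))        # damping ratio

if zeta < 1.0:
    omega_d = omega_n * np.sqrt(1.0 - zeta**2)
    regime = f"under-damped (damped frequency {omega_d:.2f} rad/s)"
elif np.isclose(zeta, 1.0):
    regime = "critically damped"
else:
    regime = "over-damped (non-oscillatory lateral perturbation)"

print(f"natural frequency = {omega_n:.2f} rad/s, damping ratio = {zeta:.3f}, {regime}")
```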

Keywords: bullet train, creep, cylindrical wheels, damping, dynamical hunting, stability, vibration analysis

Procedia PDF Downloads 153
17809 Data Analysis Tool for Predicting Water Scarcity in Industry

Authors: Tassadit Issaadi Hamitouche, Nicolas Gillard, Jean Petit, Valerie Lavaste, Celine Mayousse

Abstract:

Water is a fundamental resource for industry. It is taken from the environment, either from municipal distribution networks or from various natural water sources such as the sea, ocean, rivers, aquifers, etc. Once used, water is discharged into the environment or reprocessed at the plant or at treatment plants. These withdrawals and discharges have a direct impact on natural water resources. The impacts can concern the quantity of water available, the quality of the water used, or effects that are less direct and more complex to measure, such as the health of the population downstream of the watercourse. Based on the analysis of data (meteorology, river characteristics, physicochemical substances), we wish to predict water stress episodes and anticipate prefectoral decrees, which can affect plant performance, and to propose improvement solutions, help industrialists choose the location of a new plant, visualize possible interactions between companies to optimize exchanges and encourage the pooling of water treatment solutions, and set up circular economies around the issue of water. The development of a system for the collection, processing and use of data related to water resources requires its specific functional constraints to be made explicit. Thus, the system will have to be able to store a large amount of data from sensors (the main type of data in plants and their environment). In addition, manufacturers need 'near-real-time' processing of information in order to make the best decisions (to be rapidly notified of an event that would have a significant impact on water resources). Finally, the visualization of data must be adapted to its temporal and geographical dimensions. In this study, we set up an infrastructure centered on the TICK application stack (Telegraf, InfluxDB, Chronograf and Kapacitor), a set of loosely coupled but tightly integrated open-source projects designed to manage huge amounts of time-stamped information. The software architecture is coupled with the cross-industry standard process for data mining (CRISP-DM) methodology. The robust architecture and the methodology used have demonstrated their effectiveness on the case study of learning the level of a river with a 7-day horizon. The management of water and of the activities within the plants that depend on this resource should be considerably improved thanks, on the one hand, to the learning that allows the anticipation of periods of water stress and, on the other hand, to the information system that is able to warn decision-makers with alerts created from the formalization of prefectoral decrees.
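
A minimal sketch of the "river level at a 7-day horizon" learning task on synthetic data; in the real system the features would come from the TICK stack and the CRISP-DM workflow, and the lag features and model choice here are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Synthetic daily river level: seasonal signal plus noise (fake data).
rng = np.random.default_rng(0)
days = pd.date_range("2015-01-01", periods=2000, freq="D")
level = 2.0 + 0.8 * np.sin(2 * np.pi * days.dayofyear / 365.25) + rng.normal(0, 0.1, len(days))
df = pd.DataFrame({"level": level}, index=days)

# Lag features (past levels) and the 7-day-ahead target.
for lag in (1, 2, 3, 7, 14):
    df[f"lag_{lag}"] = df["level"].shift(lag)
df["target"] = df["level"].shift(-7)
df = df.dropna()

split = int(len(df) * 0.8)
features = [c for c in df.columns if c.startswith("lag_")]
train, test = df.iloc[:split], df.iloc[split:]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["target"])
pred = model.predict(test[features])
print(f"MAE of 7-day-ahead river level forecast: {mean_absolute_error(test['target'], pred):.3f} m")
```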

Keywords: data mining, industry, machine learning, shortage, water resources

Procedia PDF Downloads 121
17808 The Complexities of Designing a Learning Programme in Higher Education with the End-User in Mind

Authors: Andre Bechuke

Abstract:

The quality of every learning programme in Higher Education (HE) is dependent on the planning, design, and development of the curriculum decisions. These curriculum development decisions are highly influenced by knowledge of the end-user, who is not always just the student. When curriculum experts plan, design and develop learning programmes, they always have the end-users in mind throughout the process. Without proper knowledge of the end-user(s), the design and development of a learning programme might be flawed. Curriculum experts often struggle to determine who the real end-user is. As such, it is even more challenging to establish what needs to be known about the end-user that should inform the planning, design, and development of a learning programme. This research sought to suggest approaches to guide curriculum experts in identifying the end-user(s), taking into consideration the pressure and influence that other agencies, structures and stakeholders (industry, students, government, the university context, lecturers, international communities, professional regulatory bodies) have on the design of a learning programme and on the graduates of the programmes. Considering the influence of these stakeholders, which is also very important, the task of deciding who the real end-user of the learning programme is becomes very challenging. This study makes use of criteria 1 and 18 of the Council on Higher Education criteria for programme accreditation to guide the process of identifying the end-users when developing a learning programme. Criterion 1 suggests that designers must ensure that the programme is consonant with the institution’s mission, forms part of institutional planning and resource allocation, meets national requirements and the needs of students and other stakeholders, and is intellectually credible. According to criterion 18, in designing a learning programme, steps must be taken to enhance the employability of students and to alleviate shortages of expertise in relevant fields. In conclusion, there is hardly ever one group of end-users to be considered when developing a learning programme, and the notion that students are the end-users does not hold, especially when the graduates are unable to use the qualification for employment.

Keywords: council on higher education, curriculum design and development, higher education, learning programme

Procedia PDF Downloads 81
17807 Exploratory Data Analysis of Passenger Movement on Delhi Urban Bus Route

Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain

Abstract:

Intelligent Transportation Systems are an integrated application of communication, control, monitoring and display technologies for developing a user-friendly transportation system for urban areas in developing countries. In fact, the development of a country and the progress of its transportation system are complementary to each other. Urban traffic has been growing vigorously due to population growth as well as the escalation of vehicle ownership, causing congestion, delays, pollution, accidents, high energy consumption and low productivity of resources. The development and management of urban transport in developing countries like India, however, is still at a trial stage, with very few implementations. Under the umbrella of ITS, urban corridor management strategies have proven to be among the most successful systems in accomplishing these objectives. The present study interprets, through causality analysis, the performance of a 27.4 km long urban bus route with six intersections, five flyovers and 29 bus stops that covers a significant area of the city. The performance interpretation incorporates passenger boarding and alighting, dwell time, distance between bus stops, and the total trip time taken by the bus on the selected urban route.
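
The stop-level indicators listed above can be computed as in the sketch below; the column names and values form a toy log and are not the study's dataset.

```python
import pandas as pd

# Toy stop-level log for one bus trip (hypothetical values) and the indicators
# named in the abstract: dwell time, boarding/alighting, inter-stop distance,
# and total trip time.
stops = pd.DataFrame({
    "stop_id":      [1, 2, 3, 4],
    "arrival":      pd.to_datetime(["08:00:10", "08:06:40", "08:13:05", "08:21:30"]),
    "departure":    pd.to_datetime(["08:00:55", "08:07:20", "08:13:50", "08:22:10"]),
    "boarding":     [12, 8, 5, 0],
    "alighting":    [0, 3, 9, 13],
    "chainage_km":  [0.0, 2.1, 4.8, 7.4],   # cumulative distance along the route
})

stops["dwell_time_s"] = (stops["departure"] - stops["arrival"]).dt.total_seconds()
stops["inter_stop_km"] = stops["chainage_km"].diff()

total_trip_time = stops["departure"].iloc[-1] - stops["arrival"].iloc[0]
print(stops[["stop_id", "dwell_time_s", "boarding", "alighting", "inter_stop_km"]])
print("total trip time:", total_trip_time)
```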

Keywords: congestion, dwell time, passenger boarding and alighting, travel time

Procedia PDF Downloads 336
17806 Experimental Study of Particle Deposition on Leading Edge of Turbine Blade

Authors: Yang Xiao-Jun, Yu Tian-Hao, Hu Ying-Qi

Abstract:

Foreign objects breathed in during the operation of an aircraft engine, impurities in the aircraft fuel, and products of incomplete combustion can produce deposits on the surface of the turbine blades. These deposits reduce not only the turbine's operating efficiency but also the life of the turbine blades. Based on a small open wind tunnel, the simulation of deposits on the leading edge of the turbine has been carried out in this work, and the effect of film cooling on particulate deposition was investigated. Based on the analysis, the adhesion mechanism for molten pollutants reaching the turbine surface was simulated by matching the Stokes number, TSP (a dimensionless number characterizing particle phase transition) and Biot number of the test facility with those of the real engine. The thickness distribution and growth trend of the deposits have been observed with a high-power microscope and an infrared camera under different main-flow temperatures, solidification temperatures of the particulate matter, and blowing ratios. The experimental results from the leading-edge particulate deposition demonstrate that the thickness of the deposit increases with time until a quasi-stable thickness is reached, showing a striking effect of the blowing ratio on the deposition. Under different blowing ratios, there is a large difference in the thickness distribution of the deposit, and the deposition is minimal at a specific blowing ratio. In addition, the temperature of the main flow and the solidification temperature of the particulate have a great influence on the deposition.
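
The dimensionless matching mentioned above can be illustrated with the standard Stokes number definition, as in the sketch below; the property values for the rig and the engine are hypothetical, and the TSP and Biot number matching are only noted, not computed.

```python
# Sketch of the Stokes number part of the matching: Stk = rho_p * d_p^2 * U / (18 * mu * L).
# All property values below are hypothetical placeholders for rig vs. engine conditions.
def stokes_number(rho_p, d_p, velocity, mu_gas, length_scale):
    """Particle Stokes number based on a characteristic length (e.g., leading-edge diameter)."""
    return rho_p * d_p**2 * velocity / (18.0 * mu_gas * length_scale)

# hypothetical wind-tunnel (paraffin particles) vs. engine (molten ash) conditions
rig    = stokes_number(rho_p=900.0,  d_p=20e-6, velocity=30.0,  mu_gas=1.8e-5, length_scale=0.05)
engine = stokes_number(rho_p=2500.0, d_p=5e-6,  velocity=250.0, mu_gas=4.5e-5, length_scale=0.02)

print(f"Stokes number, rig: {rig:.2f}  engine: {engine:.2f}")
# Matching also requires TSP (particle phase-transition parameter) and Biot number,
# which depend on thermal properties not reproduced in this sketch.
```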

Keywords: deposition, experiment, film cooling, leading edge, paraffin particles

Procedia PDF Downloads 146
17805 Seismic Behavior of a Jumbo Container Crane in the Low Seismicity Zone Using Time-History Analyses

Authors: Huy Q. Tran, Bac V. Nguyen, Choonghyun Kang, Jungwon Huh

Abstract:

The jumbo container crane is an important part of port structures that needs to be designed properly, even when the port is located in a low seismicity zone such as Korea. In this paper, 30 artificial ground motions derived from the elastic response spectra of the Korean Building Code (2005) are used for time history analysis. It is found that uplift might not occur in this analysis when the crane is located in a low seismicity zone. Therefore, the selection of a pinned or a gap element for the base support has little effect on the determination of the total base shear. The relationships between the total base shear and the peak ground acceleration (PGA), and between the portal drift and the PGA, are proposed in this study.
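
The base shear versus PGA relationship proposed above is essentially a regression over the time-history results; the sketch below shows such a fit on synthetic (PGA, peak base shear) pairs standing in for the 30 analyses, not the study's actual outputs.

```python
import numpy as np

# Synthetic (PGA, peak base shear) pairs standing in for 30 time-history analyses.
rng = np.random.default_rng(1)
pga = rng.uniform(0.05, 0.30, 30)                       # peak ground acceleration, g
base_shear = 12.0e3 * pga + rng.normal(0.0, 150.0, 30)  # kN, fake linear trend + scatter

# Linear fit of base shear against PGA and its coefficient of determination.
slope, intercept = np.polyfit(pga, base_shear, 1)
predicted = slope * pga + intercept
r2 = 1.0 - np.sum((base_shear - predicted) ** 2) / np.sum((base_shear - base_shear.mean()) ** 2)

print(f"base shear ~ {slope:.0f} * PGA + {intercept:.0f} kN  (R^2 = {r2:.3f})")
```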

Keywords: jumbo container crane, portal drift, time history analysis, total base shear

Procedia PDF Downloads 189
17804 Introduction of Digital Radiology to Improve the Timeliness in Availability of Radiological Diagnostic Images for Trauma Care

Authors: Anuruddha Jagoda, Samiddhi Samarakoon, Anil Jasinghe

Abstract:

In an emergency department, where every second counts for patient management, the timely availability of X-rays plays a vital role in early diagnosis and management of patients. Trauma care centers rely heavily on timely radiologic imaging for patient care, and radiology plays a crucial role in emergency department (ED) operations. A research study was carried out to assess the timeliness of X-ray availability and the total turnaround time at the Accident Service of the National Hospital of Sri Lanka, which is the premier trauma center in the country. A digital radiology (DR) system was implemented as an intervention to improve the timeliness of X-ray availability. A post-implementation assessment was carried out to assess the effectiveness of the intervention. A reduction in all three aspects of waiting time, namely waiting for initial examination by doctors, waiting until the X-ray is performed, and waiting for image availability, was observed after implementation of the intervention. However, the most significant improvement was seen in the waiting time for image availability, and the reduction in time for image availability had an indirect impact on reducing the waiting time for initial examination by doctors and the waiting time until the X-ray is performed. The most significant reduction in time for image availability was observed when performing 4-5 X-rays with the DR system. The least improvement in timeliness was seen in patients categorized as critical.

Keywords: emergency department, digital radiology, timeliness, trauma care

Procedia PDF Downloads 265
17803 An Interactive Voice Response Storytelling Model for Learning Entrepreneurial Mindsets in Media Dark Zones

Authors: Vineesh Amin, Ananya Agrawal

Abstract:

In a prolonged period of uncertainty and disruption of the previously normal order, non-cognitive skills, especially entrepreneurial mindsets, have become a pillar that can reform educational models to inform the economy. Dreamverse Learning Lab’s IVR-based storytelling program, Call-a-Kahaani, is an evolving experiment with the aim of kindling entrepreneurial mindsets in the remotest locations of India in an accessible and engaging manner. At the heart of this experiment is the belief that at every phase in our life’s story, we have a choice which brings us closer to achieving our true potential. This interactive program is thus designed using real-time storytelling principles to empower learners, ages 24 and below, to make choices and take decisions as they become more self-aware, practice grit, and try new things through stories, guided activities, and interactions, simply over a phone call. This research paper highlights the framework behind an ongoing scalable, data-oriented, low-tech program to kindle entrepreneurial mindsets in media dark zones, supported by iterative design and prototyping. Within one and a half years of its inception, the program reached 13,700+ unique learners who made 59,000+ calls with a total listening duration of 183,900+ minutes, listening to content pieces of around 3 to 4 minutes, with the last monitored record (March 2022) showing 34% serious listenership. The paper provides an in-depth account of the technical development, content creation, learning, and assessment frameworks, as well as the mobilization models which have been leveraged to build this end-to-end system.

Keywords: non-cognitive skills, entrepreneurial mindsets, speech interface, remote learning, storytelling

Procedia PDF Downloads 209
17802 Calcium Phosphate Cement/Gypsum Composite as Dental Pulp Capping

Authors: Jung-Feng Lin, Wei-Tang Chen, Chung-King Hsu, Chun-Pin Lin, Feng-Huei Lin

Abstract:

One of the objectives of operative dentistry is to maintain pulp health in compromised teeth. The most commonly used methods for this purpose are direct pulp capping and pulpotomy, which consist of the placement of biocompatible materials and bio-inductors on the exposed pulp tissue to preserve its health and stimulate repair by mineralized tissue formation. In this study, we developed a calcium phosphate cement (CPC)/gypsum composite as a dental pulp capping material with a shortened setting time and improved handling properties. We further discuss the influence of five different ratios of gypsum to CPC on HAP conversion, microstructure, setting time, weight loss, pH value, temperature difference, viscosity, mechanical properties, porosity, and biocompatibility.

Keywords: calcium phosphate cement, calcium sulphate hemihydrate, pulp capping, fast setting time

Procedia PDF Downloads 386
17801 Understanding the Excited State Dynamics of a Phase Transformable Photo-Active Metal-Organic Framework MIP 177 through Time-Resolved Infrared Spectroscopy

Authors: Aneek Kuila, Yaron Paz

Abstract:

MIP 177 LT and HT are two phase-transformable metal-organic frameworks consisting of a Ti12O15 oxocluster and a tetracarboxylate ligand that exhibit robust chemical stability and improved photoactivity. The LT-to-HT transformation only changes the dimensionality from 0D to 1D, without any change in the overall chemical structure. In terms of chemical stability and photoactivity, MIP 177 LT is found to perform better than MIP 177 HT. Step-scan Fourier transform absorption difference time-resolved spectroscopy has been used to collect mid-IR time-resolved infrared spectra of the transient electronic excited states of the nano-porous metal–organic frameworks MIP 177 LT and HT with 2.5 ns time resolution. Analysis of the time-resolved vibrational data after 355 nm laser excitation reveals temporal changes of ν(O-Ti-O) of the Ti-O metal cluster and ν(-COO) of the ligand, indicating that these moieties are the ultimate acceptors of the excited charges, which are localized over those regions on the nanosecond timescale. A direct negative correlation between the differential absorbances (ΔAbsorbance) of these two moieties reveals a charge-transfer relation between them. A longer-lived transient signal, up to 180 ns for MIP 177 LT compared to 100 ns for MIP 177 HT, shows the extended lifetime of the reactive charges on the surface, which contributes to their effectiveness. An ultrafast change from bidentate to monodentate bridging in the -COO-Ti-O ligand-metal coordination environment was observed after photoexcitation of MIP 177 LT, which persists for seconds after photoexcitation is halted. This phenomenon is unique to MIP 177 LT and is not observed for HT. This in-situ change in coordination denticity during photoexcitation was not observed previously and can rationalize the ability of MIP 177 LT to accumulate electrons during continuous photoexcitation, leading to superior photocatalytic activity.

Keywords: time resolved FTIR, metal organic framework, denticity, photocatalysis

Procedia PDF Downloads 59
17800 High Aspect Ratio Micropillar Array Based Microfluidic Viscometer

Authors: Ahmet Erten, Adil Mustafa, Ayşenur Eser, Özlem Yalçın

Abstract:

We present a new viscometer based on a microfluidic chip with elastic high aspect ratio micropillar arrays. The displacement of the pillar tips in the flow direction can be used to analyze the viscosity of a liquid. In our work, Computational Fluid Dynamics (CFD) is used to analyze the pillar displacement of various micropillar array configurations in the flow direction at different viscosities. Following CFD optimization, micro-CNC based rapid prototyping is used to fabricate molds for the microfluidic chips. The microfluidic chips are fabricated out of polydimethylsiloxane (PDMS) using soft lithography methods with molds machined out of aluminum. Tip displacements of the micropillar array (300 µm in diameter and 1400 µm in height) in the flow direction are recorded using a microscope-mounted camera, and the displacements are analyzed using image processing with an algorithm written in MATLAB. Experiments are performed with water-glycerol solutions mixed at 4 different ratios to attain viscosities of 1 cP, 5 cP, 10 cP and 15 cP at room temperature. The prepared solutions are injected into the microfluidic chips using a syringe pump at flow rates from 10-100 mL/hr, and the displacement versus flow rate is plotted for different viscosities. A displacement of around 1.5 µm was observed for the 15 cP solution at 60 mL/hr, while only a 1 µm displacement was observed for the 10 cP solution. The presented viscometer design optimization is still in progress for better sensitivity and accuracy. Our microfluidic viscometer platform has potential for tailor-made microfluidic chips to enable real-time observation and control of viscosity changes in biological or chemical reactions.
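
The calibration idea behind the device can be sketched as follows: fit tip displacement against known viscosities at a fixed flow rate, then invert the fit for an unknown sample. The numbers below are loosely based on the displacements quoted above but are not measured data.

```python
import numpy as np

# Calibration standards at a fixed flow rate of 60 mL/hr (illustrative values only).
viscosity_cP    = np.array([1.0, 5.0, 10.0, 15.0])    # water-glycerol standards
displacement_um = np.array([0.12, 0.55, 1.00, 1.50])  # pillar-tip displacement

slope, intercept = np.polyfit(viscosity_cP, displacement_um, 1)

def estimate_viscosity(measured_displacement_um):
    """Invert the linear calibration to get viscosity from a measured displacement."""
    return (measured_displacement_um - intercept) / slope

print(f"calibration: displacement = {slope:.3f} um/cP * viscosity + {intercept:.3f} um")
print(f"sample with 0.8 um displacement -> about {estimate_viscosity(0.8):.1f} cP")
```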

Keywords: Computational Fluid Dynamics (CFD), high aspect ratio, micropillar array, viscometer

Procedia PDF Downloads 247
17799 Strategic Development of Urban Environmental Management Based on Good Governance - Case Study of Waste Management of Tehran

Authors: A. Farhad Sadri, B. Ali Farhadi, C. Nasim Shalamzari

Abstract:

Waste management is a principle of urban and environmental governance. Waste management in the Tehran metropolis requires good strategies for better governance. Using good urban governance principles together with eight main indexes can be an appropriate basis for this aim. One of the reasonable tools in this field is the SWOT method, which makes it possible to compare opportunities, threats, weaknesses, and strengths using IFE and EFE matrices. The results of these matrices, respectively 2.533 and 2.403, show that the waste management system of the Tehran metropolis has performed weakly with regard to internal factors and has not performed well in using opportunities and dealing with threats. In this research, the prioritization and real value of each of the 24 waste management strategies for the Tehran metropolis have been surveyed in the context of good governance derived from the Quantitative Strategic Planning Matrix (QSPM); the Kolmogorov-Smirnov test (statistic 1.549, significance level 0.073) was used to check the normality of the final values and strategy utilities, and analysis of variance (ANOVA) was calculated for all SWOT strategies. Duncan’s test results for the four groups of WT, ST, WO, and SO strategies show no significant difference. In addition to mean comparison by Duncan’s method, the LSD (least significant difference) test has been used at the 5% probability level, and finally, seven strategies and a final model of the Tehran metropolitan waste management strategy have been defined. Increasing public confidence through budget transparency, developing and improving the legal structure (rule-of-law governance), greater responsibility toward the requirements of the private sector, increasing recycling rates, and real, effective participation of people and NGOs in improving waste management are the main strategies obtained based on good urban governance principles.
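
For readers unfamiliar with IFE/EFE scoring, the sketch below shows how a total weighted score such as the 2.533 and 2.403 quoted above is computed; the factors, weights and ratings are invented for illustration and are not the study's.

```python
# IFE/EFE total weighted score: each factor gets a weight (summing to 1) and a
# rating (1-4); the score is the sum of weight * rating. Factors are illustrative only.
ife_factors = [
    # (factor, weight, rating)
    ("budget transparency",      0.20, 3),
    ("legal structure",          0.15, 2),
    ("recycling infrastructure", 0.25, 2),
    ("public participation",     0.20, 3),
    ("operational efficiency",   0.20, 3),
]

assert abs(sum(w for _, w, _ in ife_factors) - 1.0) < 1e-9, "weights must sum to 1"
total_weighted_score = sum(w * r for _, w, r in ife_factors)
print(f"IFE total weighted score: {total_weighted_score:.3f}")
# By convention, total scores below 2.5 point to a weak position on that dimension.
```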

Keywords: waste, strategy, environmental management, urban good governance, SWOT

Procedia PDF Downloads 321
17798 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they can enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as the knowledge regarding the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Thus, such reactions are often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user via the visual method. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated with regard to the model, leading to the highlighting of the nodes causing unsatisfiability (i.e. error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user via the visual method. Thus, in this research we present a framework to enable a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. A tool such as this enables an end-user to determine the empirical analysis to perform to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 255
17797 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to promote the detention of the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney’s northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. However, source control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, the existing policies were compared in Australian cities, a country considered experienced in the use of this technique, and in Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities where these policies are local references. The Australian and Brazilian cities are located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts greater than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, it may cause an increase in peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient value adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD designs among Brazilian and Australian municipalities, especially with regard to PSD values, which lie on opposite sides of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow for determining which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems and the parameters related to rain-flow methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 180
17796 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning and machine learning) for capturing different aspects of the behavior, activities and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between a set of states. However, the AAL domain currently lacks approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to help users in appropriate tasks and situations. Furthermore, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A certain situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent needs to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not. Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user’s context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device, service and user profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
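
As a rough illustration of the probability estimation described above, the sketch below updates the belief that a rule matches the current context from one piece of sensor evidence using Bayes' rule; the rule name, prior and likelihoods are hypothetical stand-ins for expert-defined values, not part of the presented approach's actual implementation.

```python
# Bayesian update of the belief that a rule/action fits the current context,
# given one sensor observation. Priors and likelihoods are hypothetical expert values.
def update_rule_probability(prior, p_evidence_given_rule, p_evidence_given_not_rule):
    """Posterior probability that the rule matches the context, given one observation."""
    numerator = p_evidence_given_rule * prior
    denominator = numerator + p_evidence_given_not_rule * (1.0 - prior)
    return numerator / denominator

# prior from expert knowledge; evidence: a motion sensor fired in the kitchen
prior = 0.30
posterior = update_rule_probability(prior,
                                    p_evidence_given_rule=0.85,
                                    p_evidence_given_not_rule=0.10)
print(f"P(rule 'suggest meal preparation assistance' | evidence) = {posterior:.2f}")

# Comparing this learned posterior against the probability encoded in the current
# semantic profile is what drives the adaptation step: the representation is updated
# when the learned value diverges from the pre-defined expert value.
```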

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 281
17795 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that positive comprehension becomes possible when new information becomes part of a student’s subjective experience and when linkages between the attributes of notions and the various ways of presenting them can be established. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students’ comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student’s subjective experience -emotional and value, contextual, procedural, communicative- during the educational process; (3) establishing links between different ways to present mathematical information; and (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. Identifying how the proposed methods and technology influence understanding of the material used in teaching mathematics was the research’s primary goal. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics -'Derivative' and 'Trigonometric functions'- was evaluated. Control group participants were taught using traditional methods. Students in the experimental group were taught using the holistic method: under the teacher’s guidance, they carried out problems designed to establish linkages between a notion's characteristics, to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The use of the technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. The results of the experiment were analyzed by presenting students in each of the groups with a final test in each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate the test results. The Pearson criterion was used to reveal the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. It was also revealed (using Student’s t-test) that the students of the experimental group reliably (p = 0.0001) solved more problems in comparison with those in the control group. The results obtained allow us to conclude that improved comprehension and assimilation of the study material took place as a result of applying the implemented methods and techniques.
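
The sketch below reproduces, on synthetic numbers, the two tests named in the abstract (a Pearson chi-squared test on pass/fail counts for the modeling test and Student's t-test on the number of problems solved); the counts and scores are invented and only the procedure is illustrative.

```python
import numpy as np
from scipy import stats

# Pass / fail counts for the situational-modeling test (rows: experimental, control).
# The counts are invented; only the group sizes match the abstract.
contingency = np.array([[118, 24],    # experimental group, n = 142
                        [ 62, 52]])   # control group,      n = 114
chi2, p_chi2, dof, _ = stats.chi2_contingency(contingency)
print(f"chi-squared = {chi2:.1f}, dof = {dof}, p = {p_chi2:.4g}")

# Number of problems solved per student (synthetic scores).
rng = np.random.default_rng(2)
experimental = rng.normal(7.5, 1.5, 142)
control = rng.normal(6.3, 1.5, 114)
t_stat, p_t = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_t:.4g}")
```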

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 176
17794 Navigating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management

Authors: Derrick Mirindi, Frederic Mirindi, Oluwakemi Oshineye

Abstract:

The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. This work specifically focuses on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and Web-Based platforms, in improving stakeholder engagement and project outcomes. Through existing literature with examples of failed projects, the study highlights how the evolution of digital tools will serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows to mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates for a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.

Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software

Procedia PDF Downloads 83
17793 Trauma in the Unconsoled: A Crisis of the Self

Authors: Assil Ghariri

Abstract:

This article studies the process of rewriting the self through memory in Kazuo Ishiguro’s novel The Unconsoled (1995). It deals with the journey that the protagonist, Mr. Ryder, takes through the unconscious in search of his real self, in which trauma stands as an obstacle. The article uses Carl Jung’s theory of archetypes. Trauma, in this article, is discussed as one of the true obstacles of the unconscious that prevent people from realizing the truth about their selves.

Keywords: Carl Jung, Kazuo Ishiguro, memory, trauma

Procedia PDF Downloads 402
17792 Effect of Synthesis Parameters on Crystal Size and Perfection of Mordenite and Analcime

Authors: Zehui Du, Chaiwat Prapainainar, Paisan Kongkachuichay, Paweena Prapainainar

Abstract:

The aim of this work was to obtain small crystal size and high crystallinity for mordenites and analcimes by modifying the aging time, agitation, water content, crystallization temperature and crystallization time. Two different hydrothermal methods were studied. Both methods used Na2SiO3 as the silica source, NaAlO2 as the aluminum source, and NaOH as the alkali source. The first method used HMI as the template, while the second method did not use a template. Mordenite crystals with a spherical shape and a bimodal size distribution of about 1 and 5 µm were obtained from the first method using 24 hr aging time, 170°C and 24 hr crystallization. Mordenites with high crystallinity were formed when an agitation system was used in the crystallization process. It was also found that aging times of 2 hr and 24 hr did not much affect the formation of mordenite crystals. Analcime crystals with a spherical shape and faceted surfaces, with sizes between 13 and 15 µm, were formed by the second method using 30 minutes aging time, 170°C and 24 hr crystallization without calcination. By increasing the water content, the crystallization process was slowed down, resulting in smaller analcime crystals. Larger analcime crystals were observed when the samples were calcined at 300°C and 580°C. A higher calcination temperature led to higher crystal growth and resulted in a larger crystal size. Finally, mordenite and analcime were used as fillers in zeolite/Nafion composite membranes to address the fuel crossover problem in direct alcohol fuel cells.

Keywords: analcime, hydrothermal synthesis, mordenite, zeolite

Procedia PDF Downloads 264
17791 Characterization of Nano Coefficient of Friction through LFM of Superhydrophobic/Oleophobic Coatings Applied on 316L SS

Authors: Hamza Shams, Sajid Saleem, Bilal A. Siddiqui

Abstract:

This paper investigates the nano-level coefficient of friction of commercially available superhydrophobic/oleophobic coatings when applied over 316L SS. 316L stainless steel, or marine stainless steel, has been selected for its widespread use in structures, marine and biomedical applications. The coatings were investigated in harsh sand-storm and seawater environments. The particle size of the sand during the procedure was carefully selected to simulate sand-storm conditions, and the sand speed was carefully modulated to simulate actual wind speed during a sand-storm. Sample preparation was carried out using the methodology prescribed by the coating manufacturer. The coating’s adhesion and thickness were verified before and after the experiment with the use of Scanning Electron Microscopy (SEM). The value of the nano-level coefficient of friction has been determined using Lateral Force Microscopy (LFM). The analysis has been used to formulate a value of the friction coefficient, which in turn is indicative of the amount of wear the coating can bear before the base substrate is exposed to the harsh environment. The analysis aims to validate the coefficient of friction value as marketed by the coating manufacturers and, more importantly, to test the coating in real-life applications to justify its use. It is expected that the coating would resist exposure to the harsh environment for a considerable amount of time and, further, would prevent the sample from corroding in the process.

Keywords: 316L SS, scanning electron microscopy, lateral force microscopy, marine stainless steel, oleophobic coating, superhydrophobic coating

Procedia PDF Downloads 486
17790 Effects of Residence Time on Selective Absorption of Hydrogen Sulphide

Authors: Dara Satyadileep, Abdallah S. Berrouk

Abstract:

Selective absorption of hydrogen sulphide (H2S) using methyldiethanolamine (MDEA) has become a point of interest as a means of minimizing the capital and operating costs of gas sweetening plants. This paper discusses the importance of optimum design of column internals to best achieve H2S selectivity using MDEA. To this end, a kinetics-based process simulation model has been developed for a commercial gas sweetening unit. Trends of sweet gas H2S and CO2 contents as a function of fractional active area (and hence residence time) have been explained through analysis of the interdependent heat and mass transfer phenomena. Guidelines for column internals design in order to achieve the desired degree of H2S selectivity are provided. The effectiveness of various operating conditions in achieving H2S selectivity for an industrial absorber with fixed internals is also investigated.

Keywords: gas sweetening, H2S selectivity, methyldiethanol amine, process simulation, residence time

Procedia PDF Downloads 344
17789 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis

Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman

Abstract:

Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from the Nigerian 2022 recorded cases for Epi weeks 1 – 52. Transitioning from qualitative to quantitative, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions’ economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation. The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
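
As a worked illustration of the cost-effectiveness measures named above, the sketch below computes IAR, ACER and ICER for two hypothetical intervention strategies, using the definitions commonly applied in optimal-control cost-effectiveness studies (IAR = infections averted / recoveries, ACER = cost / infections averted, ICER = incremental cost / incremental infections averted); all figures are invented and are not the study's results.

```python
# Two hypothetical strategies with invented costs and epidemiological effects.
strategies = {
    #                cost (USD)       infections averted   recoveries
    "strategy A": dict(cost=120_000.0, averted=1800.0, recovered=950.0),
    "strategy B": dict(cost=210_000.0, averted=2600.0, recovered=1400.0),
}

for name, s in strategies.items():
    iar = s["averted"] / s["recovered"]          # Infection Averted Ratio
    acer = s["cost"] / s["averted"]              # Average Cost-Effectiveness Ratio
    print(f"{name}: IAR = {iar:.2f}, ACER = {acer:.1f} USD per infection averted")

# Incremental Cost-Effectiveness Ratio of the more expensive strategy over the cheaper one.
a, b = strategies["strategy A"], strategies["strategy B"]
icer = (b["cost"] - a["cost"]) / (b["averted"] - a["averted"])
print(f"ICER of B over A = {icer:.1f} USD per additional infection averted")
```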

Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness

Procedia PDF Downloads 86
17788 Changes in Textural Properties of Zucchini Slices with Deep-Fat-Frying

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

Changes in the textural properties of zucchini slices under the effects of frying conditions were investigated. Frying time and temperature were the process variables of interest, along with slice thickness. Slice thickness was studied at three levels (2, 3, and 4 mm). The frying process was performed at two temperature levels (160 and 180 °C), each for six different process time periods (1, 2, 3, 5, 8 and 10 min). Sunflower oil was used as the frying oil. Before frying, the zucchini slices were thermally processed in boiling water for 90 seconds to inactivate at least 80% of the plant’s enzymes. After this thermal process, the zucchini slices were fried in an industrial fryer at the specified temperature and time pairs. The fried slices were subjected to texture profile analysis (TPA) to determine their textural properties. In this respect, the hardness, elasticity, cohesion, chewiness and firmness values of the slices were determined. Statistical analysis indicated significant variations in the studied textural properties with process conditions (p < 0.05). Hardness and firmness were also determined for fresh and thermally processed zucchini slices for comparison. Differences in the hardness and firmness of fresh, thermally processed and fried slices were found to be significant (p < 0.05). This project (113R015) has been supported by TUBITAK.

Keywords: sunflower oil, hardness, firmness, slice thickness, frying temperature, frying time

Procedia PDF Downloads 444
17787 Geographic Legacies for Modern Day Disease Research: Autism Spectrum Disorder as a Case-Control Study

Authors: Rebecca Richards Steed, James Van Derslice, Ken Smith, Richard Medina, Amanda Bakian

Abstract:

Elucidating gene-environment interactions for heritable disease outcomes is an emerging area of disease research, with genetic studies informing hypotheses for environment and gene interactions underlying some of the most confounding diseases of our time, like autism spectrum disorder (ASD). Geography has thus far played a key role in identifying environmental factors contributing to disease, but its use can be broadened to include genetic and environmental factors that have a synergistic effect on disease. Through the use of family pedigrees and disease outcomes together with life-course residential histories, space-time clustering of generations at critical developmental windows can provide further understanding of (1) environmental factors that contribute to disease patterns in families, (2) the susceptible critical windows of development most impacted by the environment, and (3) the windows most likely to lead to an ASD diagnosis. This paper introduces a retrospective case-control study that utilizes pedigree data, health data, and residential life-course location points to find space-time clustering of ancestors with a grandchild/child with a clinical diagnosis of ASD. Finding space-time clusters of ancestors at critical developmental windows serves as a proxy for shared environmental exposures. The authors refer to geographic life-course exposures as geographic legacies. Identifying space-time clusters of ancestors creates a bridge for researching exposures of past generations that may impact the health of modern-day progeny. Results from the space-time cluster analysis show multiple clusters for the maternal and paternal pedigrees. The paternal grandparent pedigree resulted in the most space-time clustering for the birth and childhood developmental windows. No statistically significant clustering was found for the adolescent years. These results will be further studied to identify the specific shared space-time environmental exposures. In conclusion, this study has found significant space-time clusters of parents and grandparents for both maternal and paternal lineages. These results will be used to identify which environmental exposures have been shared by family members at critical developmental windows, and additional analysis will be applied.

Keywords: family pedigree, environmental exposure, geographic legacy, medical geography, transgenerational inheritance

Procedia PDF Downloads 116
17786 Development and Evaluation of Gastro Retentive Floating Tablets of Ayurvedic Vati Formulation

Authors: Imran Khan Pathan, Anil Bhandari, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

Floating tablets of Marichyadi Vati were developed with the aim of prolonging its gastric residence time and increasing the bioavailability of the drug. Rapid gastrointestinal transit could result in incomplete drug release from the drug delivery system above the absorption zone, leading to diminished efficacy of the administered dose. The tablets were prepared by a wet granulation technique, using HPMC E50 LV as the matrixing agent, Carbopol as a floating enhancer, microcrystalline cellulose as a binder, and sodium bicarbonate as the effervescent agent, together with other excipients. A simplex lattice design was used for the selection of variables for the tablet formulation. The formulation was optimized on the basis of floating time and in vitro drug release. The results showed that the floating lag time for the optimized formulation was 61 seconds, with about 97.32% of the total drug released within 3 hours. The in vitro release profile of the drug from the formulation could be best expressed by zero-order kinetics, with the highest linearity (r² = 0.9943). It was concluded that a gastroretentive drug delivery system can be developed for Marichyadi Vati containing piperine to increase the residence time of the drug in the stomach and thereby increase its bioavailability.
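
The zero-order fit mentioned above (cumulative release proportional to time) can be checked as in the sketch below; the time points and release percentages are synthetic, not the study's data.

```python
import numpy as np

# Zero-order release check: regress cumulative % drug released linearly against
# time, Q(t) = k0 * t, and report the rate constant and r². Values are invented.
time_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
release_pct = np.array([0.0, 16.5, 33.0, 48.0, 65.0, 81.5, 97.3])

k0, intercept = np.polyfit(time_h, release_pct, 1)
fitted = k0 * time_h + intercept
r2 = 1.0 - np.sum((release_pct - fitted) ** 2) / np.sum((release_pct - release_pct.mean()) ** 2)

print(f"zero-order rate constant k0 = {k0:.1f} %/h, r² = {r2:.4f}")
```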

Keywords: piperine, Marichyadi Vati, gastroretentive drug delivery, floating tablet

Procedia PDF Downloads 457
17785 Attenuation Scale Calibration of an Optical Time Domain Reflectometer

Authors: Osama Terra, Hatem Hussein

Abstract:

Calibration of an Optical Time Domain Reflectometer (OTDR) is crucial for the accurate determination of the loss budget of long optical fiber links. In this paper, the calibration of the attenuation scale of an OTDR using two different techniques is discussed and implemented. The first technique is the external modulation (EM) method. A setup is proposed to calibrate an OTDR over a dynamic range of around 15 dB based on the EM method. Afterwards, the OTDR is calibrated using two standard reference fibers (SRF). Both SRFs are calibrated using the cut-back technique; one of them is calibrated at our home institute (the National Institute of Standards – NIS), while the other is calibrated at the National Physical Laboratory (NPL) of the United Kingdom to confirm our results. In addition, the parameters contributing to the calibration uncertainty are thoroughly investigated. Although the EM method has several advantages over the SRF method, the uncertainty of the SRF method is found to surpass that of the EM method.

Keywords: optical time domain reflectometer, fiber attenuation measurement, OTDR calibration, external source method

Procedia PDF Downloads 465
17784 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach

Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou

Abstract:

In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole Quebec province in Canada. More precisely, we take into account two additional meteorological variables — cloudiness and wind speed — on top of temperature, as well as the use of multiple meteorological measurements taken at different locations on the territory. We also consider other minor improvements. Our final model shows an average MAPE score of 1.79% over an 8-year dataset.
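
For reference, the MAPE metric quoted above can be computed as in the brief sketch below; the load values are invented and serve only to show the calculation.

```python
import numpy as np

# MAPE = mean(|actual - forecast| / |actual|) * 100, on invented hourly load values (MW).
actual = np.array([21500.0, 22040.0, 23310.0, 24880.0, 25120.0, 24260.0])
forecast = np.array([21210.0, 22480.0, 23050.0, 25230.0, 24760.0, 24620.0])

mape = np.mean(np.abs((actual - forecast) / actual)) * 100.0
print(f"MAPE = {mape:.2f}%")
```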

Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering

Procedia PDF Downloads 103
17783 Comparison of Home Ranges of Radio Collared Jaguars (Panthera onca L.) in the Dry Chaco and Wet Chaco of Paraguay

Authors: Juan Facetti, Rocky McBride, Karina Loup

Abstract:

The Chaco Region of Paraguay is a key biodiverse area for the conservation of jaguars (Panthera onca), the largest feline of the Americas. It comprises five eco-regions, which hold important but decreasing populations of this species. In the last decades, the expansion of soybean over the Atlantic Forest has forced the relocation of cattle ranches towards the Chaco. Few studies of jaguar population densities in the American hemisphere have been done until now. In the region, the species is listed as vulnerable or threatened, and more information is needed to implement any conservation policy. Among the factors that threaten the populations are land-use change, habitat fragmentation, prey depletion and illegal hunting. The two largest eco-regions were studied: the Wet Chaco and the Dry Chaco. From 2002, more than 20 jaguars were captured and fitted with GPS collars. Data collected from 11 GPS collars were processed, transformed numerically and finally converted into maps for analysis. A total of 8,092 locations were determined over 1,867 days for four adult females (AF) and one adult male (AM) in the Wet Chaco, and for one AF, one juvenile male (JM) and four AM in the Dry Chaco. GIS and kernel methodology were used to calculate the daily distance of movement, home range (HR, 95% isopleth), and core area (considered as the 50% isopleth). In the Wet Chaco, HR were 56 km2 and 238 km2 for females and males, respectively, while in the Dry Chaco, HR were 685 km2 and 844.5 km2 for females and males, respectively, and 172 km2 for a juvenile. Core areas of individual activity were on average 11.5 km2 and 33.55 km2 for AF and AM, respectively, in the Wet Chaco, while in the Dry Chaco they were larger: 115 km2 for five AM, 225 km2 for an AF, and 32.4 km2 for a JM. In both eco-regions, only one relevant overlap of HR of adults was recorded: during the reproduction season, the HR (95% kernel) of one AM overlapped 49.83% with that of one AF. In the Wet Chaco, the maximum daily distance moved by an AF was 14.5 km and by the AM 11.6 km, while the maximum mean daily moved (MMDM) distance was 5.6 km for an AF and 3.1 km for an AM. In the Dry Chaco, the maximum daily distance was 61.7 km for an AF, 50.9 km for the AM and 6.6 km for the JM, while the MMDM distance was 13.2 km for an AM and 8.4 km for an AF. This study confirmed that, as encroachment on jaguar habitat increased, it resulted in fragmented landscapes that influence the spacing patterns of jaguars. Males used larger home ranges than the smaller females and covered larger distances than the females. There appeared to be important spatial segregation not only between females but also between males. It is likely that the larger areas used by males are partly caused by the sexual dimorphism in body size, which entails differences in prey requirements. This could explain the larger distances travelled daily by males.
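
A minimal sketch of how daily movement distances of the kind quoted above can be derived from GPS fixes (haversine distance between consecutive daily locations); the coordinates are invented points roughly in the Chaco, not actual collar data, and the MMDM figure in the abstract is derived from such daily means.

```python
import numpy as np

# Great-circle (haversine) distance between consecutive daily GPS fixes of one animal.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

# one fix per day for a single collared jaguar (latitude, longitude) - invented values
fixes = np.array([
    (-22.10, -59.80), (-22.14, -59.74), (-22.20, -59.70),
    (-22.18, -59.62), (-22.25, -59.60), (-22.31, -59.55),
])

daily_km = [haversine_km(*fixes[i], *fixes[i + 1]) for i in range(len(fixes) - 1)]
print(f"max daily distance = {max(daily_km):.1f} km, "
      f"mean daily moved distance = {np.mean(daily_km):.1f} km")
```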

Keywords: Chaco ecoregions, Jaguar, home range, Panthera onca, Paraguay

Procedia PDF Downloads 302
17782 Correlation between Polysaccharide Molecular Weight Changes and Pectinase Gene Expression during Papaya Ripening

Authors: Samira B. R. Prado, Paulo R. Melfi, Beatriz T. Minguzzi, João P. Fabi

Abstract:

Fruit softening is the main change that occurs during papaya (Carica papaya L.) ripening. It is characterized by the depolymerization of cell wall polysaccharides, especially the pectic fractions, which causes cell wall disassembly. However, it is uncertain how the modification of the two main pectin polysaccharide fractions (the water-soluble fraction, WSF, and the oxalate-soluble fraction, OSF) accounts for fruit softening. The aim of this work was to correlate molecular weight changes of the WSF and OSF with the gene expression of pectin-solubilizing enzymes (pectinases) during papaya ripening. Papaya fruits obtained from a producer were harvested and stored under specific conditions. The fruits were divided into five groups according to days after harvest. Cell walls from all groups of papaya pulp were isolated and fractionated (WSF and OSF). Expression profiles of pectinase genes were obtained according to the MIQE guidelines (Minimum Information for publication of Quantitative real-time PCR Experiments). The results showed an increased yield and a decreased molecular weight throughout ripening for both the WSF and the OSF. The gene expression data support the view that papaya softening is achieved by polygalacturonase (PG) up-regulation, the action of which might have been facilitated by the constant action of pectinesterases (PMEs). Moreover, the BGAL1 gene was up-regulated during ripening with a simultaneous galactose release, suggesting that galactosidases (GALs) could also account for pulp softening. The data suggest that the solubilization of galacturonans and the depolymerization of cell wall components were caused mainly by the action of PGs and GALs.
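
Relative expression in MIQE-compliant RT-qPCR studies such as this one is commonly reported with the 2^-ΔΔCt method; the sketch below shows that calculation for BGAL1 with invented Ct values and an assumed reference gene and unripe-fruit calibrator sample.

```python
# 2^-ddCt relative expression sketch: dCt = Ct(target) - Ct(reference gene),
# ddCt = dCt(sample) - dCt(calibrator). Ct values below are invented.
ct = {
    # sample_gene:    (target Ct, reference-gene Ct)
    "ripe_BGAL1":   (22.1, 18.0),
    "unripe_BGAL1": (25.4, 18.1),
}

d_ct_ripe = ct["ripe_BGAL1"][0] - ct["ripe_BGAL1"][1]
d_ct_unripe = ct["unripe_BGAL1"][0] - ct["unripe_BGAL1"][1]
dd_ct = d_ct_ripe - d_ct_unripe            # relative to the unripe calibrator
fold_change = 2.0 ** (-dd_ct)

print(f"BGAL1 fold change in ripe vs. unripe pulp: about {fold_change:.1f}x")
```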

Keywords: carica papaya, fruit ripening, galactosidases, plant cell wall, polygalacturonases

Procedia PDF Downloads 423