Search results for: backward facing step
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4298

398 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience

Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore

Abstract:

INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all Nigerian states from 2020 to 2022. The state TB program contributed 9% to 10% of national TB notifications each quarter between the first quarter of 2022 and the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation, and analysis to support informed decision-making in the health system. During the 2023 second National TB Testing Week (NTBTW), the Kano TB program aimed at early TB detection, prevention, and treatment. The state TB Situation Room gave the state an avenue for coordination and surveillance through real-time data reporting, review, analysis, and use during the NTBTW. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023, and to showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis, and feedback. A data room team was set up to receive real-time data via a Google link. Data received were analyzed using the Power BI analytics tool with a statistical significance level (alpha) of 0.05. RESULTS: At the end of the week-long activity, using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened 52,054 of the 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated, leading to the diagnosis of 445 TB patients (5% yield from presumptives) and the placement of 435 of them on treatment (98% enrolment). 
CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW markedly improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states, and nations is a step towards global TB eradication.
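The reported cascade indicators follow directly from the counts above; a minimal Python sketch reproducing them (the variable names are ours, the counts are those reported in the abstract):

```python
# TB care-cascade indicators from the 2023 NTBTW counts reported above.
eligible = 72_112      # individuals eligible for screening
screened = 52_054      # people actually screened for TB
presumptive = 9_910    # presumptive TB clients identified and evaluated
diagnosed = 445        # TB patients diagnosed
enrolled = 435         # TB patients placed on treatment

screening_rate = screened / eligible        # reported as 72%
diagnostic_yield = diagnosed / presumptive  # reported as ~5%
enrolment = enrolled / diagnosed            # reported as 98%

print(f"screening rate:   {screening_rate:.0%}")
print(f"diagnostic yield: {diagnostic_yield:.1%}")
print(f"enrolment:        {enrolment:.0%}")
```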

Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)

Procedia PDF Downloads 74
397 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step Toward Changing the Standard IUD Insertion Procedure

Authors: Desjardin, Michaels, Martinez, Ulmann

Abstract:

The copper IUD is one of the most efficient and cost-effective methods of contraception. However, pain at insertion hampers the use of this method. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including emergency contraception. The standard insertion procedure for a copper IUD usually involves measuring the uterine cavity with a hysterometer and using a tenaculum to facilitate device insertion. Both procedures cause patient pain, which often limits uptake of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The design of the inserter includes a flexible inflatable membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine position or a narrow cervical canal. Moreover, this inserter enables direct IUD insertion with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as their contraceptive method. IUD insertions were performed by four healthcare providers. Operators completed a questionnaire evaluating the effectiveness of the procedure (including correct fundal placement of the IUD and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement occurred in more than 93% of women, a standard efficacy rate. It also demonstrates that IUD insertion resulted in no, light, or moderate pain, predominantly in nulliparous women. No insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction from both patients and practitioners. 
In addition, this modified inserter simplified the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a cervical tenaculum to pull on the cervix (90%). Avoiding both procedures contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It simplifies the insertion procedure: there is no need for pre-insertion hysterometry or for traction on the cervix with a tenaculum. Increased comfort during insertion should allow wider use of the method in nulliparous women and for emergency contraception. In addition, pain is often underestimated by practitioners, yet fear of pain is clearly one of the blocking factors, as indicated by the analysis of the questionnaires. This evaluation brings useful information on the use of this modified inserter for a standard copper IUD and promising perspectives for changing the standard IUD insertion procedure.

Keywords: contraception, IUD, innovation, pain

Procedia PDF Downloads 84
396 Numerical Investigation of the Effects of Surfactant Concentrations on the Dynamics of Liquid-Liquid Interfaces

Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji

Abstract:

Theoretically, two mathematical interfaces (fluid-solid and fluid-fluid) exist when a liquid film is present on a solid surface. These interfaces overlap if the mineral surface is oil-wet or mixed-wet, and the effects of disjoining pressure are therefore significant at both boundaries. Hence, dewetting is a necessary process that could detach oil from the mineral surface. However, if the thin water film directly in contact with the surface is thick enough, the disjoining pressure can be taken as zero at the liquid-liquid interface. Recent studies show that integrating fluid-fluid interactions with fluid-rock interactions is an important step towards a holistic understanding of smart water effects. Experiments have shown that the brine solution can alter the micro-forces at oil-water interfaces, and these ion-specific interactions lead to oil emulsion formation. The natural emulsifiers present in crude oil behave as polyelectrolytes when the oil interfaces with low-salinity water. Wettability alteration caused by low-salinity waterflooding during the Enhanced Oil Recovery (EOR) process results from the activities of divalent ions. However, polyelectrolytes are said to lose their viscoelastic property with increasing cation concentrations. In this work, the influence of cation concentrations on the dynamics of viscoelastic liquid-liquid interfaces is numerically investigated. The resultant ion concentrations at the crude oil/brine interface were estimated using a surface complexation model. Subsequently, the ion concentration parameter was integrated into a mathematical model describing its effects on the dynamics of a viscoelastic interfacial thin film. Film growth, stability, and rupture were measured after different time steps for three types of fluid (Newtonian, purely elastic, and viscoelastic). 
The interfacial films respond to exposure time similarly, with an increasing growth rate that results in the formation of more droplets over time. Increased surfactant accumulation at the interface produces a higher film growth rate, which leads to instability and the subsequent formation of more satellite droplets. Purely elastic and viscoelastic properties limit the film growth rate, and the consequent loss of film stability, compared with the Newtonian fluid. Therefore, low salinity and a reduced concentration of potential determining ions in injection water will lead to improved interfacial viscoelasticity.

Keywords: liquid-liquid interfaces, surfactant concentrations, potential determining ions, residual oil mobilization

Procedia PDF Downloads 144
395 Bringing the World to Net Zero Carbon Dioxide by Sequestering Biomass Carbon

Authors: Jeffrey A. Amelse

Abstract:

Many corporations aspire to become Net Zero Carbon Dioxide by 2035-2050. This paper examines what it will take to achieve those goals. Achieving Net Zero CO₂ requires an understanding of where energy is produced and consumed, the magnitude of CO₂ generation, and a proper understanding of the Carbon Cycle. The latter leads to the distinction between CO₂ sequestration and biomass carbon sequestration. Short reviews are provided of technologies previously proposed for reducing CO₂ emissions from fossil fuels or substituting renewable energy, to focus on their limitations and to show that none offers a complete solution. Of these, CO₂ sequestration is poised to have the largest impact. It will simply cost money, its scale-up is a huge challenge, and it will not be a complete solution. CO₂ sequestration is still at the demonstration and semi-commercial scale. Transportation accounts for only about 30% of total U.S. energy demand, and renewables account for only a small fraction of that sector. Yet bioethanol production consumes 40% of the U.S. corn crop, and biodiesel consumes 30% of U.S. soybeans. It is unrealistic to believe that biofuels can completely displace fossil fuels in the transportation market. Bioethanol is traced through its Carbon Cycle and shown to be both energy-inefficient and an inefficient use of biomass carbon. Both biofuels and CO₂ sequestration reduce future CO₂ emissions from the continued use of fossil fuels; they will not remove CO₂ already in the atmosphere. Planting more trees has been proposed as a way to reduce atmospheric CO₂, but trees are a temporary solution: when they complete their Carbon Cycle, they die and release their carbon as CO₂ to the atmosphere. Thus, planting more trees is just 'kicking the can down the road.' The only way to permanently remove CO₂ already in the atmosphere is to break the Carbon Cycle by growing biomass from atmospheric CO₂ and sequestering biomass carbon. Sequestering tree leaves is proposed as a solution. 
Unlike wood, leaves have a short Carbon Cycle time constant: they renew and decompose every year. Allometric equations from the USDA indicate that, in theory, sequestering only a fraction of the world's tree leaves can get the world to Net Zero CO₂ without disturbing the underlying forests. How can tree leaves be permanently sequestered? It may be as simple as rethinking how landfills are designed, to discourage instead of encourage decomposition. In traditional landfills, municipal waste undergoes rapid initial aerobic decomposition to CO₂, followed by slow anaerobic decomposition to methane and CO₂. The latter can take hundreds to thousands of years. The first step in anaerobic decomposition is the hydrolysis of cellulose to release sugars, which those who have worked on cellulosic ethanol know is challenging for a number of reasons. The key to permanent leaf sequestration may be keeping the landfills dry and exploiting known inhibitors of anaerobic bacteria.
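The order of magnitude behind this claim can be checked with a back-of-envelope calculation; every input figure below is an illustrative assumption of ours, not a value from the paper or the USDA allometric equations:

```python
# Back-of-envelope check of the leaf-sequestration idea. Every input here
# is an illustrative assumption, not a figure from the paper or the USDA.
CO2_TO_C = 12 / 44                  # mass ratio of carbon in CO2

co2_emissions_gt = 37.0             # assumed global fossil CO2 emissions, Gt/yr
carbon_to_remove_gt = co2_emissions_gt * CO2_TO_C

leaf_litterfall_gt = 30.0           # assumed global annual leaf fall, Gt dry mass
leaf_carbon_fraction = 0.5          # assumed carbon fraction of dry leaves
leaf_carbon_gt = leaf_litterfall_gt * leaf_carbon_fraction

fraction_needed = carbon_to_remove_gt / leaf_carbon_gt
print(f"fraction of annual leaf fall to sequester: {fraction_needed:.0%}")
```

Under these assumed inputs the required fraction comes out below one, which is the shape of the argument made above; the real numbers belong to the full paper.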

Keywords: carbon dioxide, net zero, sequestration, biomass, leaves

Procedia PDF Downloads 130
394 Fibrin Glue Reinforcement of Choledochotomy Closure Suture Line for Prevention of Bile Leak in Patients Undergoing Laparoscopic Common Bile Duct Exploration with Primary Closure: A Pilot Study

Authors: Rahul Jain, Jagdish Chander, Anish Gupta

Abstract:

Introduction: Laparoscopic common bile duct exploration (LCBDE) allows cholecystectomy and the removal of common bile duct (CBD) stones to be performed in the same sitting, thereby decreasing hospital stay. A choledochotomy for CBD exploration can be closed primarily with an absorbable suture material but can lead to postoperative biliary leakage. In this study, we tried to find a solution to further lower the incidence of bile leakage by using fibrin glue to reinforce the choledochotomy suture line. Fibrin glue has haemostatic and sealing actions, strengthening the last step of physiological coagulation, and a biostimulatory effect that favours the formation of new tissue matrix. Methodology: This study was conducted at a tertiary care teaching hospital in New Delhi, India, from 2011 to 2013. Twenty patients with CBD stones documented on MRCP and a CBD diameter of 9 mm or more were included. Patients were randomized into two groups: Group A, in which the choledochotomy was closed with polyglactin 4-0 suture and the suture line was reinforced with fibrin glue, and Group B, in which the choledochotomy was closed with polyglactin 4-0 suture alone. The two groups were evaluated and compared on clinical parameters such as operative time, drain content, drain output, number of days the drain was required, blood loss and transfusion requirements, length of postoperative hospital stay, and conversion to open surgery. Results: The operative time ranged from 60 to 210 min (mean 131.50 min) in Group A and from 65 to 300 min (mean 140 min) in Group B. Blood loss ranged from 10 to 120 ml (mean 51.50 ml) in Group A and from 10 to 200 ml (mean 53.50 ml) in Group B. There was no bile leak in Group A, while there were bile leaks in 2 cases in Group B (minimum 0, maximum 900 ml, mean 97 ml); with a p value of 0.147, the difference in bile leak between the groups was not statistically significant. 
The minimum and maximum serous drainage were nil and 80 ml (mean 11 ml) in Group A and nil and 270 ml (mean 72.50 ml) in Group B. The p value was 0.028, which is statistically significant; serous leakage in Group A was therefore significantly less than in Group B. The drains in Group A were removed at 2 to 4 days (mean 3 days), while in Group B at 2 to 9 days (mean 3.9 days). Patients in Group A stayed in hospital postoperatively for 3 to 8 days (mean 5.30), while in Group B the stay ranged from 3 to 10 days (mean 5 days). Conclusion: Fibrin glue application on the CBD decreases bile leakage, though not to a statistically significant degree. It can significantly decrease postoperative serous drainage after LCBDE. Fibrin glue application on the CBD is a safe and easy technique without significant adverse effects and can help less experienced surgeons perform LCBDE.

Keywords: bile leak, fibrin glue, LCBDE, serous leak

Procedia PDF Downloads 215
393 Determination of the Relative Humidity Profiles in an Internal Micro-Climate Conditioned Using Evaporative Cooling

Authors: M. Bonello, D. Micallef, S. P. Borg

Abstract:

Driven by increased comfort standards but, at the same time, a high energy consciousness, energy-efficient space cooling has become an essential aspect of building design. Its aim is simple: to provide satisfactory thermal comfort for individuals in an interior space using low-energy-consumption cooling systems. In this context, evaporative cooling is both an energy-efficient and an eco-friendly cooling process. In the past two decades, several academic studies have examined the thermal comfort produced by evaporative cooling systems, including studies on temperature profiles, air speed profiles, and the effects of clothing and personal activity. To the best knowledge of the authors, no studies have yet considered the analysis of relative humidity (RH) profiles in a space cooled using evaporative cooling. Such a study would determine the effect of different humidity levels on a person's thermal comfort and aid in improving the design of such future systems. Under this premise, the research objective is to characterise the resulting RH profiles in a chamber micro-climate cooled by an evaporative cooling system in which the inlet air speed, temperature, and humidity content are varied. The chamber will be modelled using Computational Fluid Dynamics (CFD) in ANSYS Fluent. Relative humidity will be modelled using a species transport model, while the k-ε RNG formulation is the proposed turbulence model. The model will be validated against measurements taken in an identical test chamber under the different inlet conditions mentioned above, followed by verification of the model's mesh and time step. The verified and validated model will then be used to simulate other inlet conditions that would be impractical to reproduce in the actual chamber. 
More details of the modelling and experimental approach will be provided in the full paper. The main conclusions from this work are two-fold: the micro-climatic spatial distribution of relative humidity within the room is important to consider when investigating comfort at occupant level; and a human being's thermal comfort (based on Predicted Mean Vote – Predicted Percentage Dissatisfied [PMV-PPD] values) varies with the local relative humidity. The study provides the necessary groundwork for investigating the micro-climatic RH conditions of environments cooled using evaporative cooling. Future work may also target ways in which evaporative cooling systems can be improved to better the thermal comfort of human beings, specifically relating to the humidity content around a sedentary person.
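In a species-transport model, RH is a post-processed quantity derived from the local water-vapour mass fraction and temperature. The conversion can be sketched as follows; the Tetens saturation-pressure correlation and the example inputs are our assumptions, not values from the study:

```python
import math

# Convert a water-vapour mass fraction (as carried by a species-transport
# CFD model) into relative humidity. The Tetens correlation and all inputs
# are illustrative assumptions, not values from the study.

def saturation_pressure_pa(temp_c: float) -> float:
    """Tetens approximation for saturation vapour pressure over water (Pa)."""
    return 610.78 * math.exp(17.27 * temp_c / (temp_c + 237.3))

def relative_humidity(mass_fraction_h2o: float, temp_c: float,
                      pressure_pa: float = 101_325.0) -> float:
    """RH from the vapour partial pressure implied by the mass fraction."""
    m_h2o, m_air = 18.015, 28.965   # molar masses, g/mol
    mole_fraction = (mass_fraction_h2o / m_h2o) / (
        mass_fraction_h2o / m_h2o + (1 - mass_fraction_h2o) / m_air)
    partial_pressure = mole_fraction * pressure_pa
    return partial_pressure / saturation_pressure_pa(temp_c)

# Example: ~1% water vapour by mass at 25 °C and atmospheric pressure
print(f"RH ≈ {relative_humidity(0.01, 25.0):.0%}")
```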

Keywords: chamber micro-climate, evaporative cooling, relative humidity, thermal comfort

Procedia PDF Downloads 157
392 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably the so-called "hallucination" problem: the generation of outputs that are not grounded in the input data, which hinders adoption in production. A common practice to mitigate hallucination is to use a Retrieval Augmented Generation (RAG) system to ground LLMs' responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts via vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, a RAG system is not suitable for tabular data and subsequent data analysis tasks for multiple reasons, such as information loss, data format, and the retrieval mechanism. In this study, we explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, convert them into executable segments of code, and, in the final step, generate the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it was able to serve market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding without the need for programming skills. 
The implications extend beyond immediate analytics, paving the way for a new era of efficiency and advanced data utilization in the real estate industry.
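The planner/executor flow described above can be sketched as follows; `call_llm` is a hypothetical stand-in for whatever LLM API is used, stubbed here with canned responses so the control flow is runnable, and the sub-tasks and figures are invented for illustration:

```python
# Minimal sketch of the planning-and-execution plus code-generation pattern
# described above. `call_llm` is a hypothetical stand-in for any LLM API;
# it is stubbed with canned responses so the flow is runnable end to end.

def call_llm(prompt: str) -> str:
    """Stub LLM: returns a canned plan or code segment for demonstration."""
    if "Break the task" in prompt:
        return "1. Load the listings table\n2. Compute median price per district"
    return "result = {'District A': 1_200_000, 'District B': 950_000}"

def plan(task: str) -> list[str]:
    """Planning agent: split a complex analytics task into sub-tasks."""
    raw = call_llm(f"Break the task into numbered sub-tasks: {task}")
    return [line.split('. ', 1)[1] for line in raw.splitlines()]

def execute(subtask: str, namespace: dict) -> None:
    """Code-generation agent: emit and run a code segment per sub-task."""
    code = call_llm(f"Write Python for: {subtask}")
    exec(code, namespace)   # in production, sandbox this execution

task = "Median condo price per district in Singapore"
namespace: dict = {}
for subtask in plan(task):
    execute(subtask, namespace)
print(namespace["result"])
```

The final response would then be generated by a third LLM call over `namespace["result"]`, which is the "complete response from the output of the executed code" step in the abstract.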

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 88
391 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), that produce a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining because the cutting tool geometry is well defined and material removal is a time-independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g., energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only does the material removal vary with the dwell time, but any acceleration or deceleration of the machine/beam delivery system when performing raster paths will influence the actual geometry of the generated surface. Two different EB processes, Abrasive Waterjet Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered independent technologies, both can be described as time-dependent processes. AWJM can be considered a continuous process, where the etched material depends on the feed speed of the jet at each instant during the process. PLA processes, on the other hand, are usually described as discrete systems, and the total removed material is calculated by summing the contributions of the individual pulses fired during the process. The overlap of these pulses depends on the feed speed and the frequency between two consecutive pulses. However, if the feed speed is sufficiently slow compared with the frequency, consecutive pulses are close enough that the behaviour approaches that of a continuous process. Using this approximation, a generic continuous model can be formulated for both processes. 
The inverse problem is usually solved for this kind of process by simply setting the dwell time in proportion to the required depth of milling at each pixel on the surface, using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are only valid when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the machine dynamics on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used to choose the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
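The shortcoming of the naive proportional model can be illustrated on a discretised 1D version of the linear problem, where the etched depth is the beam footprint convolved with the per-pixel dwell time. The Gaussian footprint, grid, and target profile below are illustrative assumptions, not the paper's setup:

```python
import math

# 1D sketch of the linear (shallow-etch) model: depth_i = sum_j K[i][j] * dwell_j,
# where K holds the beam's energy footprint. Footprint width, grid, and the
# target freeform profile are illustrative assumptions, not the paper's data.

n = 40
x = [i / (n - 1) for i in range(n)]
sigma = 0.05   # assumed Gaussian footprint width

# Footprint matrix: effect of dwelling at pixel j on the depth at pixel i.
K = [[math.exp(-((xi - xj) ** 2) / (2 * sigma ** 2)) for xj in x] for xi in x]

target = [0.5 + 0.3 * math.sin(2 * math.pi * xi) for xi in x]  # prescribed profile

# Naive linear inverse: dwell time proportional to the required local depth.
row_sum = [sum(row) for row in K]
dwell_naive = [t / r for t, r in zip(target, row_sum)]

# Full linear inverse: solve K @ dwell = target by Gauss-Jordan elimination
# (still a linear model, but it accounts for footprint overlap between pixels).
A = [row[:] + [t] for row, t in zip(K, target)]      # augmented matrix
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(n):
        if r != col:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
dwell_exact = [A[i][n] / A[i][i] for i in range(n)]

def residual(dwell):
    """RMS mismatch between achieved and prescribed depth."""
    errs = [sum(K[i][j] * dwell[j] for j in range(n)) - target[i] for i in range(n)]
    return math.sqrt(sum(e * e for e in errs) / n)

print(f"residual: naive={residual(dwell_naive):.3f}, exact={residual(dwell_exact):.2e}")
```

Even here the proportional rule leaves a visible residual (mainly from footprint overlap and edge effects), which is the kind of error the adjoint-based optimization is introduced to remove for the full nonlinear, dynamics-aware model.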

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 277
390 Urban Waste Management for Health and Well-Being in Lagos, Nigeria

Authors: Bolawole F. Ogunbodede, Mokolade Johnson, Adetunji Adejumo

Abstract:

A high population growth rate, reactive infrastructure provision, and the inability of physical planning to cope with the pace of development are responsible for the wastewater crisis in the Lagos Metropolis. The septic tank is still the most prevalent wastewater holding system, yet there is a dearth of septage treatment infrastructure. Public wastewater treatment statistics relative to the 23 million people in Lagos State are worrisome: 1.85 billion cubic meters of wastewater are generated daily, and only 5% of the 26 million population is connected to the public sewerage system. This is compounded by inadequate budgetary allocation and erratic power supply over the last two decades. This paper explores a community participatory wastewater management alternative in the Oworonshoki Municipality of Lagos. The study is underpinned by decentralized wastewater management systems in built-up areas. The initiative addresses the five steps of the wastewater issue (generation, storage, collection, processing, and disposal) through participatory decision-making in two Oworonshoki Community Development Association (CDA) areas. Drone-assisted mapping highlighted building footprints. Structured interviews and focused group discussions with landlord associations in the CDA areas provided a collaborative platform for decision-making. Water stagnation in primary open drainage channels and in the natural retention ponds of fringing wetlands is traceable to the more frequent climate-change-induced tidal influences of recent decades. A rise in the water table, resulting in septic tank leakage and water pollution, is reported to be responsible for the increase in water-borne ailments documented in primary health centres, in addition to the unhealthy dumping of solid waste in the drainage channels. The effect of uncontrolled disposal renders surface waters and underground water systems unsafe for human and recreational use, destroys biotic life, and poisons the fragile sand barrier-lagoon urban ecosystems. 
A clustered decentralized system was conceptualized to service 255 households. Stakeholders agreed on a public-private partnership initiative for efficient wastewater service delivery.

Keywords: health, infrastructure, management, septage, well-being

Procedia PDF Downloads 177
389 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior from the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with numerical modelling of the complex reaction network of LDPE polymerization, taking the actual reaction conditions into consideration. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC comprising IR, viscometry, and multi-angle light scattering detectors is applied; it serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension characterize the polymeric flow behavior. The agreement between experimental and modelled results was excellent, especially considering that the multi-scale modelling approach involves no parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. 
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach thus makes it possible to predict and design LDPE processing behavior simply from process conditions such as feed streams and inlet temperatures and pressures.
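The Monte Carlo microstructure step can be illustrated with a toy chain-growth sampler: chains grow monomer by monomer, and short- or long-chain branches are inserted with fixed per-step probabilities. The probabilities below are illustrative assumptions, not outputs of the kinetic model:

```python
import random

# Toy Monte Carlo sketch of sampling a branched polymer microstructure:
# grow chains step by step, inserting short-chain branches (SCB) or
# long-chain branches (LCB) with fixed per-step probabilities. All
# probabilities are illustrative assumptions, not kinetic-model outputs.
random.seed(42)
P_TERM, P_SCB, P_LCB = 1e-3, 8e-3, 4e-4   # termination, SCB, LCB per step

def grow_chain() -> tuple[int, int, int]:
    """Return (chain length in monomers, SCB count, LCB count)."""
    length = scb = lcb = 0
    while True:
        length += 1
        r = random.random()
        if r < P_TERM:                       # chain termination
            return length, scb, lcb
        if r < P_TERM + P_SCB:               # short-chain branch
            scb += 1
        elif r < P_TERM + P_SCB + P_LCB:     # long-chain branch
            lcb += 1

chains = [grow_chain() for _ in range(2_000)]
total_len = sum(c[0] for c in chains)
mean_length = total_len / len(chains)                    # ~1 / P_TERM
scb_per_1000c = 1000 * sum(c[1] for c in chains) / (2 * total_len)

print(f"number-average chain length: {mean_length:.0f} monomers")
print(f"short-chain branches per 1000 C: {scb_per_1000c:.1f}")
```

In the actual approach, the per-step probabilities would come from the kinetic model of step one, and the sampled branch-on-branch topologies would feed the rheological model of step three.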

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 125
388 Evaluation and Preservation of Post-War Concrete Architecture: The Case of Lithuania

Authors: Aušra Černauskienė

Abstract:

The heritage of modern architecture is closely related to the materiality and technology used to implement its buildings. Concrete is one of the most ubiquitous post-war building materials, with enormous aesthetic and structural potential that architects used creatively both for everyday buildings and for exceptional architectural objects that have survived. The material, structural, and architectural development of concrete over the post-war years produced a remarkably rich and diverse typology of buildings, implemented using both unique handicraft skills and industrialized novelties. Nonetheless, in the public's view, concrete architecture is often treated as ugly and obsolete, and in Lithuania it also carries negative associations with the scarcity of the Soviet era. Moreover, aesthetic non-appreciation is not the only challenge concrete architecture faces: the buildings no longer meet contemporary requirements, being of poor energy class, offering little potential for transformation, and standing in obsolete surroundings. As a young heritage, concrete architecture is thus not yet sufficiently appreciated by society and heritage specialists, since little time has passed in which to rethink its meaning from a historical perspective. However, concrete architecture, though considered ambiguous, has its own character and specificity, which need to be carefully studied in terms of cultural heritage to avoid the risk of poor renovation or even demolition, which has risen markedly in Lithuania in recent decades. For example, several valuable pieces of post-war concrete architecture, such as the Banga restaurant and the Summer Stage in Palanga, were demolished without an understanding of their cultural value. Many unique concrete structures and raw concrete surfaces have been painted or plastered, with little attention paid to the appearance of the authentic material. 
Furthermore, this raises the question of how to preserve buildings of different typologies: for example, public buildings innovative in their aesthetic and spatial solutions, and mass housing areas built using precast concrete panels. It is evident that the most traditional preservation strategy, conservation, is not the only option for preserving post-war concrete architecture, and more options should be considered. The first step in choosing the right strategy in each case is an appropriate assessment of cultural significance. For this reason, an evaluation matrix for post-war concrete architecture is proposed. In one direction, an analysis of the different typological groups of buildings is suggested, with the designation of ownership rights; in the other, an analysis of traditional value aspects, such as the aesthetic and technological, alongside aspects relevant to modern architecture, such as social, economic, and sustainability factors. By examining these parameters together, three relevant scenarios for preserving post-war concrete architecture are distinguished: conservation, renovation, and reuse. They are illustrated using examples of concrete architecture in Lithuania.

Keywords: modern heritage, value aspects, typology, conservation, upgrade, reuse

Procedia PDF Downloads 144
387 Designing a Combined Outpatient and Day Treatment Eating Disorder Program for Adolescents and Transitional Aged Youth: A Naturalistic Case Study

Authors: Deanne McArthur, Melinda Wall, Claire Hanlon, Dana Agnolin, Krista Davis, Melanie Dennis, Elizabeth Glidden, Anne Marie Smith, Claudette Thomson

Abstract:

Background and significance: Patients with eating disorders have traditionally been an underserved population within the publicly funded Canadian healthcare system. This situation was worsened by the COVID-19 pandemic and accompanying public health measures, such as lockdowns, which led to increased isolation, changes in routine, and other disruptions. Illness severity and prevalence rose significantly, with corresponding increases in patient suffering and poor outcomes. In Ontario, Canada, the provincial government responded by increasing funding for the treatment of eating disorders, including the launch of a new day program at an intermediate, regional health centre that already housed an outpatient treatment service. The funding was received in March 2022. The care team sought to optimize this opportunity by designing a program that would fit well within the resource-constrained context in Ontario. Methods: This case study will detail how the team consulted the literature and sought patient and family input to design a program that optimizes patient outcomes and supports patients and families while they await treatment. Early steps include a review of the literature, expert consultation, and patient and family focus groups. Interprofessional consensus was sought at each step, with the team adopting a shared leadership and patient-centered approach. Methods will include interviews, observations, and document reviews to produce a rich description of the process undertaken to design the program, including the evaluation measures adopted. Interim findings from the early stages of the program-building process will be detailed, as will early lessons and the ongoing evolution of the program and design process. Program implementation and outcome evaluation will continue throughout 2022 and early 2023, with further publication and presentation of study results expected in the summer of 2023.
The aim of this study is to contribute to the body of knowledge pertaining to the design and implementation of eating disorder treatment services that combine outpatient and day treatment services in a resource-constrained context.

Keywords: eating disorders, day program, interprofessional, outpatient, adolescents, transitional aged youth

Procedia PDF Downloads 109
386 Effectiveness Factor for Non-Catalytic Gas-Solid Pyrolysis Reaction for Biomass Pellet Under Power Law Kinetics

Authors: Haseen Siddiqui, Sanjay M. Mahajani

Abstract:

Various important reactions in chemical and metallurgical industries fall into the category of gas-solid reactions, which can be either catalytic or non-catalytic. In gas-solid reaction systems, heat and mass transfer limitations appreciably influence the rate of reaction, and overlooking these effects while collecting reaction rate data can have serious consequences for reactor design. Pyrolysis, which produces gases through the interaction of heat and a solid substance, falls into this category. Pyrolysis is also an important step in gasification; gasification reactivity is therefore strongly influenced by the pyrolysis process, which produces the char that feeds the gasification step. In the present study, a non-isothermal transient 1-D model is developed for a single biomass pellet to investigate the effect of heat and mass transfer limitations on the rate of the pyrolysis reaction. The resulting set of partial differential equations is first discretized using the method of lines to obtain a set of ordinary differential equations in time, which are then solved using the MATLAB ODE solver ode15s. The model is capable of incorporating structural changes, porosity variation, variation in various thermal properties, and various pellet shapes. The model is used to analyze the effectiveness factor for different values of the Lewis number and of the heat of reaction (G factor). The Lewis number captures the effect of the thermal conductivity of the solid pellet: the higher the Lewis number, the higher the thermal conductivity of the solid. The effectiveness factor was found to decrease with decreasing Lewis number, because smaller Lewis numbers retard the rate of heat transfer inside the pellet and thereby lower the rate of the pyrolysis reaction. The G factor captures the effect of the heat of reaction.
Since the pyrolysis reaction is endothermic in nature, the G factor takes negative values: the more negative the value, the more endothermic the pyrolysis reaction. The effectiveness factor was found to decrease with more negative values of the G factor. This behavior can be attributed to the fact that a more negative G factor results in greater energy consumption by the reaction, owing to a larger temperature gradient inside the pellet. Further, analytical expressions are derived for the gas and solid concentrations and for the effectiveness factor in two limiting cases of the general model: the homogeneous model and the unreacted shrinking core model.
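As a loose illustration of the method-of-lines approach described above, the sketch below discretizes a drastically simplified 1-D slab pellet model (constant properties, first-order Arrhenius kinetics, fixed surface temperature) in space and hands the resulting ODE system to SciPy's stiff BDF solver, the closest analogue of MATLAB's ode15s. All parameter values are arbitrary placeholders, not those of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pyrolysis_mol_rhs(t, y, n, alpha, dx, k0, E_R, dH_cp, T_surf):
    """Method-of-lines RHS for a simplified 1-D pellet:
    dT/dt = alpha * d2T/dx2 - (dH/cp) * rate,  drho/dt = -rate,
    with first-order Arrhenius kinetics in the remaining solid."""
    T, rho = y[:n], y[n:]
    rate = k0 * np.exp(-E_R / T) * rho            # kg/(m^3 s), endothermic sink
    d2T = np.empty(n)
    d2T[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    d2T[0] = 2 * (T[1] - T[0]) / dx**2            # symmetry at pellet centre
    d2T[-1] = (T_surf - 2 * T[-1] + T[-2]) / dx**2  # fixed surface temperature
    return np.concatenate([alpha * d2T - dH_cp * rate, -rate])

n = 20                                            # interior grid points
y0 = np.concatenate([np.full(n, 300.0),           # initial temperature (K)
                     np.full(n, 500.0)])          # initial solid density (kg/m^3)
sol = solve_ivp(pyrolysis_mol_rhs, (0.0, 60.0), y0, method="BDF",
                args=(n, 1e-7, 0.01 / n, 1e2, 8000.0, 5.0, 800.0))
```

The endothermic sink term makes the temperature field lag the purely conductive solution, which is exactly the coupling that drives the effectiveness factor below unity in the full model.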

Keywords: effectiveness factor, G-factor, homogeneous model, Lewis number, non-catalytic, shrinking core model

Procedia PDF Downloads 139
385 Welfare Dynamics and Food Prices' Changes: Evidence from Landholding Groups in Rural Pakistan

Authors: Lubna Naz, Munir Ahmad, G. M. Arif

Abstract:

This study analyzes the static and dynamic welfare impacts of food price changes for various landholding groups in Pakistan, using three classifications of land ownership: landless, small landowners, and large landowners. The study uses all three waves (2001, 2004, and 2010) of the Pakistan Rural Household Survey (PRHS), a panel survey of rural households in the two largest provinces of Pakistan (Sindh and Punjab) conducted by the Pakistan Institute of Development Economics, Islamabad. This research makes three important contributions to the literature. First, it uses the Quadratic Almost Ideal Demand System (QUAIDS) to estimate demand functions for eight food groups: cereals, meat, milk and milk products, vegetables, cooking oil, pulses, and other food. The food demand functions are estimated with Nonlinear Seemingly Unrelated Regression (NLSUR), and a Lagrange Multiplier test on the coefficient of the squared expenditure term is employed to determine whether that term should be included. Test results support the inclusion of the squared expenditure term in the food demand model for each landholding group (landless, small landowners, and large landowners). The study tests for endogeneity and uses a control function approach for its correction; the problem of observed zero expenditure is handled with a two-step procedure. Second, it defines low-price and high-price periods based on a literature review, and uses elasticity coefficients from QUAIDS to analyze the static and dynamic welfare effects of food price changes across periods, using first- and second-order Taylor approximations of the expenditure function. The study estimates the compensating variation (CV), the money-metric loss from food price changes, for landless, small, and large landowners. Third, the study compares its findings on the welfare implications of food price changes based on QUAIDS with earlier research in Pakistan that used other specifications of the demand system.
The findings indicate that the dynamic welfare impacts of food price changes are lower than the static welfare impacts for all landholding groups, and that both are highest for the landless. The study suggests that the government should extend social safety nets to the landless poor, particularly vulnerable landless households without livestock, to redress the short-term impact of food price increases. In addition, the government should stabilize food prices, and particularly cereal prices, in the long run.
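The first- and second-order Taylor approximations of the compensating variation used above can be sketched in a few lines. The budget shares, log price changes, and the compensated elasticity matrix below are invented illustrative numbers, not estimates from the PRHS data.

```python
import numpy as np

def compensating_variation(expenditure, shares, dlnp, elasticities=None):
    """First-order (and, if an elasticity matrix is given, second-order)
    Taylor approximation of the compensating variation from log price
    changes dlnp: CV ~ x0 * [sum_i w_i dlnp_i
                             + 0.5 * sum_ij w_i e_ij dlnp_i dlnp_j]."""
    first = expenditure * shares @ dlnp
    if elasticities is None:
        return first
    second = 0.5 * expenditure * dlnp @ (shares[:, None] * elasticities) @ dlnp
    return first + second

w = np.array([0.4, 0.35, 0.25])        # budget shares (sum to 1)
dlnp = np.array([0.10, 0.05, 0.0])     # log price changes between periods
E = np.array([[-0.6, 0.3, 0.3],        # illustrative compensated elasticities
              [0.2, -0.5, 0.3],
              [0.3, 0.4, -0.7]])
cv1 = compensating_variation(1000.0, w, dlnp)      # first-order CV
cv2 = compensating_variation(1000.0, w, dlnp, E)   # second-order CV
```

The second-order term embodies the substitution response, so the second-order welfare loss is smaller than the first-order one, mirroring the paper's finding that dynamic impacts are lower than static impacts.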

Keywords: QUAIDS, Lagrange multiplier, NLSUR, Taylor approximation

Procedia PDF Downloads 365
384 Folding of β-Structures via the Polarized Structure-Specific Backbone Charge (PSBC) Model

Authors: Yew Mun Yip, Dawei Zhang

Abstract:

Proteins are the biological machinery that executes specific vital functions in every cell of the human body by folding into their 3D structures. When a protein misfolds from its native structure, this machinery malfunctions, leading to misfolding diseases. Although in vitro experiments can establish that mutations of the amino acid sequence lead to incorrectly folded protein structures, they cannot decipher the folding process itself. Molecular dynamics (MD) simulations are therefore employed to simulate the folding process, so that an improved understanding of folding will enable us to contemplate better treatments for misfolding diseases. MD simulations make use of force fields to simulate the folding of peptides. Secondary structures are formed via hydrogen bonds between the backbone atoms (C, O, N, H), so the hydrogen bond energy computed during an MD simulation must be accurate in order to direct the folding process toward the native structure. Since the atoms involved in a hydrogen bond possess very dissimilar electronegativities, the more electronegative atom attracts electron density from the less electronegative atom towards itself. This is known as the polarization effect. Because the polarization effect changes the electron density of the two atoms in close proximity, their atomic charges should also vary with the strength of the effect. However, the fixed atomic charge scheme in force fields does not account for polarization. In this study, we introduce the polarized structure-specific backbone charge (PSBC) model. The PSBC model accounts for the polarization effect in MD simulation by updating the atomic charges of the backbone hydrogen-bond atoms according to equations, derived from quantum-mechanical calculations, that relate the amount of charge transferred to an atom to the length of the hydrogen bond.
Compared to other polarizable models, the PSBC model does not require quantum-mechanical calculations of the simulated peptide at every time-step, yet still updates the atomic charges dynamically, thereby reducing computational cost and time while accounting for the polarization effect. The PSBC model is applied to two different β-peptides: the Beta3s/GS peptide, a de novo designed three-stranded β-sheet whose folding has been studied in vitro by NMR, and the trpzip peptides, double-stranded β-sheets for which a correlation is found between the type of amino acids that constitute the β-turn and the β-propensity.
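The charge-update idea can be sketched abstractly as follows. The linear form, reference length, slope, and cutoff below are hypothetical stand-ins for the quantum-mechanically fitted relations described in the abstract; only the overall shape (shorter hydrogen bond, larger charge transfer, total charge conserved) reflects the text.

```python
def psbc_update(q_donor_h, q_acceptor_o, r_hb, r_ref=2.0, slope=0.1, r_cut=3.5):
    """Illustrative PSBC-style update: transfer charge between the backbone
    H (donor) and O (acceptor) as a linear function of the hydrogen-bond
    length r_hb (angstroms). Shorter bonds transfer more charge; beyond
    r_cut there is no polarization; the net charge is conserved.
    All coefficients are hypothetical, not the paper's fitted values."""
    if r_hb >= r_cut:                       # beyond cutoff: charges unchanged
        return q_donor_h, q_acceptor_o
    dq = slope * max(0.0, r_cut - r_hb) / (r_cut - r_ref)
    return q_donor_h + dq, q_acceptor_o - dq
```

In an actual MD loop this function would be called for every backbone hydrogen bond at each charge-update step, replacing the fixed-charge lookup of a conventional force field.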

Keywords: hydrogen bond, polarization effect, protein folding, PSBC

Procedia PDF Downloads 270
383 Making Beehives More 'Intelligent': The Case of Capturing, Reducing, and Managing Bee Pest Infestation in Hives through Modification of Hive Entrance Holes and the Installation of Multiple In-Hive Bee Pest Traps

Authors: Prince Amartey

Abstract:

Bees are clever creatures, so capturing bees implies that a hive is 'intelligent' in the sense that it provides all the conditions required to attract and trap them. If the hive goes further, retaining the bees and keeping the activity of in-hive pests to a minimum so the colony can develop to its full potential, the hive is becoming, or is, more 'intelligent'. Some bee pests, such as small hive beetles, are endemic to Africa; the traditional practice of extracting honey by cutting out the combs and pressing them has limited the spread of these insect enemies of bees. With commercialization, however, freshly harvested combs are returned to the hives following adoption of the frame and other systems, so strategies are needed to manage the accompanying pest problems that arise with unprotected combs. Techniques for making hives more 'intelligent' are therefore all the more important at present, given that the African apicultural industry does not wish to encourage the use of pesticides in hives. These techniques include modifying the hive's entrance holes to improve the bees' own mechanism for defending the entry points, and capturing pests with exterior and in-hive traps to prevent pest infiltration into hives by any means feasible. Material and Methods: The following five (5) mechanisms are proposed to make hives more 'intelligent': i. the use of modified frames with five (5) beetle traps positioned horizontally on the vertical 'legs' to catch beetles along the comb surfaces (multiple in-hive bee pest traps); ii. baited bioelectric frame traps, in which both vertical sections of the frame are covered with a 3 mm mesh that allows pest entry but not bees; the pests are attracted by strips of honeycomb, open brood, and pollen on metal plates inserted horizontally on the vertical 'legs' of the frames, and an electrical 'mine' system electrocutes the pests as they step on the wires while entering the frame trap; iii. ten rounded hive entry holes, arranged in two rows with one on top of the other, adapted so the bees can police the entrance and prevent the entry of pests. Results, Discussion, and Conclusions: The techniques implemented decrease pest ingress, while the in-hive traps capture those pests that nevertheless gain entry. Furthermore, the stand modification traps larvae and stops their development into adults. As beekeeping commercialization grows throughout Africa, these initiatives will minimize pest infestation in hives and thereby enhance honey output.

Keywords: bee pests, modified frames, multiple beetle trap, baited bioelectric frame traps

Procedia PDF Downloads 79
382 Data Refinement Enhances the Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than of the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing the predicted latencies of each sub-segment is more accurate than one-step prediction of the whole segment, especially when the latency prediction of the downstream sub-segments is trained to anticipate latencies several minutes ahead; the appropriate anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
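Conclusion (1), the robustness of the median to outliers, can be illustrated with a few lines of synthetic data. The numbers below are invented, not taken from the Taiwan Freeway dataset: once a small fraction of outlier vehicles is injected into each 5-minute bin, the "15 minutes ago" baseline tracks the per-bin median far better than the per-bin mean.

```python
import numpy as np

rng = np.random.default_rng(42)
n_bins = 288                                   # one day of 5-minute bins
profile = 600 + 60 * np.sin(np.linspace(0, 2 * np.pi, n_bins))  # smooth daily latency (s)

mean_lat, med_lat = np.empty(n_bins), np.empty(n_bins)
for i, mu in enumerate(profile):
    veh = rng.normal(mu, 15, size=200)         # per-vehicle latencies in this bin
    veh[rng.random(200) < 0.02] += 1200.0      # a few stopped/outlier vehicles
    mean_lat[i] = veh.mean()                   # plagued by the outliers
    med_lat[i] = np.median(veh)                # essentially unaffected

lag = 3                                        # "15 minutes ago" baseline (3 bins)
mse_mean = np.mean((mean_lat[lag:] - mean_lat[:-lag]) ** 2)
mse_med = np.mean((med_lat[lag:] - med_lat[:-lag]) ** 2)
```

Because the outlier count fluctuates from bin to bin, the mean series is noisy and hard to predict even from its own recent past, while the median series stays close to the smooth underlying profile.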

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 170
381 Gendered Experiences of the Urban Space in India as Portrayed by Hindi Cinema: A Quantitative Analysis

Authors: Hugo Ribadeau Dumas

Abstract:

In India, cities represent intense battlefields where patriarchal norms are simultaneously defied and reinforced. While Indian metropolises have witnessed numerous initiatives in which women boldly claimed their right to the city, urban spaces still remain disproportionately unfriendly to female city-dwellers. As a result, the presence of strees (women, in Hindi) in the streets remains a socially and politically potent phenomenon. This paper explores how women in India engage with the city as compared to men. Borrowing analytical tools from urban geography, it uses Hindi cinema as a medium to map the extent to which activities, attitudes, and experiences in urban spaces are highly gendered. The sample consists of 30 movies, both mainstream and independent, released between 2010 and 2020, set in an urban environment, and comprising at least one pivotal female character. The paper adopts a quantitative approach, consisting of the scrutiny of close to 3,000 minutes of footage, the labeling and time count of every scene, and the computation of regressions to identify statistical relationships between characters and the way they navigate the city. According to the analysis, female characters spend half as much time in public space as their male counterparts. When they do step out, women do so mostly for utilitarian reasons; conversely, in private spaces or in pseudo-public commercial places, like malls, they indulge in fun activities. For male characters, the pattern is the exact opposite: fun takes place in public and serious work in private. The characters' attitudes in the streets are also greatly gendered: men spend a significant amount of time immobile, loitering, while women are usually on the move, displaying a sense of purpose.
Likewise, body language and emotional expressiveness betray differentiated gender scripts: women wander the streets either smiling, in a charming role, or with a hostile face, in a defensive mode, while men are more likely to adopt neutral facial expressions. These trends were observed across all movies, although some nuances were identified depending on the character's age group, social background, and city, highlighting that the urban experience is not the same for all women. The empirical evidence presented in this study is useful for reflecting on the meaning of public space in the context of contemporary Indian cities. The paper ends with a discussion of the link between universal access to public spaces and women's empowerment.
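The scene-labeling and time-count step behind such figures reduces to a simple tally. The scene tuples below are invented toy labels, not data from the 30-film sample; the sketch just shows how per-gender public-space shares fall out of the labeled durations.

```python
from collections import defaultdict

# (character gender, space type, activity type, duration in seconds) — toy labels
scenes = [
    ("F", "public", "utilitarian", 120), ("F", "private", "fun", 300),
    ("M", "public", "fun", 240), ("M", "private", "utilitarian", 180),
    ("F", "public", "utilitarian", 60), ("M", "public", "fun", 200),
]

screen_time = defaultdict(float)   # total labeled screen time per gender
public_time = defaultdict(float)   # time spent in public space per gender
for gender, space, activity, dur in scenes:
    screen_time[gender] += dur
    if space == "public":
        public_time[gender] += dur

public_share = {g: public_time[g] / screen_time[g] for g in screen_time}
```

The same tally, extended with the activity label and fed into a regression, is what lets the study attach statistical significance to the gendered patterns it reports.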

Keywords: cinema, Indian cities, public space, women empowerment

Procedia PDF Downloads 158
380 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science that aims to create intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules, a structure of if-else statements that could be traversed to find the solution to a particular problem or question; such rule-based systems, however, typically could not generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has become an immense asset, and its applications are ever more significant. It must therefore be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than neural network methods and deep learning, but this approach can be more easily described by analogy with the evolution of natural organisms, and with increasing computational power, the genetic breeding method, a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (under the world's most significant patent law regimes, namely China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, of the inventive step as such, and of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons, and to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing part of the inventive concept. However, when machine learning or the AI algorithm has contributed part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
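For readers unfamiliar with the algorithm family at issue, a minimal genetic-breeding loop, tournament selection, one-point crossover, and bit-flip mutation, looks like the sketch below. It maximizes a toy bit-counting fitness and has no connection to any specific patented system; it is included only to make concrete what "the algorithm contributing to the inventive concept" would mean mechanically.

```python
import random

def genetic_breeding(fitness, n_bits=16, pop_size=30, generations=60,
                     p_mut=0.02, seed=1):
    """Minimal genetic-breeding loop over fixed-length bit strings:
    tournament selection, one-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                      # size-2 tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_breeding(sum)   # toy fitness: maximize the number of 1-bits
```

Nothing in this loop records *which* crossover or mutation produced the fittest individual, which is precisely why attributing an "inventive contribution" to the algorithm, as the abstract discusses, is legally awkward.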

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 109
379 Storms Dynamics in the Black Sea in the Context of the Climate Changes

Authors: Eugen Rusu

Abstract:

The objective of the proposed work is to analyze the wave conditions in the Black Sea basin, with a particular focus on the spatial and temporal occurrences and dynamics of the most extreme storms in the context of climate change. A numerical modelling system, based on the spectral phase-averaged wave model SWAN, has been implemented and validated against both in situ measurements and remotely sensed data across the entire sea. Moreover, a successive correction method for the assimilation of satellite data, based on the optimal interpolation of those data, has been coupled to the wave modelling system. Previous studies show that data assimilation considerably improves the reliability of the results provided by the modelling system, especially in the cases most sensitive from the point of view of wave prediction accuracy, such as extreme storm situations. The results provided by the wave modelling system described above are generally in line with those of similar wave prediction systems implemented in enclosed or semi-enclosed sea basins. Simulations of this wave modelling system with data assimilation have been performed for the 30-year period 1987-2016. From this database, the next step was to analyze the intensity and dynamics of the strongest storms encountered in the period. According to the model simulations, the western side of the sea is considerably more energetic than the rest of the basin. In this western region, regular strong storms usually produce significant wave heights greater than 8 m, which may lead to maximum wave heights even greater than 15 m. Such strong storms may occur several times in one year, usually in wintertime or in late autumn, and their frequency has become higher in the last decade.
As for the most extreme storms, significant wave heights greater than 10 m and maximum wave heights close to 20 m (and even greater) may occur. Such extreme storms, which in the past were observed only once every four or five years, have more recently been faced almost every year in the Black Sea, and this appears to be a consequence of climate change. The analysis performed also included the dynamics of the monthly and annual significant wave height maxima, as well as the identification of the most probable spatial and temporal occurrences of extreme storm events. Finally, it can be concluded that the present work provides valuable information on the characteristics of storm conditions and their dynamics in the Black Sea. This environment is currently subject to heavy navigation traffic and intense offshore and nearshore activities, and the strong storms that systematically occur may produce accidents with very serious consequences.
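The successive-correction assimilation step mentioned above can be sketched in one dimension. The Cressman-type distance weighting, the influence radius, and all numbers below are generic illustrative choices, not the scheme's actual configuration: the model background wave heights are iteratively nudged toward satellite observations within an influence radius.

```python
import numpy as np

def successive_correction(background, grid_x, obs_x, obs_val,
                          radius=50.0, n_iter=3):
    """Successive-correction sketch: on each pass, compute the innovations
    (observation minus current analysis interpolated to the observation
    points) and spread them onto the grid with Cressman distance weights."""
    analysis = background.copy()
    for _ in range(n_iter):
        innov = obs_val - np.interp(obs_x, grid_x, analysis)
        d2 = (grid_x[:, None] - obs_x[None, :]) ** 2
        w = np.maximum(0.0, (radius**2 - d2) / (radius**2 + d2))  # Cressman weights
        wsum = w.sum(axis=1)
        corr = np.where(wsum > 0,
                        (w * innov).sum(axis=1) / np.maximum(wsum, 1e-12), 0.0)
        analysis += corr
    return analysis

x = np.linspace(0.0, 500.0, 51)            # grid along a satellite track (km)
bg = np.full_like(x, 4.0)                  # background significant wave height (m)
ana = successive_correction(bg, x, np.array([120.0, 300.0]), np.array([6.0, 5.0]))
```

Grid points far from any observation keep the background value, while points near the track are pulled toward the satellite wave heights, which is what sharpens the model during extreme storms when the background underestimates the sea state.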

Keywords: Black Sea, extreme storms, SWAN simulations, waves

Procedia PDF Downloads 250
378 A Review of Gas Hydrate Rock Physics Models

Authors: Hemin Yuan, Yun Wang, Xiangchun Wang

Abstract:

Gas hydrate is drawing attention because its worldwide reserves are enormous, almost twice the conventional hydrocarbon reserves, making it a potential alternative source of energy. It is widely distributed in permafrost and on continental ocean shelves, and many countries have launched national programs for investigating it. Gas hydrate is mainly explored through seismic methods, including bottom simulating reflectors (BSR), amplitude blanking, and polarity reversal. These seismic methods are effective at finding gas hydrate formations but usually carry large uncertainties when applied to invert the micro-scale petrophysical properties of the formations, due to a lack of constraints. Rock physics modeling links the micro-scale structures of the rocks to their macro-scale elastic properties and can provide effective constraints for the seismic methods. A number of rock physics models have been proposed for gas hydrate, addressing different mechanisms and applications. However, these models are generally not well classified, and it can be confusing to determine the appropriate model for a specific study. Moreover, since the modeling usually involves multiple models and steps, it is difficult to determine the source of uncertainties. To solve these problems, we summarize the developed models and methods and classify them in four ways: according to the hydrate micro-scale morphology in the sediments, the purpose of reservoir characterization, the stage of gas hydrate generation, and the lithology of the hosting sediments. Some sub-categories may overlap, but they have different priorities. We also analyze the priorities of the different models, point out their shortcomings, and explain their appropriate application scenarios.
Moreover, by comparing the models, we summarize a general workflow for the modeling procedure: forming the rock matrix, generating the dry rock frame, mixing the pore fluids, and finally substituting the fluid into the rock frame. This procedure has been widely used in various gas hydrate modeling studies and has been confirmed to be effective. We also analyze the potential sources of uncertainty in each modeling step, which makes the uncertainties of the modeling clearly recognizable. In the end, we explicate the general problems of the current models, including the influences of pressure and temperature, pore geometry, hydrate morphology, and the change in rock structure during gas hydrate dissociation and re-generation. We also point out that attenuation is severely affected by gas hydrate in sediments and may serve as an indicator for mapping gas hydrate concentration. Our work classifies the rock physics models of gas hydrate into distinct categories, generalizes the modeling workflow, and analyzes the modeling uncertainties and potential problems, which can facilitate the rock physics characterization of gas hydrate-bearing sediments and provide hints for future studies.
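The final fluid-substitution step of that workflow is commonly carried out with the Gassmann equation. A minimal sketch follows; the moduli and porosity are generic illustrative values (roughly a water-saturated quartz sand), not parameters calibrated to hydrate-bearing sediments.

```python
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Gassmann fluid substitution: saturated-rock bulk modulus from the
    dry-frame modulus k_dry, mineral modulus k_mineral, pore-fluid
    modulus k_fluid, and porosity phi (all moduli in consistent units)."""
    b = 1.0 - k_dry / k_mineral
    return k_dry + b**2 / (phi / k_fluid
                           + (1.0 - phi) / k_mineral
                           - k_dry / k_mineral**2)

# Illustrative values (GPa): quartz matrix, water-filled pore space
k_sat = gassmann_ksat(k_dry=8.0, k_mineral=36.0, k_fluid=2.25, phi=0.35)
```

In a hydrate workflow, the earlier steps (matrix forming, dry-frame generation, fluid mixing) supply k_dry, k_mineral, and the effective k_fluid, and errors in any of them propagate directly into k_sat, which is one of the uncertainty pathways the review analyzes.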

Keywords: gas hydrate, rock physics model, modeling classification, hydrate morphology

Procedia PDF Downloads 159
377 Assessment of Physical Activity Patterns in Patients with Cardiopulmonary Diseases

Authors: Ledi Neçaj

Abstract:

Objectives: The aims of this study are (1) to objectively describe physical activity patterns across three chronic cardiopulmonary conditions, and (2) to study the associations of physical activity dimensions with disease severity, self-reported physical and emotional functioning, and exercise performance. Material and Methods: This is a cross-sectional study of patients in their home environment. The patients had the following cardiopulmonary conditions: chronic obstructive pulmonary disease (COPD) (n=63), heart failure (n=60), and an implantable cardioverter defibrillator (n=60). Main outcome measures: Seven ambulatory physical activity dimensions (total steps; percentage of time active; percentage of time ambulating at low, medium, and hard intensity; maximum cadence for 30 non-stop minutes; and peak performance) were measured with an accelerometer. Results: Subjects with COPD had the lowest amount of ambulatory physical activity compared with subjects with heart failure and cardiac dysrhythmias (all seven dimensions, P<.05); total step counts were 5319 versus 7464 versus 9570, respectively. Six-minute walk distance was correlated (r=.44-.65, P<.01) with all physical activity dimensions in the COPD sample, the strongest correlations being with total steps and peak performance. In subjects with cardiac impairment, maximal oxygen uptake had only small to moderate correlations with five of the physical activity dimensions (r=.22-.40, P<.05). In contrast, correlations between six-minute walk distance and physical activity were higher (r=.48-.61, P<.01), albeit in a smaller sample comprising only patients with heart failure. In all three samples, self-reported physical and mental health functioning, age, body mass index, airflow obstruction, and ejection fraction had either very small or non-significant correlations with physical activity.
Conclusions: The findings from this study provide a useful benchmark of physical activity patterns in individuals with cardiopulmonary disease for comparison with future studies. All seven dimensions of ambulatory physical activity differed between subjects with COPD, heart failure, and cardiac dysrhythmias. Depending on the research or clinical goal, the use of one dimension, such as total steps, may be sufficient. Although physical activity correlated strongly with performance on the six-minute walk test relative to other variables, accelerometer-based physical activity monitoring provides unique and important information about real-world behavior in patients with cardiopulmonary disease that is not already captured by existing measures.
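Several of the seven activity dimensions can be computed directly from a per-minute step-count series. The sketch below is illustrative only: the intensity cut-offs and the top-30-minutes definition of peak performance are assumptions, not the study's actual accelerometer algorithm.

```python
import numpy as np

def activity_dimensions(steps_per_min, low=(1, 15), med=(16, 40), hard=41):
    """Compute several ambulatory-activity dimensions from a per-minute
    step-count series. Intensity thresholds (steps/min) and the
    peak-performance definition are illustrative assumptions."""
    s = np.asarray(steps_per_min, dtype=float)
    return {
        "total_steps": int(s.sum()),
        "pct_time_active": 100.0 * (s > 0).mean(),
        "pct_low": 100.0 * ((s >= low[0]) & (s <= low[1])).mean(),
        "pct_medium": 100.0 * ((s >= med[0]) & (s <= med[1])).mean(),
        "pct_hard": 100.0 * (s >= hard).mean(),
        # best average cadence sustained over any 30 consecutive minutes
        "max_cadence_30": float(np.convolve(s, np.ones(30) / 30, "valid").max()),
        # mean of the 30 highest-step minutes of the recording
        "peak_performance": float(np.sort(s)[-30:].mean()),
    }

day = np.concatenate([np.zeros(30), np.full(30, 100.0)])  # toy 1-hour trace
dims = activity_dimensions(day)
```

A single scalar such as total_steps, as the conclusion notes, may be all that a clinical application needs, but the richer dimensions fall out of the same series at no extra measurement cost.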

Keywords: ambulatory physical activity, walking, monitoring, COPD, heart failure, implantable defibrillator, exercise performance

Procedia PDF Downloads 88
376 Combined Effect of Vesicular System and Iontophoresis on Skin Permeation Enhancement of an Analgesic Drug

Authors: Jigar N. Shah, Hiral J. Shah, Praful D. Bharadia

Abstract:

The major challenge faced by formulation scientists in transdermal drug delivery is to overcome the inherent barriers to skin permeation. The stratum corneum layer of the skin acts as the rate-limiting step in transdermal transport and reduces drug permeation through the skin. Many approaches have been used to enhance the penetration of drugs through this layer of the skin. The purpose of this study is to investigate the development and evaluation of a combined approach of drug carriers and iontophoresis as a vehicle to improve skin permeation of an analgesic drug. Iontophoresis is a non-invasive technique for transporting charged molecules into and through tissues by a mild electric field. It has been shown to effectively deliver a variety of drugs across the skin to the underlying tissue. In addition to enhanced continuous transport, iontophoresis allows dose titration by adjusting the electric field, which makes personalized dosing feasible. Drug carriers can modify the physicochemical properties of the encapsulated molecule and offer a means to facilitate the percutaneous delivery of difficult-to-uptake substances. Recently, there have been reports of using liposomes, microemulsions and polymeric nanoparticles as vehicles for iontophoretic drug delivery. Niosomes, the nonionic surfactant-based vesicles that are essentially similar in properties to liposomes, have been proposed as an alternative to liposomes. Niosomes are more stable and free from other shortcomings of liposomes. Recently, the transdermal delivery of certain drugs using niosomes has been envisaged, and niosomes have proved to be superior transdermal nanocarriers. Proniosomes overcome some of the physical stability-related problems of niosomes. The proniosomal structure is a liquid crystalline compact niosome hybrid that converts into niosomes upon hydration. The combined use of drug carriers and iontophoresis could offer many additional benefits. 
The system was evaluated for encapsulation efficiency, vesicle size, zeta potential, transmission electron microscopy (TEM), DSC, in-vitro release, ex-vivo permeation across skin, and rate of hydration. The use of proniosomal gel as a vehicle for transdermal iontophoretic delivery was evaluated in-vitro. The characteristics of the applied electric current, such as density, type, frequency, and on/off interval ratio, were also studied. The study confirms the synergistic effect of proniosomes and iontophoresis in improving the transdermal permeation profile of the selected analgesic drug. It is concluded that proniosomal gel can be used as a vehicle for transdermal iontophoretic drug delivery under suitable electric conditions.

Keywords: iontophoresis, niosomes, permeation enhancement, transdermal delivery

Procedia PDF Downloads 381
375 Well-Defined Polypeptides: Synthesis and Selective Attachment of Poly(ethylene glycol) Functionalities

Authors: Cristina Lavilla, Andreas Heise

Abstract:

The synthesis of sequence-controlled polymers has received increasing attention in recent years. Well-defined polyacrylates, polyacrylamides and styrene-maleimide copolymers have been synthesized by sequential or kinetic addition of comonomers. However, this approach has not yet been applied to the synthesis of polypeptides, which are in fact polymers developed by nature in a sequence-controlled way. Polypeptides are natural materials that possess the ability to self-assemble into complex and highly ordered structures. Their folding and properties arise from precisely controlled sequences and compositions of their constituent amino acid monomers. So far, solid-phase peptide synthesis is the only technique that allows the preparation of short peptide sequences with excellent sequence control, but it requires extensive protection/deprotection steps and is difficult to scale up. A new strategy towards sequence control in the synthesis of polypeptides is introduced, based on the sequential addition of α-amino acid N-carboxyanhydrides (NCAs). The living ring-opening process is conducted to full conversion, and no purification or deprotection is needed before addition of a new amino acid. The length of every block is predefined by the NCA:initiator ratio in every step. This method yields polypeptides with a specific sequence and controlled molecular weights. A series of polypeptides with varying block sequences has been synthesized with the aim of identifying structure-property relationships. All of them are able to adopt secondary structures similar to natural polypeptides, and display properties in the solid state and in solution that are characteristic of the primary structure. By design, the prepared polypeptides allow selective modification of individual block sequences, which has been exploited to introduce functionalities at defined positions along the polypeptide chain. 
Poly(ethylene glycol) (PEG) was the functionality chosen, as it is known to favor hydrophilicity and also to yield thermoresponsive materials. After PEGylation, the hydrophilicity of the polypeptides is enhanced, and their thermal response in H2O has been studied. Noteworthy differences in the behavior of polypeptides having different sequences have been found. Circular dichroism measurements confirmed that the α-helical conformation is stable over the examined temperature range (5-90 °C). It is concluded that the PEG units are mainly responsible for the changes in H-bonding interactions with H2O upon variation of temperature, and the position of these functional units along the backbone is a factor of utmost importance in the resulting properties of the α-helical polypeptides.

Keywords: α-amino acid N-carboxyanhydrides, multiblock copolymers, poly(ethylene glycol), polypeptides, ring-opening polymerization, sequence control

Procedia PDF Downloads 200
374 Development and Testing of Health Literacy Scales for Chinese Primary and Secondary School Students

Authors: Jiayue Guo, Lili You

Abstract:

Background: Child and adolescent health is crucial for both personal well-being and the nation's future health landscape. Health literacy (HL) is important in enabling adolescents to self-manage their health, a fundamental step towards health empowerment. However, there are limited tools for assessing HL among elementary and junior high school students. This study aims to construct and validate a test-based HL scale for Chinese students, offering a scientific reference for cross-cultural HL tool development. Methods: We conducted a cross-sectional online survey. Participants, a total of 4,189 Chinese in-school primary and secondary students, were recruited using stratified cluster random sampling. The development of the scale involved defining the concept of HL, establishing the item indicator system, screening items (7 health content dimensions), and evaluating reliability and validity. Delphi-method expert consultation was used to screen items, the Rasch model was used for quality analysis, and Cronbach's alpha coefficient was used to examine internal consistency. Results: We developed four versions of the HL scale, each with a total score of 100, encompassing seven key health areas: hygiene, nutrition, physical activity, mental health, disease prevention, safety awareness, and digital health literacy. Each version measures four dimensions of health competencies: knowledge, skills, motivation, and behavior. After the second round of expert consultation, the average importance score of each item was 4.5–5.0, with a coefficient of variation of 0.000–0.174. The knowledge and skills dimensions use judgment-based and multiple-choice questions, with the Rasch model confirming unidimensionality at 5.7% residual variance. 
The behavioral and motivational dimensions, measured with scale-type items, demonstrated internal consistency via Cronbach's alpha and strong inter-item correlation, with KMO values of 0.924 and 0.787, respectively. Bartlett's test of sphericity, with p-values <0.001, further supports the suitability of the data for factor analysis. Conclusions: The new test-based scale, designed to evaluate competencies within a multifaceted framework, aligns with current international adolescent literacy theories and China's health education policies, focusing not only on knowledge acquisition but also on the application of health-related thinking and behaviors. The scale can be used as a comprehensive tool for HL evaluation and a reference for other countries.
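Cronbach's alpha, used above to examine the internal consistency of the scale-type items, is computed from the per-item variances and the variance of respondents' total scores. A minimal sketch with made-up responses (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows of equal-length item scores."""
    k = len(items[0])  # number of items

    def var(xs):  # sample variance (ddof=1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert-type responses (rows: students, columns: items)
scores = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency; the toy matrix above, where items rise and fall together across respondents, yields an alpha above 0.9.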

Keywords: adolescent health, Chinese, health literacy, Rasch model, scale development

Procedia PDF Downloads 30
373 Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Authors: Ali Zaker, Zhi Chen

Abstract:

Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, research into renewable energy production will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS is a potential candidate in contrast to other types of biomass due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups, and the presence of a high amount of metal species that can improve catalytic activity. Hence, a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite in this study. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to inform the design of the pyrolysis setup. The results indicated that pyrolysis and catalytic pyrolysis comprise four distinct stages, with the main decomposition reaction occurring in the range of 200-600°C. The Coats-Redfern method was applied in the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass loss data. The average activation energy (Em) values for reaction orders n = 1, 2, and 3 were in the range of 6.67-20.37 kJ for SS, 1.51-6.87 kJ for HZSM5, and 2.29-9.17 kJ for AC, respectively. 
According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by reducing the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1, and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with AC and HZSM5 were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from non-catalytic pyrolysis of SS contained 49.27% oxygenated compounds, which dropped to 13.02% and 7.3% in the presence of AC and HZSM5, respectively. Meanwhile, the generation of benzene, toluene, and xylene (BTX) compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR, and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
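The Coats-Redfern method mentioned above estimates E by regressing ln[g(α)/T²] against 1/T, where for a first-order model g(α) = -ln(1-α) and the regression slope equals -E/R. A sketch on synthetic mass-loss data follows; the pre-exponential factor, heating rate, and temperature range below are illustrative assumptions, not the paper's measured values:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_E(T, alpha):
    """Estimate activation energy (first-order model, n = 1) from TGA data via
    the Coats-Redfern linearization ln[-ln(1-a)/T^2] = const - E/(R*T):
    an ordinary least-squares slope of y = ln(g/T^2) on x = 1/T."""
    xs = [1.0 / t for t in T]
    ys = [math.log(-math.log(1.0 - a) / t ** 2) for t, a in zip(T, alpha)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope * R  # E in J/mol

# Synthetic conversion data consistent with E = 15 kJ/mol (illustrative only)
E_true, A, beta = 15_000.0, 1e-4, 10.0 / 60.0   # J/mol, 1/s, K/s
T = [500.0 + 10.0 * i for i in range(31)]        # devolatilization range, K
g = [(A * R * t ** 2) / (beta * E_true) * math.exp(-E_true / (R * t)) for t in T]
alpha = [1.0 - math.exp(-gi) for gi in g]

E_est = coats_redfern_E(T, alpha)  # recovers ~15,000 J/mol by construction
```

Because the synthetic conversions satisfy the linearized equation exactly, the fit recovers E_true; on real TGA data the scatter of the regression residuals indicates how well the assumed reaction order fits.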

Keywords: catalytic pyrolysis, sewage sludge, activated char, HZSM5, bio-oil

Procedia PDF Downloads 179
372 Traditional Lifestyles of the 'Mbuti' Indigenous Communities and the Relationship with the Preservation of Natural Resources in the Landscape of the Okapi Wildlife Reserve in a Context of Socio-cultural Upheaval, Democratic Republic of Congo

Authors: Chales Mumbere Musavandalo, Lucie B. Mugherwa, Gloire Kayitoghera Mulondi, Naanson Bweya, Muyisa Musongora, Francis Lelo Nzuzi

Abstract:

The landscape of the Okapi Wildlife Reserve in the Democratic Republic of Congo harbors a large community of Mbuti indigenous peoples, often described as the guardians of nature. Living in and off the forest has long been a sustainable strategy for preserving natural resources. This strategy, seen as a form of eco-responsible citizenship, draws upon ethnobotanical knowledge passed down through generations. However, these indigenous communities are facing socio-cultural upheaval, which impacts their traditional way of life. This study aims to assess the relationship between the Mbuti indigenous people's way of life and the preservation of the Okapi Wildlife Reserve. The study was conducted under the assumption that, despite socio-cultural upheavals, the forest and its resources remain central to the Mbuti way of life. The study was conducted in six encampments, three of which were located inside the forest and two in the anthropized zone. The methodological approach initially involved group interviews in the six Mbuti encampments. The objective of these interviews was to determine how these people perceive the various services provided by the forest and the resources obtained from this habitat. A pebble-counting technique was adopted to make the exercise of weighting services and resources accessible to these communities. Subsequently, the study carried out ethnobotanical surveys to identify the wood resources frequently used by these communities. These surveys were complemented by transect inventories, each 1000 m long and 25 m wide, to better assess the abundance of these resources around the camps. Two transects were installed in each camp for this inventory. Traditionally, the Mbuti communities sustain their livelihood through hunting, fishing, gathering for self-consumption, and basketry. The Manniophyton fulvum-based net remains the main hunting tool. 
The primary forest and the swamp are the two habitats from which these peoples derive the majority of their resources. However, with the arrival of the Bantu people, who introduced agriculture based on cocoa production, the Mbuti communities started providing services to the Bantu in the form of labor and field guarding. This cultural symbiosis between Mbuti and Bantu has also led to non-traditional practices, such as the use of hunting rifles instead of nets and fishing nets instead of creels. The socio-economic and ecological environment in which Mbuti communities live is changing rapidly, including the resources they depend on. When the time factor is incorporated into their perception of ecosystem services, only their future (p-value = 0.121), the provision of wood for energy (p-value = 0.1976), and construction (p-value = 0.2548) would remain closely associated with the forest. For other services, such as food supply, medicine, and hunting, adaptation to Bantu customs is conceivable. Additionally, the abundance of wood used by the Mbuti people was high around encampments located in intact forests and low in those in anthropized areas. The traditional way of life of the Mbuti communities is influenced by the cultural symbiosis, reflected in their habits and the availability of resources. The land tenure security of Mbuti areas is crucial to preserving their tradition and forest biodiversity. Conservation efforts in the Okapi Wildlife Reserve must consider this cultural dynamism and promote positive values for the flagship species. The oversight of subsistence hunting is imperative to curtail the transition of these communities to poaching.

Keywords: traditional life, conservation, Indigenous people, cultural symbiosis, forest

Procedia PDF Downloads 60
371 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and they are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived data types with the features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
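The partitioning idea described above (agents grouped into mutually exclusive subsets along the employer-employee graph, with load balanced across processes) can be sketched as a greedy bin-packing heuristic: keep each firm and its employees on one process, and place the largest groups first on the least-loaded process. This is an illustrative reconstruction of the idea, not the authors' actual implementation:

```python
import heapq
from collections import defaultdict

def partition_by_employer(employee_to_employer, n_procs):
    """Assign each employer group (a firm plus its employees) to one process,
    largest groups first onto the currently least-loaded process, so that
    employer-employee interactions never cross process boundaries."""
    groups = defaultdict(list)
    for worker, firm in employee_to_employer.items():
        groups[firm].append(worker)

    heap = [(0, p) for p in range(n_procs)]  # (current load, process id)
    heapq.heapify(heap)
    assignment = {}
    for firm, members in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        load, p = heapq.heappop(heap)
        for agent in [firm] + members:       # firm and all its workers together
            assignment[agent] = p
        heapq.heappush(heap, (load + 1 + len(members), p))
    return assignment

# Hypothetical toy economy: 8 workers across 3 firms, 2 processes
workers = {f"w{i}": ["firmA", "firmA", "firmA", "firmA",
                     "firmB", "firmB", "firmC", "firmC"][i] for i in range(8)}
part = partition_by_employer(workers, n_procs=2)
# Every worker lands on the same process as its employer.
assert all(part[w] == part[f] for w, f in workers.items())
```

In a real MPI run the per-group cost would be a measured workload rather than a head count, and the remaining dense graphs (e.g., consumption markets) would be served through the proxy entities the abstract mentions, such as local sales outlets per process.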

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 130
370 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the 'prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.' Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, in India especially, security of nuclear materials remains a priority. Therefore, India has made continued efforts to tighten security of its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is distinct from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal. These include the indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts to advance non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. 
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for improvement to further strengthen nuclear materials and nuclear security. According to the NTI Report, 'India's improvement reflects its first contribution to the IAEA Nuclear Security Fund ... In the future, India's nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India's nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.' This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 353
369 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, the accident may cause loss or damage whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as 'a false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. 
A specificity of the considered operational research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting the next location. We provide a fast approximation algorithm for finding the AMR route adopting a greedy search strategy in which, at each step, the on-board computer computes a current search effectiveness value for each location in the zone and then searches the location with the highest effectiveness value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
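A history-dependent greedy search of this kind, with a Bayesian belief update after each unsuccessful look to account for false-negative errors, can be sketched as follows. The effectiveness score used here, prior probability times detection probability (p_i * q_i), is the textbook detection-payoff choice, not necessarily the authors' exact effectiveness function:

```python
import random

def greedy_search(priors, p_detect, budget, seed=0):
    """Greedy discrete search under false-negative errors: repeatedly look at
    the cell with the highest detection payoff p_i * q_i and, after each
    unsuccessful look, update the location beliefs by Bayes' rule."""
    rng = random.Random(seed)
    p = list(priors)      # current belief that the target is in cell i
    q = list(p_detect)    # detection probability given the target is there
    target = rng.choices(range(len(p)), weights=priors)[0]
    for step in range(budget):
        i = max(range(len(p)), key=lambda j: p[j] * q[j])
        if i == target and rng.random() < q[i]:
            return i, step + 1               # target found on this look
        # Miss at cell i: p_i becomes p_i*(1-q_i)/(1-p_i*q_i); others renormalize
        miss = 1.0 - p[i] * q[i]
        p = [(pj * (1.0 - q[i]) if j == i else pj) / miss
             for j, pj in enumerate(p)]
    return None, budget                      # search budget exhausted
```

With a certain prior and a perfect sensor the target is found on the first look; with a sensor that never detects, the belief mass drifts to other cells and the budget is exhausted. The update makes each score depend on the whole history of misses, matching the history dependence described above.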

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 173