Search results for: automation control technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17900

1010 Innovation and Entrepreneurship in the South of China

Authors: Federica Marangio

Abstract:

This study looks at the knowledge triangle of research, education and innovation as the growth engine of an inclusive and sustainable society, where research is the strategic process that allows the acquisition of knowledge, innovation appraises the knowledge acquired, and education is the factor that enables human capital to create entrepreneurial capital. Where do Italy and China stand in the global geography of innovation? Europe is calling for smart, inclusive and sustainable growth through a specialization process that addresses social and economic challenges and understands the characteristics of specific geographic areas. It is worth asking why it is not as simple as it looks to come up with entrepreneurial ideas in all geographic areas. Given that technology combined with human capital should be the means through which it is possible to innovate and to strengthen the culture of innovation, young educated people can be seen as society's agents of change, and the importance of investigating the skills and competencies that lead to innovation becomes clear. By starting innovation-based activities, other countries at the international level are now able to be part of a healthy innovative ecosystem, the result of a strong growth policy that enables innovation. Analyzing the geography of innovation on a global scale shows that innovative entrepreneurship is the process that portrays the competitiveness of regions in the knowledge-based economy, a strategic process able to match intellectual capital with market opportunities. The level of innovative entrepreneurship is not only the result of the endogenous growth ability of enterprises, but also of significant relations with other enterprises, universities, other centers of education and institutions. To obtain more innovative entrepreneurship, it is necessary to stimulate more synergy between all these territorial actors in order to create, access and value existing and new knowledge ready to be disseminated. This study focuses on individuals' lived experience: the researcher believed that she could not understand human actions without understanding the meaning people attribute to their thoughts, feelings and beliefs, and so she needed to capture these deeper perspectives through face-to-face interaction. A case study approach will contribute to the betterment of knowledge in this field. The case study will present a picture of the innovative ecosystem and of the entrepreneurial mindset as a key ingredient of endogenous growth and a must for sustainable local and regional development and social cohesion. It will be carried out by analyzing two Chinese companies. A structured set of questions will be asked in order to gain details on what generated success or failure in different situations, both in the past and at the moment of the research. Everything will be recorded so as not to lose important information during the transcription phase. While this work is not geared toward testing a priori hypotheses, it is nevertheless useful to examine whether the projects undertaken by the companies were stimulated by enabling factors that, as a result, enhanced or hampered the local innovation culture.

Keywords: entrepreneurship, education, geography of innovation

Procedia PDF Downloads 419
1009 A Literature Review Evaluating the Use of Online Problem-Based Learning and Case-Based Learning Within Dental Education

Authors: Thomas Turner

Abstract:

Due to the Covid-19 pandemic, alternative ways of delivering dental education were required. As a result, many institutions moved teaching online. The impact of this is poorly understood: are online problem-based learning (PBL) and case-based learning (CBL) effective, and are they suitable in the post-pandemic era? PBL and CBL are both types of interactive, group-based learning which are growing in popularity within many dental schools. PBL was first introduced in the 1960s and can be defined as learning which occurs from collaborative work to resolve a problem, whereas CBL encourages learning from clinical cases, encourages the application of knowledge and helps prepare learners for clinical practice. The aim of this review was to evaluate the use of online PBL and CBL. A literature search was conducted using the CINAHL, Embase, PubMed and Web of Science databases. Literature was also identified from reference lists. Only studies from dental education were included. Seven suitable studies were identified. One of the studies found a high learner and facilitator satisfaction rate with online CBL. Interestingly, one study found learners preferred CBL over PBL within an online format. A study also found that, within the context of distance learning, learners preferred a hybrid curriculum including PBL over a traditional approach. A further study pointed to the limitations of PBL within an online format, such as reduced interaction, potentially hindering the development of communication skills, and the increased time and technology support required. An audience response system was also developed for use within CBL and had a high satisfaction rate. Interestingly, one study found that achievement of learning outcomes was correlated with the number of student and staff inputs within an online format, whereas another study found that the quantity of learner interactions was important to group performance but the quantity of facilitator interactions was not. This review identified generally favourable evidence for the benefits of online PBL and CBL. However, there is limited high-quality evidence evaluating these teaching methods within dental education, and there appears to be limited evidence comparing online and face-to-face versions of these sessions. The importance of the quantity of learner interactions is evident; however, the importance of the quantity of facilitator interactions appears to be questionable. An element of this may be down to the quality of interactions, rather than just quantity. Limitations of online learning regarding technological issues and the time required for a session are also highlighted; however, as learners and facilitators become familiar with online formats, these may become less of an issue. It is also important that learners are encouraged to interact and communicate during these sessions, to allow for the development of communication skills. Interestingly, CBL appeared to be preferred to PBL in an online format. This may reflect the simpler nature of CBL; however, further research is required to explore this finding. Online CBL and PBL appear promising; however, further research is required before online formats of these sessions are widely adopted in the post-pandemic era.

Keywords: case-based learning, online, problem-based learning, remote, virtual

Procedia PDF Downloads 79
1008 The Effect of Different Strength Training Methods on Muscle Strength, Body Composition and Factors Affecting Endurance Performance

Authors: Shaher A. I. Shalfawi, Fredrik Hviding, Bjornar Kjellstadli

Abstract:

The main purpose of this study was to measure the effect of two different strength training methods on muscle strength, muscle mass, fat mass and endurance factors. Fourteen physical education students agreed to participate in this study. The participants were randomly divided into three groups: a traditional training group (TTG), a cluster training group (CTG) and a control group (CG). The TTG consisted of 4 participants aged (mean ± SD) 22.3 ± 1.5 years, with body mass 79.2 ± 15.4 kg and height 178.3 ± 11.9 cm. The CTG consisted of 5 participants aged 22.2 ± 3.5 years, with body mass 81.0 ± 24.0 kg and height 180.2 ± 12.3 cm. The CG consisted of 5 participants aged 22 ± 2.8 years, with body mass 77 ± 19 kg and height 174 ± 6.7 cm. The participants underwent a hypertrophy strength training program twice a week for 8 weeks, consisting of 4 sets of 10 reps at 70% of one-repetition maximum (1RM) using the barbell squat and barbell bench press. The CTG performed 2 x 5 reps with 10 s recovery between repetitions and 50 s recovery between sets, while the TTG performed 4 sets of 10 reps with 90 s recovery between sets. Pre- and post-tests were administered to assess body composition (weight, muscle mass, and fat mass), 1RM (bench press and barbell squat) and a laboratory endurance test (Bruce protocol). Instruments used to collect the data were a Tanita BC-601 scale (Tanita, Illinois, USA), a Woodway treadmill (Woodway, Wisconsin, USA) and a Vyntus CPX breath-by-breath system (Jaeger, Hoechberg, Germany). Analysis was conducted on all measured variables, including time to peak VO2, peak VO2, heart rate (HR) at peak VO2, respiratory exchange ratio (RER) at peak VO2, and number of breaths per minute. The results indicate an increase in 1RM performance after 8 weeks of training. The change in 1RM squat was 30 ± 3.8 kg for the TTG, 28.6 ± 8.3 kg for the CTG and 10.3 ± 13.8 kg for the CG. Similarly, the change in 1RM bench press was 9.8 ± 2.8 kg for the TTG, 7.4 ± 3.4 kg for the CTG and 4.4 ± 3.4 kg for the CG. The within-group analysis of the oxygen consumption measured during the incremental exercise indicated that the TTG had a statistically significant increase only in RER, from 1.16 ± 0.04 to 1.23 ± 0.05 (P < 0.05). The CTG had a statistically significant improvement in HR at peak VO2, from 186 ± 24 to 191 ± 12 beats per minute (P < 0.05), and in RER at peak VO2, from 1.11 ± 0.06 to 1.18 ± 0.05 (P < 0.05). Finally, the CG had a statistically significant increase only in RER at peak VO2, from 1.11 ± 0.07 to 1.21 ± 0.05 (P < 0.05). The between-group analysis showed no statistically significant differences between the groups in any of the variables measured during the incremental exercise test or in the changes in muscle mass, fat mass, and weight (kg). The results indicate a similar effect of hypertrophy strength training on untrained subjects irrespective of the training method used. Because there were no notable changes in body-composition measures, the results suggest that the improvements in performance observed in all groups are most probably due to neuromuscular adaptation to training.
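
As an illustration of the within-group pre/post comparison reported above, the sketch below runs a paired t-test on hypothetical RER values; the abstract does not state which statistical test was used, so the test choice, data and threshold here are assumptions.

```python
from scipy.stats import ttest_rel

# Hypothetical pre- and post-training RER values for one group (n = 5)
rer_pre  = [1.10, 1.13, 1.16, 1.18, 1.21]
rer_post = [1.18, 1.20, 1.22, 1.25, 1.29]

t_stat, p_value = ttest_rel(rer_pre, rer_post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 would mirror the reported within-group effect
```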

Keywords: hypertrophy strength training, cluster set, Bruce protocol, peak VO2

Procedia PDF Downloads 250
1007 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and Lower Senonian clastic sediments. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behavior and biostratigraphy, together with the outcrops, are used to identify the reservoir characteristics, lithology and facies environment and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible. All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide information for effectively evaluating the need to develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the depositional setting of the Matulla Formation; 3) 3D reservoir modeling technology as a tool for an adequate understanding of the spatial distribution of properties and for evaluating the unlocked new reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the undrained oil; and 4) adding new room of production and additional reserves to the Ras Budran field.
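
For illustration of how a water saturation model of the kind mentioned above is commonly built from well-log data, the sketch below applies Archie's equation; the abstract does not state which saturation relation or parameter values were used, so the equation choice and all input values are hypothetical.

```python
def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
    """Water saturation from Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).

    rw: formation-water resistivity (ohm.m), rt: true formation resistivity (ohm.m),
    phi: porosity (fraction); a, m, n are the (hypothetical) Archie constants.
    """
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Illustrative values only, not taken from the Matulla Formation study
print(f"Sw = {archie_sw(rw=0.05, rt=20.0, phi=0.18):.2f}")
```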

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 106
1006 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube

Authors: Alak Kumar Patra, Nilanjan Mitra

Abstract:

A higher strength-to-weight ratio is the main advantage of sandwich composite structures. Interfacial delamination between the face sheet and the core is a major problem in these structures. Many research works are devoted to improving the interfacial fracture toughness of composites, the majority of which concern nano- and laminated composites. Work on the influence of a multiwalled carbon nanotube (MWCNT) dispersed resin system on the interface fracture of glass-epoxy PVC core sandwich composites is extremely limited. A finite element study is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC core sandwich composites with and without MWCNT. Results demonstrate an improvement in interface fracture toughness values (Gc) of samples with certain percentages of MWCNT. In addition, the dispersion of MWCNT in epoxy resin through sonication followed by mixing of the hardener and vacuum resin infusion (VRI) technology used in this study is an easy and cost-effective methodology in comparison to other previously adopted methods limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for the maximum gain in interfacial fracture toughness. The results agree with the finite element study, high-resolution transmission electron microscope (HRTEM) analysis and fracture micrographs from the field emission scanning electron microscope (FESEM) investigation. The interface fracture toughness (Gc) of the DCB sandwich samples is calculated using the compliance calibration (CC) method, considering the modification due to shear. Compliance (C) vs. crack length (a) data of the modified sandwich DCB specimens are fitted to a power function of crack length. The calculated mean value of the exponent n from the plots of the experimental results is 2.22, which differs from the value (n = 3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis of the modified compliance calibration method). Differentiating C with respect to crack length (a) and substituting it into the expression for Gc provides its value. The research demonstrates an improvement of 14.4% in peak load-carrying capacity and 34.34% in interface fracture toughness Gc for samples with 1.5 wt% MWCNT (weight % being taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper focuses on this significant improvement in the experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without MWCNT, obtained using the much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM at 1.5 wt% MWCNT addition in comparison to other percentages of MWCNT. FESEM studies have also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility is also observed to be higher for samples with MWCNT in comparison to samples without.
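
For reference, a worked form of the compliance calibration step described above, assuming the standard Irwin-Kies relation for a DCB specimen (the abstract fits C to a power function of a but does not reproduce the final expression, so this form and its symbols are an assumption):

```latex
C(a) = k\,a^{n} \;\Rightarrow\; \frac{dC}{da} = n\,k\,a^{n-1} = \frac{n\,C}{a},
\qquad
G_C = \frac{P_c^{2}}{2b}\,\frac{dC}{da} = \frac{n\,P_c^{2}\,C}{2\,b\,a}
```

Here P_c is the critical load, b the specimen width, a the crack length, and n is taken from the fitted experimental compliance curves (n ≈ 2.22 in this work) rather than the n = 3 assumed for laminates in ASTM D5528-01.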

Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite

Procedia PDF Downloads 304
1005 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays

Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner

Abstract:

Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that can provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact with and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform, 2) automatic functions that perceive the child's actions through novel activity recognition algorithms and decide appropriate actions for the robot, and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by bringing in a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, the current delay concerning the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g., climbing an inclined platform and/or a staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms that automate the perception of the activities on which the robot bases its actions. A Markov decision process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model's parameters, which is a critical step when dealing with small data sets such as in this paradigm. The child engaged in all activities and socially interacted with the robot across sessions. With time, the child's mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g., taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction: smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
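
As a minimal sketch of the kind of 'smoothing' step mentioned above, the code below applies additive (Laplace) smoothing when estimating MDP transition probabilities from sparse counts; the abstract does not specify which smoothing technique was used, and the states, actions, counts and smoothing constant here are hypothetical.

```python
import numpy as np

def smoothed_transition_matrix(counts, alpha=1.0):
    """Estimate P(s' | s, a) from raw transition counts with additive smoothing.

    counts: array of shape (n_states, n_actions, n_states) holding observed
            transition counts; alpha is the (hypothetical) smoothing constant.
    """
    counts = np.asarray(counts, dtype=float)
    smoothed = counts + alpha                        # add pseudo-counts everywhere
    return smoothed / smoothed.sum(axis=2, keepdims=True)

# Hypothetical example: 3 child "intention" states, 2 robot actions,
# and only a few observed transitions per (state, action) pair.
counts = np.zeros((3, 2, 3))
counts[0, 0] = [2, 1, 0]   # transitions observed from state 0 under action 0
counts[0, 1] = [0, 0, 1]
P = smoothed_transition_matrix(counts, alpha=1.0)
print(P[0, 0])  # no zero probabilities despite the sparse data
```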

Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation

Procedia PDF Downloads 294
1004 A Paradigm Shift in the Cost of Illness of Type 2 Diabetes Mellitus over a Decade in South India: A Prevalence Based Study

Authors: Usha S. Adiga, Sachidanada Adiga

Abstract:

Introduction: Diabetes mellitus (DM) is one of the most common non-communicable diseases and imposes a large economic burden on the global health-care system. Cost of illness studies in India have assessed the health care cost of DM, but have certain limitations due to a lack of standardization of the methods used, improper documentation of data, lack of follow-up, etc. The objective of the study was to estimate the cost of illness of uncomplicated versus complicated type 2 diabetes mellitus in Coastal Karnataka, India. The study also aimed to find out the trend in the cost of illness of the disease over a decade. Methodology: A prevalence-based, bottom-up study was carried out in two tertiary care hospitals located in Coastal Karnataka after ethical approval. Direct medical costs, namely annual laboratory costs, pharmacy costs, consultation charges, hospital bed charges and surgical/intervention costs, of 238 and 340 diabetic patients respectively from the two hospitals were obtained from the medical record sections. Patients were divided into six groups: uncomplicated diabetes, diabetic retinopathy (DR), nephropathy (DN), neuropathy (DNeu), diabetic foot (DF), and ischemic heart disease (IHD). The costs incurred in 2008 and 2017 in these groups were compared to study the trend in the cost of illness. The Kruskal-Wallis test followed by Dunn's test was used to compare median costs between the groups, and Spearman's correlation test was used for correlation studies. Results: Uncomplicated patients had significantly lower costs (p < 0.0001) compared to the other groups. Patients with IHD had the highest medical expenses (p < 0.0001), followed by DN and DF (p < 0.0001). Annual medical costs were 1.8, 2.76, 2.77, 1.76, and 4.34 times higher in retinopathy, nephropathy, diabetic foot, neuropathy and IHD patients, respectively, compared to the cost incurred in managing uncomplicated diabetics. Other costs showed a similar rising pattern. A positive correlation was observed between the costs incurred and the duration of diabetes, and a negative correlation between glycemic status and the cost incurred. The cost incurred in the management of DM in 2017 was 1.4-2.7 times that in 2008. Conclusion: It is evident from the study that the economic burden due to diabetes mellitus is substantial. It poses a significant financial burden on the healthcare system, the individual and society as a whole. There is a need for strategies to achieve optimal glycemic control and to operationalize regular and early screening for complications so as to reduce the burden of the disease.
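
A minimal sketch of the group comparison described above (Kruskal-Wallis across cost groups followed by Dunn's post-hoc test, and a Spearman correlation with disease duration), using invented cost values; the scikit-posthocs package is assumed to be available for Dunn's test, and the abstract does not name the software actually used.

```python
from scipy.stats import kruskal, spearmanr
import scikit_posthocs as sp  # assumed available; provides Dunn's post-hoc test

# Hypothetical annual cost data (arbitrary currency units) for three of the groups
uncomplicated = [410, 520, 480, 455, 500]
nephropathy   = [1300, 1250, 1420, 1380, 1190]
ihd           = [2100, 1980, 2240, 2050, 2310]

h, p = kruskal(uncomplicated, nephropathy, ihd)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Dunn's pairwise comparisons with Bonferroni correction
print(sp.posthoc_dunn([uncomplicated, nephropathy, ihd], p_adjust="bonferroni"))

# Spearman correlation between diabetes duration (years) and cost, hypothetical values
duration = [2, 5, 8, 12, 15]
cost     = [450, 700, 1100, 1500, 2000]
rho, p_rho = spearmanr(duration, cost)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```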

Keywords: COI, diabetes mellitus, bottom-up approach, economics

Procedia PDF Downloads 117
1003 Growing Pains and Organizational Development in Growing Enterprises: Conceptual Model and Its Empirical Examination

Authors: Maciej Czarnecki

Abstract:

Even though growth is one of the most important strategic objectives for many enterprises, we know relatively little about this phenomenon. This research contributes to broadening our knowledge of the managerial consequences of growth. Scales for measuring organizational development and growing pains were developed, and a conceptual model of the connections among growth, organizational development, growing pains, selected development factors and financial performance was examined. The research process comprised a literature review, 20 interviews with managers, examination of 12 raters' opinions, pilot research and a 7-point Likert-scale questionnaire administered to 138 Polish enterprises employing 50-249 people which had increased their employment by at least 50% within the last three years. Factor analysis, the Pearson product-moment correlation coefficient, Student's t-test and the chi-squared test were used to develop the scales, and high Cronbach's alpha coefficients were obtained. The verification of correlations among the constructs was carried out with factor correlations, multiple regressions and path analysis. When an enterprise grows, it is necessary to implement changes in its structure, management practices etc. (organizational development) to meet the challenges of growing complexity. In this paper, organizational development was defined as internal changes aiming to improve the quality of existing elements, or to introduce new ones, in the areas of processes, organizational structure and culture, and operational and management systems. Thus: H1: Growth has positive effects on organizational development. The main thesis of the research is that if organizational development does not catch up with the growing complexity of a growing enterprise, growing pains will arise (lower work comfort, conflicts, lack of control etc.). They will exert a negative influence on financial performance and may result in a serious organizational crisis or even bankruptcy. Thus: H2: Growth has positive effects on growing pains; H3: Organizational development has negative effects on growing pains; H4: Growing pains have negative effects on financial performance; H5: Organizational development has positive effects on financial performance. Scholars have considered long lists of factors with a potential influence on organizational development. The development of a comprehensive model taking into account all possible variables may be beyond the capacity of any researcher or even of the statistical software used. After the literature review, it was decided to increase the level of abstraction and to include the following constructs in the conceptual model: organizational learning (OL), positive organization (PO) and high performance factors (HPF). H1a/b/c: OL/PO/HPF has a positive effect on organizational development; H2a/b/c: OL/PO/HPF has a negative effect on growing pains. The results of hypothesis testing: H1: partly supported; H1a/b/c: supported/not supported/supported; H2: not supported; H2a/b/c: not supported/partly supported/not supported; H3: supported; H4: partly supported; H5: supported. The research seems to be of great value for both scholars and practitioners. It showed that OL and HPF matter for organizational development. Scales for measuring organizational development and growing pains were developed. Its main finding, though, is that organizational development is a good way of improving financial performance.
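
For readers unfamiliar with the scale-reliability step mentioned above, the sketch below computes Cronbach's alpha for a hypothetical item-response matrix; the items and scores are invented and do not come from the study's questionnaire.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 7-point Likert responses: 6 respondents x 4 items of one scale
responses = [
    [6, 5, 6, 7],
    [4, 4, 5, 4],
    [7, 6, 6, 6],
    [3, 4, 3, 4],
    [5, 5, 6, 5],
    [6, 6, 7, 6],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```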

Keywords: organizational development, growth, growing pains, financial performance

Procedia PDF Downloads 220
1002 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain's subconscious and conscious functions operate, we must conquer the physics of Unity, which leads to duality's algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like 'time is relative,' but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge the observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain's physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain's linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain's synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation: basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available, because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What's interesting is that time dilation is not the problem; it's the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 127
1001 Perception of Nurses and Caregivers on Fall Preventive Management for Hospitalized Children Based on Ecological Model

Authors: Mirim Kim, Won-Oak Oh

Abstract:

Purpose: The purpose of this study was to identify hospitalized children's fall risk factors, fall prevention status and fall prevention strategies as recognized by nurses and caregivers of hospitalized children, and to present an ecological model for fall preventive management in hospitalized children. Method: The participants of this study were 14 nurses working in medical institutions, each with more than one year of child care experience, and 14 adult caregivers of children under 6 years of age receiving inpatient treatment at a medical institution. One-to-one interviews were conducted to identify their perceptions of fall preventive management. Transcribed data were analyzed using the latent content analysis method. Results: The fall risk factors for hospitalized children were 'unpredictable behavior', 'instability', 'lack of awareness about danger', 'lack of awareness about falls', 'lack of child control ability', 'lack of awareness about the importance of fall prevention', 'lack of sensitivity to children', 'untidy environment around children', 'lack of personalized facilities for children', 'unsafe facility', 'lack of partnership between healthcare provider and caregiver', 'lack of human resources', 'inadequate fall prevention policy', 'lack of promotion about fall prevention' and 'a performance-oriented culture'. The fall preventive management status of hospitalized children comprised 'absence of fall prevention capability', 'efforts not to fall', 'blocking fall risk situation', 'limit the scope of children's activity when there is no caregiver', 'encourage caregivers' fall prevention activities', 'creating a safe environment surrounding hospitalized children', 'special management for fall high risk children', 'mutual cooperation between healthcare providers and caregivers', 'implementation of fall prevention policy' and 'providing guide signs about fall risk'. The fall preventive management strategies for hospitalized children were 'restrain dangerous behavior', 'inspiring awareness about falls', 'providing fall preventive education considering the child's eye level', 'efforts to become an active subject of fall prevention activities', 'providing customized fall prevention education', 'open communication between healthcare providers and caregivers', 'infrastructure and personnel management to create a safe hospital environment', 'expansion of the fall prevention campaign', 'development and application of a valid fall assessment instrument' and 'conversion of awareness about safety'. Conclusion: In this study, the ecological model of fall preventive management for hospitalized children reflects various factors that directly or indirectly affect the fall prevention of hospitalized children. Therefore, these results can be considered useful baseline data for developing systematic fall prevention programs and hospital policies to prevent fall accidents in hospitalized children. Funding: This study was funded by the National Research Foundation of South Korea (grant number NRF-2016R1A2B1015455).

Keywords: fall down, safety culture, hospitalized children, risk factors

Procedia PDF Downloads 167
1000 Combustion Characteristics and Pollutant Emissions in Gasoline/Ethanol Mixed Fuels

Authors: Shin Woo Kim, Eui Ju Lee

Abstract:

The recent development of biofuel production technology facilitates the use of bioethanol and biodiesel in automobiles. Bioethanol, especially, can be used as a fuel for gasoline vehicles because the addition of ethanol is known to increase the octane number and reduce soot emissions. However, the wide application of biofuel is still limited by the lack of detailed combustion properties, such as the auto-ignition temperature, and of pollutant emission data, such as NOx and soot, which mainly concern vehicle fire safety and environmental safety. In this study, the combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally. For auto-ignition temperature and NOx emission, a numerical simulation of a well-stirred reactor (WSR) was performed to represent a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The response surface method (RSM) was also introduced as a design of experiments (DOE), which enables the various combustion properties to be predicted and optimized systematically with respect to three independent variables, i.e., ethanol mole fraction, equivalence ratio and residence time. The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but the NOx yield decreases with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbons increase dramatically with increasing ethanol content, which results from incomplete combustion and hence calls for adjusting the combustion itself rather than relying on an after-treatment system. The RSM analysis with the three independent variables predicts the auto-ignition temperature accurately. However, NOx emission showed a big difference between the calculated values and the values predicted with conventional RSM, because NOx emission varies very steeply and the fitted second-order polynomial cannot follow such rates. To relax the steep variation of the dependent variable, the NOx emission was converted to common logarithms and the RSM fit was repeated. The NOx emission predicted through the logarithm transformation is in fairly good agreement with the experimental results. For a more tangible understanding of the pollutant emissions of gasoline/ethanol fuel, experimental measurements of combustion products were performed in gasoline/ethanol pool fires, which are widely used as a fire source in laboratory-scale experiments. Three measurement methods were used to characterize the pollutant emissions: measurement of various gas concentrations including NOx, gravimetric soot filter sampling for elemental analysis and pyrolysis, and thermophoretic soot sampling with transmission electron microscopy (TEM). The soot yield from gravimetric sampling decreased dramatically as ethanol was added, but NOx emission was almost comparable regardless of the ethanol mole fraction. The morphology of the soot particles was investigated to assess the degree of soot maturity. Incipient soot, such as liquid-like PAHs, was observed clearly in the soot from gasoline with higher ethanol content, and the soot appeared to be matured for the undiluted gasoline fuel.
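
As a sketch of the response-surface step described above, the code below fits a second-order polynomial in the three design variables to log10(NOx), using invented design points and NOx values; the actual simulation data and the software used for the RSM fit are not given in the abstract.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Hypothetical design points: ethanol mole fraction, equivalence ratio, residence time (s)
X = np.array([
    [0.0, 0.8, 0.01], [0.0, 1.0, 0.05], [0.2, 0.9, 0.03], [0.2, 1.1, 0.01],
    [0.5, 1.0, 0.05], [0.5, 0.8, 0.03], [0.8, 1.1, 0.05], [0.8, 0.9, 0.01],
    [1.0, 1.0, 0.03], [1.0, 0.8, 0.05], [0.3, 1.1, 0.03], [0.7, 1.0, 0.01],
])
nox_ppm = np.array([850, 1900, 1200, 2100, 1500, 700, 1800, 900, 1300, 600, 2000, 1400])

# Second-order (quadratic) response surface fitted to log10(NOx), as in the abstract,
# to tame the steep variation of NOx across the design space.
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, np.log10(nox_ppm))

new_point = [[0.3, 0.95, 0.02]]  # hypothetical operating condition
print(f"predicted NOx = {10 ** rsm.predict(new_point)[0]:.0f} ppm")
```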

Keywords: gasoline/ethanol fuel, NOx, pool fire, soot, well-stirred reactor (WSR)

Procedia PDF Downloads 212
999 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog

Authors: Ana Flavia Belchior De Andrade

Abstract:

Forensic laboratories all over the world face a great challenge in overcoming waiting times and backlogs in many different areas. Many factors contribute to this situation, such as an increase in drug complexity, an increase in the number of exams requested and funding cuts that limit laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories: keeping both quality and turnaround time within an acceptable range. In this paper we analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between the years 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of requested exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has plateaued because, with the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most frequently required exam, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed no later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When a marijuana exam is delayed, we notice an increase in inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, the inconclusive rate increases to 13%. This is probably because Cannabis plants and preparations undergo oxidation under storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours of our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to consider regarding backlogged cases; there are also social issues, as legal procedures can be delayed and the prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: if an exam takes too long to be performed, an inconclusive result can turn into a negative result, and a criminal can be acquitted on the basis of flawed expert evidence.

Keywords: backlog, forensic laboratory, quality management, accreditation

Procedia PDF Downloads 122
998 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world's data is generated by the financial industry, with an estimated 708.5 billion global non-cash transactions reached between 2018 and. And with Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders so that they can openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to offer a secure, decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to the user interactions on the platform. One of the platform's key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was developed into a proof-of-concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
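
The sketch below is an illustrative, off-chain Python model of the kinds of records the market layer described above would manage (listings, data-subject consent and an access check); it is not the platform's actual smart contract, and all names and fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Listing:
    listing_id: str
    provider: str          # data provider (e.g., a financial institution)
    description: str
    price: float

@dataclass
class Consent:
    listing_id: str
    data_subject: str      # individual whose transactions generated the data
    granted: bool = True

@dataclass
class Marketplace:
    listings: dict = field(default_factory=dict)
    consents: list = field(default_factory=list)

    def list_data(self, listing: Listing):
        self.listings[listing.listing_id] = listing

    def grant_consent(self, consent: Consent):
        self.consents.append(consent)

    def can_purchase(self, listing_id: str, data_subject: str) -> bool:
        # A consumer may only access data the subject has consented to share
        return any(c.granted and c.listing_id == listing_id and
                   c.data_subject == data_subject for c in self.consents)

market = Marketplace()
market.list_data(Listing("L1", "BankX", "Anonymised card transactions", 10.0))
market.grant_consent(Consent("L1", "subject-42"))
print(market.can_purchase("L1", "subject-42"))  # True
```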

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 74
997 Using Scilab® as a New Introductory Method in Numerical Calculations and Programming for Computational Fluid Dynamics (CFD)

Authors: Nicoly Coelho, Eduardo Vieira Vilas Boas, Paulo Orestes Formigoni

Abstract:

Faced with the remarkable developments in the various segments of modern engineering brought about by increasing technological development, professionals in all educational areas need to help those who are starting their academic journey overcome the difficulties of gaining a good understanding of these topics. Aiming to overcome these difficulties, this article provides an introduction to the basic study of numerical methods applied to fluid mechanics and thermodynamics, demonstrating modeling and simulation and explaining in detail the fundamental numerical solution by the finite difference method using Scilab, free software that is easily accessible and can be used by any research center or university, anywhere, in both developed and developing countries. Computational Fluid Dynamics (CFD) is known to be a necessary tool for engineers and professionals who study fluid mechanics; however, the teaching of this area of knowledge in undergraduate programs has faced difficulties due to software costs and the degree of difficulty of the mathematical problems involved, so the subject is often treated only in postgraduate courses. This work aims to bring low-cost CFD to the teaching of Transport Phenomena at the undergraduate level by analyzing a small, classic case of fundamental thermodynamics with the Scilab® program. The study starts from the basic theory: the partial differential equation governing the heat transfer problem, which students must master; the discretization process, based on the principles of Taylor series expansion, which generates the system of equations; the convergence check of that system using the Sassenfeld criterion; and finally its solution by the Gauss-Seidel method. In this work we demonstrate both simple problems solved manually and the more complex problems that require computer implementation, for which we use a small algorithm of fewer than 200 lines in Scilab® in a heat transfer study of a rectangular plate heated on its four sides, with different temperatures on each side, producing a two-dimensional transport problem with a colored graphic simulation. With the spread of computer technology, numerous programs have emerged that require considerable programming skill from researchers. Considering that this ability to program CFD is the main problem to be overcome, both by students and by researchers, we present in this article a suggestion for the use of programs with a less complex interface, making it easier to produce graphical modeling and simulation for CFD and extending programming experience to undergraduates.
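
As an illustration of the scheme the abstract describes (a rectangular plate with fixed, different temperatures on its four sides, discretized by finite differences and solved by Gauss-Seidel), a minimal Python sketch is given below; the original work uses a Scilab script of under 200 lines, and the grid size, boundary temperatures and tolerance here are hypothetical.

```python
import numpy as np

# Hypothetical boundary temperatures (deg C) on the four sides of the plate
T_top, T_bottom, T_left, T_right = 100.0, 0.0, 50.0, 75.0
nx, ny = 30, 30            # interior grid resolution
tol = 1e-4                 # convergence tolerance

T = np.zeros((ny + 2, nx + 2))
T[0, :], T[-1, :] = T_top, T_bottom
T[:, 0], T[:, -1] = T_left, T_right

# Gauss-Seidel iteration on the 5-point finite-difference Laplace stencil
for iteration in range(10_000):
    max_change = 0.0
    for i in range(1, ny + 1):
        for j in range(1, nx + 1):
            new = 0.25 * (T[i - 1, j] + T[i + 1, j] + T[i, j - 1] + T[i, j + 1])
            max_change = max(max_change, abs(new - T[i, j]))
            T[i, j] = new
    if max_change < tol:
        break

print(f"converged after {iteration + 1} sweeps; center temperature = {T[ny // 2, nx // 2]:.2f}")
```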

Keywords: numerical methods, finite difference method, heat transfer, Scilab

Procedia PDF Downloads 388
996 Analysis of Unconditional Conservatism and Earnings Quality before and after the IFRS Adoption

Authors: Monica Santi, Evita Puspitasari

Abstract:

The International Financial Reporting Standards (IFRS) constitute a principle-based set of accounting standards. On this basis, the IASB eliminated the conservatism concept from its accounting framework. The conservatism concept represents a prudent reaction to uncertainty which tries to ensure that the uncertainties and risks inherent in business situations are adequately considered. The conservatism concept has two ingredients: conditional conservatism, or ex-post (news-dependent) prudence, and unconditional conservatism, or ex-ante (news-independent) prudence. IFRS in substance disregards unconditional conservatism because it can cause the understatement of assets or the overstatement of liabilities, and eventually the financial statements would become irrelevant, since the information would not represent the real facts. For this reason, the IASB eliminated the conservatism concept. However, this has not reduced the practice of unconditional conservatism in financial statement reporting. We therefore expected the earnings quality to be affected by this situation, even though the IFRS implementation was expected to increase the earnings quality. The objective of this study was to provide empirical findings about unconditional conservatism and earnings quality before and after the IFRS adoption. The earnings-per-accrual measure was used as the proxy for unconditional conservatism: if earnings per accrual were negative (positive), the company was classified as conservative (not conservative). Earnings quality was defined as the ability of earnings to reflect future earnings, considering earnings persistence and stability. We used the earnings response coefficient (ERC) as the proxy for earnings quality. The ERC measures the extent of a security's abnormal market return in response to the unexpected component of the reported earnings of the firm issuing that security; a higher ERC indicates higher earnings quality. The manufacturing companies listed on the Indonesian Stock Exchange (IDX) were used as the sample, with the 2009-2010 period representing the condition before the IFRS adoption and 2011-2013 the condition after. Data were analyzed using the Mann-Whitney test and regression analysis. We used firm size as the control variable, on the consideration that firm size would affect the earnings quality of a company. This study found that unconditional conservatism did not change between the periods before and after the IFRS adoption. However, we found a different result for earnings quality, which decreased after the IFRS adoption; this implies that earnings quality was higher before the adoption. The study also found that unconditional conservatism had a positive but insignificant influence on earnings quality. The findings imply that the implementation of the IFRS has neither decreased the practice of unconditional conservatism nor altered the earnings quality of the manufacturing companies. Even though the empirical results show that unconditional conservatism gave a positive influence on earnings quality, the influence was not significant. Thus, we conclude that the implementation of the IFRS did not increase earnings quality.
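
As an illustration of the ERC proxy described above, the sketch below regresses hypothetical abnormal returns on the unexpected component of earnings; the slope of that regression is the earnings response coefficient. The data points are invented, and the abstract does not detail the exact return window or estimation procedure.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical firm-year observations: unexpected earnings (scaled by price)
# and cumulative abnormal returns around the earnings announcement.
unexpected_earnings = np.array([-0.04, -0.01, 0.00, 0.02, 0.03, 0.05])
abnormal_return     = np.array([-0.06, -0.02, 0.01, 0.03, 0.04, 0.08])

# The slope of abnormal return on unexpected earnings is the ERC
fit = linregress(unexpected_earnings, abnormal_return)
print(f"ERC = {fit.slope:.2f}, R^2 = {fit.rvalue ** 2:.2f}")
```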

Keywords: earnings quality, earnings response coefficient, IFRS Adoption, unconditional conservatism

Procedia PDF Downloads 261
995 The Effects of Alpha-Lipoic Acid Supplementation on Post-Stroke Patients: A Systematic Review and Meta-Analysis of Randomized Controlled Trials

Authors: Hamid Abbasi, Neda Jourabchi, Ranasadat Abedi, Kiarash Tajernarenj, Mehdi Farhoudi, Sarvin Sanaie

Abstract:

Background: Alpha-lipoic acid (ALA), a fat- and water-soluble, sulfur-containing coenzyme, has received considerable attention for its potential therapeutic role in diabetes, cardiovascular diseases, cancers, and central nervous system disease. This investigation aims to evaluate the probable protective effects of ALA in stroke patients. Methods: This meta-analysis was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The PICO criteria were as follows: Population/Patients (P: stroke patients); Intervention (I: ALA); Comparison (C: control); Outcome (O: blood glucose, lipid profile, oxidative stress, inflammatory factors). Studies excluded from the analysis were in vitro, in vivo, and ex vivo studies, case reports, and quasi-experimental studies. The Scopus, PubMed, Web of Science and EMBASE databases were searched up to August 2023. Results: Of the 496 records screened at the title/abstract stage, 9 studies were included in this meta-analysis. The sample sizes of the included studies vary between 28 and 90. Risk of bias was assessed with the second version of the Cochrane risk-of-bias (RoB) tool for randomized controlled trials (RCTs); 8 studies had a definitely high risk of bias. Discussion: To the best of our knowledge, the present meta-analysis is the first study addressing the effectiveness of ALA supplementation in enhancing post-stroke metabolic markers, including the lipid profile, oxidative stress, and inflammatory indices. It is imperative to acknowledge certain potential limitations inherent in this study. First of all, the type of treatment (oral or intravenous infusion) could alter the bioavailability of ALA. Our study had restricted evidence regarding the impact of ALA supplementation on the included outcomes. Therefore, further research is warranted to delve into the effects of ALA, specifically on inflammation and oxidative stress. Funding: The research protocol was approved and supported by the Student Research Committee, Tabriz University of Medical Sciences (grant number: 72825). Registration: This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO ID: CR42023461612).

Keywords: alpha-lipoic acid, lipid profile, blood glucose, inflammatory factors, oxidative stress, meta-analysis, post-stroke

Procedia PDF Downloads 65
994 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
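
A minimal sketch of the multi-attribute utility calculation at the heart of MATE-style tradespace exploration: each candidate sensor-system design is scored on several attributes, single-attribute utilities are combined with stakeholder weights, and designs are compared on utility versus cost. The attributes, weights and designs below are hypothetical and use a simple linear-additive aggregation, which the abstract does not prescribe.

```python
# Hypothetical single-attribute utilities (0-1) for three candidate sensor-system designs
designs = {
    "design_A": {"detection_range": 0.9, "data_rate": 0.6, "power_draw": 0.4, "cost_musd": 2.5},
    "design_B": {"detection_range": 0.7, "data_rate": 0.8, "power_draw": 0.7, "cost_musd": 1.8},
    "design_C": {"detection_range": 0.5, "data_rate": 0.9, "power_draw": 0.9, "cost_musd": 1.2},
}

# Hypothetical stakeholder weights over the performance attributes (sum to 1)
weights = {"detection_range": 0.5, "data_rate": 0.3, "power_draw": 0.2}

def multi_attribute_utility(attrs):
    """Linear-additive aggregation of single-attribute utilities."""
    return sum(weights[a] * attrs[a] for a in weights)

for name, attrs in designs.items():
    u = multi_attribute_utility(attrs)
    print(f"{name}: utility = {u:.2f}, cost = ${attrs['cost_musd']}M")
# Plotting utility against cost for every enumerated design gives the tradespace
# and its Pareto front of non-dominated designs.
```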

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 189
993 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review

Authors: Hanan Algarni

Abstract:

Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that offers patients feedback and open, random skills practice while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: Participant: adult >18 years, stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of the included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's RoB tool. A narrative synthesis of the findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour each. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. The RoB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up. Outcome measures, training intensity and duration also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was based on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p < 0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up (2 weeks-1 month) in two studies, and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce stronger, more robust evidence.

Keywords: virtual reality, treadmill, stroke, gait rehabilitation

Procedia PDF Downloads 274
992 Salmonella Emerging Serotypes in Northwestern Italy: Genetic Characterization by Pulsed-Field Gel Electrophoresis

Authors: Clara Tramuta, Floris Irene, Daniela Manila Bianchi, Monica Pitti, Giulia Federica Cazzaniga, Lucia Decastelli

Abstract:

This work presents the results obtained by the Regional Reference Centre for Salmonella Typing (CeRTiS) in a retrospective study aimed at investigating, through Pulsed-Field Gel Electrophoresis (PFGE) analysis, the genetic relatedness of emerging Salmonella serotypes of human origin circulating in North-West Italy. A further goal of this work was to create a regional database to facilitate foodborne outbreak investigation and to detect outbreaks at an earlier stage. A total of 112 strains, isolated from 2016 to 2018 in hospital laboratories, were included in this study. The isolates were previously identified as Salmonella according to standard microbiological techniques, and serotyping was performed according to ISO 6579-3 and the Kauffmann-White scheme using O and H antisera (Statens Serum Institut®). All strains were characterized by PFGE: analysis was conducted according to a standardized PulseNet protocol. The restriction enzyme XbaI was used to generate several distinguishable genomic fragments on the agarose gel. PFGE was performed on a CHEF Mapper system, separating large fragments and generating comparable genetic patterns. The agarose gel was then stained with GelRed® and photographed under ultraviolet transillumination. The PFGE patterns obtained from the 112 strains were compared using Bionumerics version 7.6 software with the Dice coefficient, with 2% band tolerance and 2% optimization. For each serotype, the PFGE data were compared according to the geographical origin and the year of isolation. The Salmonella strains were identified as follows: S. Derby, n. 34; S. Infantis, n. 38; S. Napoli, n. 40. All the isolates had appreciable restriction digestion patterns ranging from approximately 40 to 1100 kb. In general, a fairly heterogeneous distribution of pulsotypes emerged in the different provinces. Cluster analysis indicated high genetic similarity (≥ 83%) among strains of S. Derby (n. 30; 88%), S. Infantis (n. 36; 95%) and S. Napoli (n. 38; 95%) circulating in north-western Italy. The study underlines the genomic similarities shared by the emerging Salmonella strains in Northwest Italy and allowed the creation of a database to detect outbreaks at an early stage. The results therefore confirm that PFGE is a powerful and discriminatory tool for investigating the genetic relationships among strains in order to monitor and control the spread of salmonellosis outbreaks. Pulsed-field gel electrophoresis still represents one of the most suitable approaches to characterize strains, in particular for laboratories where NGS techniques are not available.
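As an aside, the band-matching step behind the Dice comparison can be illustrated with a short sketch: two band patterns are compared, two fragments are counted as matching when their sizes agree within the stated 2% tolerance, and the Dice coefficient is twice the number of matches divided by the total number of bands. The fragment sizes below are hypothetical and are not taken from the study.

```python
# Illustrative Dice-coefficient calculation for two PFGE band patterns.
# Band sizes (kb) are hypothetical; the 2% tolerance mirrors the settings in the text.
def dice_similarity(bands_a, bands_b, tolerance=0.02):
    """Dice coefficient: 2 * matches / (len(a) + len(b)); bands match within +/- tolerance."""
    unmatched_b = list(bands_b)
    matches = 0
    for a in bands_a:
        for b in unmatched_b:
            if abs(a - b) <= tolerance * max(a, b):
                matches += 1
                unmatched_b.remove(b)   # each band can only be matched once
                break
    return 2 * matches / (len(bands_a) + len(bands_b))

strain_1 = [48, 97, 210, 340, 520, 680, 1050]   # hypothetical fragment sizes in kb
strain_2 = [48, 99, 215, 335, 510, 700]

print(f"Dice similarity: {dice_similarity(strain_1, strain_2):.2f}")
```

Software such as Bionumerics then builds a dendrogram from the pairwise similarity matrix (typically by UPGMA clustering) to identify clusters such as the ≥ 83% similarity groups reported above.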

Keywords: emerging Salmonella serotypes, genetic characterization, human strains, PFGE

Procedia PDF Downloads 108
991 Securing Communities to Bring Sustainable Development, Building Peace and Community Safety: The Ethiopian Community Policing in Amhara National Regional State of Ethiopia

Authors: Demelash Kassaye

Abstract:

The Ethiopian case study reveals a unique model of community policing that has developed from a particular political context in which there is a history of violent political transition, a political structure characterized by ethnic federalism and a political ideology that straddles liberal capitalism and democracy on the one hand, and state-led development and centralized control on the other. The police see community policing as a way to reduce crime. Communities speak about community policing as an opportunity to take on policing responsibilities themselves. Both of these objectives are brought together in an overarching rhetoric of community policing as a way of ‘mobilizing for development’ – whereby the community cooperate with the police to reduce crime, which otherwise inhibits development progress. Community policing in Amhara has primarily involved the placement of Community Police Officers at the kebele level across the State. In addition, a number of structures have also been established in the community, including Advisory Councils, Conflict Resolving Committees, family police and the use of shoe shiners' and other trade associations as police informants. In addition to these newly created structures, community policing also draws upon pre-existing customary actors, such as militia and elders. Conflict Resolving Committees, Community Police Officers and elders were reported as the most common first ports of call when community members experience a crime. The analysis highlights that the model of community policing in Amhara increased communities’ access to policing services, although this is not always attended by increased access to justice. Community members also indicate that public perceptions of the police have improved since the introduction of community policing, in part due to individual Community Police Officers who have, with limited resources, innovated some impressive strategies to improve safety in their neighborhoods. However, more broadly, community policing has provided the state with more effective surveillance of the population – a potentially oppressive function in the current political context. Ultimately, community policing in Amhara is anything but straightforward. It has been a process of attempting to demonstrate the benefits of newfound (and controversial) ‘democracy’ following years of dictatorship, drawing on generations of customary dispute resolution, providing both improved access to security for communities and an enhanced surveillance capacity for the state. For external actors looking to engage in community policing, this case study reveals the importance of close analysis in assessing potential merits, risks and entry points of programming. Factors found to be central in shaping the nature of community policing in the Amhara case include the structure of the political system, state-society relations, cultures of dispute resolution and political ideology.

Keywords: community policing, community, militias, Ethiopia

Procedia PDF Downloads 133
990 The Investigation of the Effect of Alpha Lipoic Acid against Damage to the Neonatal Rat Lung Due to Maternal Tobacco Smoke Exposure

Authors: Elif Erdem, Nalan Kaya, Gonca Ozan, Durrin Ozlem Dabak, Enver Ozan

Abstract:

This study was carried out to determine the histological and biochemical changes in the lungs of rat pups exposed to tobacco smoke during the pregnancy period and to investigate the protective effects of alpha lipoic acid, administered during pregnancy, on these changes. In our study, 24 six-week-old Sprague-Dawley female rats weighing 160 ± 10 g were used (n = 7). The rats were randomly divided into four equal groups: group I (control), group II (tobacco smoke), group III (tobacco smoke + alpha lipoic acid) and group IV (alpha lipoic acid). Rats in groups II and III were exposed to tobacco smoke twice a day for one hour, starting from eight weeks before mating and continuing throughout pregnancy. In addition to tobacco smoke, 20 mg/kg of alpha lipoic acid was administered via oral gavage to the rats in group III. Only alpha lipoic acid was administered to the rats in group IV. After delivery, all administrations were stopped. On the 7th and 21st days, seven pups from each group were decapitated. A portion of the lung was taken and stained with HE, PAS and Masson. In addition to immunohistochemical staining for surfactant protein A (SP-A), vascular endothelial growth factor (VEGF) and caspase-3, the TUNEL method was used to determine apoptosis. Biochemical analyses were performed on part of the lung tissue specimens. In the histological evaluations performed under light microscopy, an increase in inflammatory cells, hemorrhagic areas, edema, interalveolar septal thickening, a decrease in alveolar numbers, degeneration of some bronchi and bronchial epithelium, epithelial cells shed into the lumen, and hyaline membrane formation were observed in the tobacco smoke group. These findings were ameliorated in the tobacco smoke + ALA group, and hyaline membrane formation was not detected in this group. A significant increase in TUNEL-positive cell numbers was detected in the tobacco smoke group, whereas a significant decrease was detected in the tobacco smoke + ALA group. In terms of the immunoreactivity of both SP-A and VEGF, a significant decrease was observed in the tobacco smoke group, and a significant increase was observed in the tobacco smoke + ALA group. Regarding the immunoreactivity of caspase-3, there was a significant increase in the tobacco smoke group and a significant decrease in the tobacco smoke + ALA group. Malondialdehyde levels were significantly increased in the tobacco smoke group and significantly decreased in the tobacco smoke + ALA group. Glutathione and superoxide dismutase enzyme activities showed a significant decrease in the tobacco smoke group and a significant increase in the tobacco smoke + ALA group. In conclusion, we suggest that exposure to tobacco smoke during pregnancy leads to morphological, histopathological and functional changes in lung development by causing oxidative damage in the lung tissues of neonatal rats, and that maternal use of alpha lipoic acid can provide a protective effect on neonatal lung development against this tobacco smoke-derived oxidative stress.

Keywords: alpha lipoic acid, lung, neonate, tobacco smoke, pregnancy

Procedia PDF Downloads 211
989 Advancements in Electronic Sensor Technologies for Tea Quality Evaluation

Authors: Raana Babadi Fathipour

Abstract:

Tea, second only to water in global consumption rates, holds a significant place as the beverage of choice for many around the world. The process of fermenting tea leaves plays a crucial role in determining its ultimate quality, traditionally assessed through meticulous observation by tea tasters and laboratory analysis. However, advancements in technology have paved the way for innovative electronic sensing platforms like the electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye). These cutting-edge tools, coupled with sophisticated data processing algorithms, not only expedite the assessment of tea's sensory qualities based on consumer preferences but also establish new benchmarks for this esteemed bioactive product to meet burgeoning market demands worldwide. By harnessing intricate data sets derived from electronic signals and deploying multivariate statistical techniques, these technological marvels can enhance accuracy in predicting and distinguishing tea quality with unparalleled precision. In this contemporary exploration, a comprehensive overview is provided of the most recent breakthroughs and viable solutions aimed at addressing forthcoming challenges in the realm of tea analysis. Utilizing bio-mimicking Electronic Sensory Perception systems (ESPs), researchers have developed innovative technologies that enable precise and instantaneous evaluation of the sensory-chemical attributes inherent in tea and its derivatives. These sophisticated sensing mechanisms are adept at deciphering key elements such as aroma, taste, and color profiles, transitioning valuable data into intricate mathematical algorithms for classification purposes. Through their adept capabilities, these cutting-edge devices exhibit remarkable proficiency in discerning various teas with respect to their distinct pricing structures, geographic origins, harvest epochs, fermentation processes, storage durations, quality classifications, and potential adulteration levels. While voltammetric and fluorescent sensor arrays have emerged as promising tools for constructing electronic tongue systems proficient in scrutinizing tea compositions, potentiometric electrodes continue to serve as reliable instruments for meticulously monitoring taste dynamics within different tea varieties. By implementing a feature-level fusion strategy within predictive models, marked enhancements can be achieved regarding efficiency and accuracy levels. Moreover, by establishing intrinsic linkages through pattern recognition methodologies between sensory traits and biochemical makeup found within tea samples, further strides are made toward enhancing our understanding of this venerable beverage's complex nature.
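To illustrate the feature-level fusion strategy mentioned above, the following minimal Python sketch concatenates simulated e-nose and e-tongue response vectors into a single feature matrix and feeds it to a standard multivariate pipeline (scaling, PCA, and an SVM classifier via scikit-learn). The sensor channels, class labels and data are simulated for illustration only and do not come from any particular tea dataset.

```python
# Sketch of feature-level fusion for tea classification: e-nose and e-tongue
# responses are concatenated into one feature vector before a single classifier.
# Data are simulated; in practice the arrays would hold real sensor-array readings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_classes = 120, 3                    # e.g. three hypothetical tea quality grades
y = rng.integers(0, n_classes, n_samples)

e_nose   = rng.normal(size=(n_samples, 10)) + y[:, None] * 0.5   # 10 simulated gas-sensor channels
e_tongue = rng.normal(size=(n_samples, 7))  + y[:, None] * 0.3   # 7 simulated voltammetric channels

X_fused = np.hstack([e_nose, e_tongue])          # feature-level fusion: simple concatenation

model = make_pipeline(StandardScaler(), PCA(n_components=8), SVC(kernel="rbf"))
scores = cross_val_score(model, X_fused, y, cv=5)
print(f"cross-validated accuracy with fused features: {scores.mean():.2f}")
```

The same pipeline run on either modality alone gives a baseline against which the gain from fusion can be judged, which is the essence of the feature-level fusion argument made in the abstract.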

Keywords: classifier system, tea, polyphenol, sensor, taste sensor

Procedia PDF Downloads 2
988 Assessment of a Rapid Detection Sensor of Faecal Pollution in Freshwater

Authors: Ciprian Briciu-Burghina, Brendan Heery, Dermot Brabazon, Fiona Regan

Abstract:

Good quality bathing water is a highly desirable natural resource which can provide major economic, social, and environmental benefits. Both in Ireland and across Europe, such water bodies are managed under the European Directive for the management of bathing water quality (BWD). The BWD has three main aims: (i) to improve health protection for bathers by introducing stricter standards for faecal pollution assessment (E. coli, enterococci), (ii) to establish a more pro-active approach to the assessment of possible pollution risks and the management of bathing waters, and (iii) to increase public involvement and the dissemination of information to the general public. Standard methods for E. coli and enterococci quantification rely on cultivation of the target organism, which requires long incubation periods (from 18 h to a few days). This is not ideal when immediate action is required for risk mitigation. Municipalities that oversee bathing water quality and deploy appropriate signage have to wait for laboratory results; during this time, bathers can be exposed to pollution events and health risks. Although forecasting tools exist, they are site-specific and, as a consequence, require extensive historical data to be effective. Another approach for early detection of faecal pollution is the use of marker enzymes. β-glucuronidase (GUS) is a widely accepted biomarker for E. coli detection in microbiological water quality control. GUS assays are particularly attractive as they are rapid (less than 4 h), easy to perform and do not require specialised training. A method for on-site detection of GUS from environmental samples in less than 75 min was previously demonstrated. In this study, the capability of ColiSense as an early warning system for faecal pollution in freshwater is assessed. The system successfully detected GUS activity in all of the 45 freshwater samples tested. GUS activity was found to correlate linearly with E. coli (r2=0.53, N=45, p < 0.001) and enterococci (r2=0.66, N=45, p < 0.001). Although GUS is a marker for E. coli, a better correlation was obtained for enterococci. For this study, water samples were collected from 5 rivers in the Dublin area over 1 month, which suggests that a high diversity of pollution sources (agricultural, industrial, etc.), as well as both point and diffuse pollution sources, was captured in the sample set. Such variety in the source of E. coli can account for different GUS activities per culturable cell and different ratios of viable-but-not-culturable to viable culturable bacteria. A previously developed protocol for the recovery and detection of E. coli was coupled with a miniaturised fluorometer (ColiSense), and the system was assessed for the rapid detection of faecal indicator bacteria (FIB) in freshwater samples. Further work will be carried out to evaluate the system's performance on seawater samples.
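To make the reported correlations concrete, here is a minimal sketch (with simulated numbers, not the study's data) of the kind of linear regression that yields an r² and p-value for GUS activity against log-transformed culturable counts, using SciPy.

```python
# Sketch of the correlation analysis reported above: linear regression of
# log10-transformed culturable counts against GUS activity. Values are simulated
# placeholders, not measurements from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gus_activity = rng.uniform(0.5, 50, 45)                                # hypothetical GUS units, n = 45 samples
log_enterococci = 1.5 + 0.03 * gus_activity + rng.normal(0, 0.4, 45)   # hypothetical log10 CFU / 100 mL

res = stats.linregress(gus_activity, log_enterococci)
print(f"r^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3g}")
```

In practice the regression would be run twice, once against E. coli counts and once against enterococci counts, giving the two r² values quoted in the abstract.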

Keywords: faecal pollution, β-glucuronidase (GUS), bathing water, E. coli

Procedia PDF Downloads 284
987 A Mixed-Method Study Exploring Expressive Writing as a Brief Intervention Targeting Mental Health and Wellbeing in Higher Education Students: A Focus on the Qualitative Findings

Authors: Deborah Bailey-Rodriguez, Maria Paula Valdivieso Rueda, Gemma Reynolds

Abstract:

In recent years, the mental health of Higher Education (HE) students has been a growing concern. This has been further exacerbated by the stresses associated with the Covid-19 pandemic, placing students at even greater risk of developing mental health issues. Support available to students in HE tends to follow an established and traditional route. The demand for counseling services has grown, not only with the increase in student numbers but also with the number of students seeking support for mental health issues, with 94% of HE institutions recently reporting an increase in the need for counseling services. One way of improving the well-being and mental health of HE students is through the use of brief interventions, such as expressive writing (EW). This intervention involves encouraging individuals to write continuously for at least 15-20 minutes for three to five sessions (often on consecutive days) about their deepest thoughts and feelings, in order to explore significant personal experiences in a meaningful way. Given the brevity, simplicity and cost-effectiveness of EW, this intervention has considerable potential for HE populations. The current study therefore employed a mixed-methods design to explore the effectiveness of EW in reducing anxiety, general stress, academic stress and depression in HE students while improving well-being. HE students at MDX were randomly assigned to one of three conditions: (1) the UniExp-EW group was required to write about their emotions and thoughts about any stressors they have faced that are directly relevant to their university experience; (2) the NonUniExp-EW group was required to write about their emotions and thoughts about any stressors that are NOT directly relevant to their university experience; and (3) the Control group was required to write about how they spent their weekend, with no reference to thoughts or emotions, and without thinking about university. Participants were required to carry out the EW intervention for 15 minutes per day for four consecutive days. Baseline mental health and well-being measures were taken before the intervention via a battery of standardized questionnaires. Following completion of the intervention on day four, participants were required to complete the questionnaires a second time, and again one week later. Participants were also invited to attend focus groups to discuss their experience of the intervention, allowing an in-depth investigation of students' perceptions of EW as an effective intervention and of whether they would choose to use it in the future. Preliminary findings will be discussed at the conference, along with the important implications of the findings. The study is important because, if EW is an effective intervention for improving mental health and well-being in HE students, its brevity and simplicity mean it can be easily implemented and made freely available to students. Improving the mental health and well-being of HE students can have knock-on implications for improving academic skills and career development.

Keywords: expressive writing, higher education, psychology in education, mixed-methods, mental health, academic stress

Procedia PDF Downloads 70
986 The Dilemma of Translanguaging Pedagogy in a Multilingual University in South Africa

Authors: Zakhile Somlata

Abstract:

In the context of international linguistic and cultural diversity, all languages can be used for all purposes. Africa in general, and South Africa in particular, is no exception to being a multilingual and multicultural society. The multilingual and multicultural nature of South African society has a direct bearing on the heterogeneity of South African universities in general. Universities, as centers of research, innovation, and transformation of the entire society, should be at the forefront of leading multilingualism. The universities in South Africa had been using English, and to a certain extent Afrikaans, as the only academic languages during the colonial and apartheid regimes. The democratic breakthrough of 1994 brought linguistic relief in South Africa. The Constitution of the Republic of South Africa recognizes 11 official languages that should enjoy parity of esteem for the realization of multilingualism. The elevation of the nine previously marginalized indigenous African languages to academic languages in higher education is central to multilingualism. It is high time that an Afrocentric model, rather than a Eurocentric one, underpinned the education system in South Africa at all levels. Almost all South African universities have language policies that seek to promote the access and success of students through multilingualism, but the main dilemma is the implementation of these language policies. This study responds to two objectives: (i) to evaluate how selected institutions use language policies for the accessibility and success of students; (ii) to study how selected universities integrate African languages for both academic and administrative purposes. This paper reflects the language policy practices in one selected University of Technology (UoT) in South Africa. The UoT has its own language policy, which depicts the linguistic diversity of the institution and its commitment to promoting multilingualism. Translanguaging pedagogy, which accommodates the use of minority languages in the teaching and learning process, plays a pivotal role in promoting multilingualism. This research paper employs a mixed-methods (quantitative and qualitative) approach. Qualitative data have been collected from key informants (insiders and experts), while quantitative data have been collected from a cohort of third-year students. A mixed-methods approach with a convergent parallel design allows the data to be collected and analysed separately, with the results then compared. Language development initiatives are discussed within the framework of language policy and policy implementation strategies. Theoretically, this paper is rooted in language as a problem, language as a right and language as a resource. The findings demonstrate that, despite being a multilingual institution, the university perpetuates the marginalization of African languages as academic languages. The findings further display the hegemony of English. The promotion of the status quo compromises the promotion of multilingualism, the Africanization of higher education and the intellectualization of indigenous African languages in South Africa under a democratic dispensation.

Keywords: afro-centric model, hegemony of English, language as a resource, translanguaging pedagogy

Procedia PDF Downloads 193
985 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This perfectly aligns with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, this paper establishes specific three-level data rights. The paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial in establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 41
984 Investigating Sediment-Bound Chemical Transport in an Eastern Mediterranean Perennial Stream to Identify Priority Pollution Sources on a Catchment Scale

Authors: Felicia Orah Rein Moshe

Abstract:

Soil erosion has become a priority global concern, impairing water quality and degrading ecosystem services. In Mediterranean climates, following a long dry period, the onset of rain occurs when agricultural soils are often bare and most vulnerable to erosion. Early storms transport sediments and sediment-bound pollutants into streams, along with dissolved chemicals. This results in loss of valuable topsoil, water quality degradation, and potentially expensive dredged-material disposal costs. Information on the provenance of fine sediment and priority sources of adsorbed pollutants represents a critical need for developing effective control strategies aimed at source reduction. Modifying sediment traps designed for marine systems, this study tested a cost-effective method to collect suspended sediments on a catchment scale to characterize stream water quality during first-flush storm events in a flashy Eastern Mediterranean coastal perennial stream. This study investigated the Kishon Basin, deploying sediment traps in 23 locations, including 4 in the mainstream and one downstream in each of 19 tributaries, enabling the characterization of sediment as a vehicle for transporting chemicals. Further, it enabled direct comparison of sediment-bound pollutants transported during the first-flush winter storms of 2020 from each of 19 tributaries, allowing subsequent ecotoxicity ranking. Sediment samples were successfully captured in 22 locations. Pesticides, pharmaceuticals, nutrients, and metal concentrations were quantified, identifying a total of 50 pesticides, 15 pharmaceuticals, and 22 metals, with 16 pesticides and 3 pharmaceuticals found in all 23 locations, demonstrating the importance of this transport pathway. Heavy metals were detected in only one tributary, identifying an important watershed pollution source with immediate potential influence on long-term dredging costs. Simultaneous sediment sampling at first flush storms enabled clear identification of priority tributaries and their chemical contributions, advancing a new national watershed monitoring approach, facilitating strategic plan development based on source reduction, and advancing the goal of improving the farm-stream interface, conserving soil resources, and protecting water quality.

Keywords: adsorbed pollution, dredged material, heavy metals, suspended sediment, water quality monitoring

Procedia PDF Downloads 109
983 Harvesting Value-Added Products through Anodic Electrocatalytic Upgrading of Intermediate Compounds Utilizing Biomass to Accelerate Hydrogen Evolution

Authors: Mehran Nozari-Asbemarz, Italo Pisano, Simin Arshi, Edmond Magner, James J. Leahy

Abstract:

Integrating electrolytic synthesis with renewable energy makes it feasible to address urgent environmental and energy challenges. Conventional water electrolyzers produce H₂ and O₂ concurrently, demanding additional gas-separation procedures to prevent contamination of H₂ with O₂. Moreover, the oxygen evolution reaction (OER), which is sluggish and has a low overall energy conversion efficiency, does not deliver a significant value product at the electrode surface. Compared to conventional water electrolysis, integrating electrolytic hydrogen generation from water with thermodynamically more advantageous aqueous organic oxidation processes can increase energy conversion efficiency and create value-added compounds instead of oxygen at the anode. One strategy is to use renewable and sustainable carbon sources from biomass, which has a large annual production capacity and presents a significant opportunity to supplement carbon sourced from fossil fuels. Numerous catalytic techniques have been researched in order to utilize biomass economically. Because of its safe operating conditions, excellent energy efficiency, and reasonable control over production rate and selectivity through electrochemical parameters, electrocatalytic upgrading stands out as an appealing choice among the various biomass refinery technologies. We therefore propose a broad framework for coupling H₂ generation from water splitting with oxidative biomass upgrading processes. Representative biomass targets were considered for oxidative upgrading using a hierarchically porous CoFe-MOF/LDH @ Graphite Paper bifunctional electrocatalyst, including glucose, ethanol, benzyl, furfural, and 5-hydroxymethylfurfural (HMF). The potential required to support 50 mA cm-2 is considerably lower (by ~380 mV) than that required for the OER. All of these compounds can be oxidized to yield liquid byproducts of economic benefit. The electrocatalytic oxidation of glucose to the value-added products gluconic acid, glucuronic acid, and glucaric acid was examined in detail. The cell potential for combined H₂ production and glucose oxidation was substantially lower than for water splitting (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm-2), while the oxidation byproduct at the anode was significantly more valuable than O₂, taking advantage of the more favorable glucose oxidation in comparison to the OER. Overall, such a combination of the HER and oxidative biomass valorization using electrocatalysts prevents the production of potentially explosive H₂/O₂ mixtures and produces high-value products at both electrodes with a lower voltage input, thereby increasing the efficiency and activity of electrocatalytic conversion.
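As a rough, back-of-envelope check of the benefit implied by the quoted cell potentials (1.44 V(RHE) vs. 1.82 V(RHE) at 50 mA cm-2), the sketch below estimates the electrical energy required per mole of H₂ in each case, assuming two electrons per H₂ molecule and 100% Faradaic efficiency; these assumptions are illustrative and are not stated in the abstract.

```python
# Back-of-envelope estimate of the electrical energy saved per mole of H2 when the
# anode reaction is glucose oxidation (1.44 V) instead of the OER (1.82 V).
# Assumes 2 electrons per H2 molecule and 100% Faradaic efficiency (illustrative).
F = 96485            # C/mol, Faraday constant
n = 2                # electrons transferred per H2 molecule

def energy_kj_per_mol_h2(cell_voltage):
    """Electrical energy input per mole of H2 at a given cell voltage, in kJ/mol."""
    return n * F * cell_voltage / 1000

e_oer     = energy_kj_per_mol_h2(1.82)
e_glucose = energy_kj_per_mol_h2(1.44)
print(f"OER-coupled:     {e_oer:.0f} kJ/mol H2")
print(f"glucose-coupled: {e_glucose:.0f} kJ/mol H2")
print(f"saving:          {e_oer - e_glucose:.0f} kJ/mol H2 (~{(1 - 1.44 / 1.82) * 100:.0f}%)")
```

Under these assumptions the ~0.38 V reduction in cell voltage corresponds to roughly 73 kJ saved per mole of H₂, i.e. about a 21% reduction in electrical energy input.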

Keywords: biomass, electrocatalytic, glucose oxidation, hydrogen evolution

Procedia PDF Downloads 97
982 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use Ankle-Foot Orthoses (AFOs) have a high level of dissatisfaction with their current AFOs. Studies point to the focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed in order to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction such as heaviness, bulk, and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, the objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO's medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective metrics include comfort, discreteness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two (the piano wire model and the segmented model) were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.
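As an illustration of how the system-level comparison of concepts against such metrics might be formalized, the following is a minimal weighted decision-matrix sketch in Python; the metric categories echo those listed above, but the weights and 1-5 scores are purely hypothetical and are not the study's data.

```python
# Hypothetical weighted decision matrix for comparing AFO concepts against the
# metric categories named above; weights and 1-5 scores are illustrative only.
metrics = {          # metric: weight (weights sum to 1.0)
    "comfort": 0.20, "discreteness": 0.10, "ease_of_use": 0.15,
    "biomechanical_function": 0.25, "durability": 0.20, "affordability": 0.10,
}
concepts = {         # concept: score per metric (1 = poor, 5 = excellent)
    "piano_wire": {"comfort": 4, "discreteness": 4, "ease_of_use": 4,
                   "biomechanical_function": 5, "durability": 5, "affordability": 3},
    "segmented":  {"comfort": 4, "discreteness": 3, "ease_of_use": 3,
                   "biomechanical_function": 4, "durability": 3, "affordability": 4},
}

for name, scores in concepts.items():
    total = sum(metrics[m] * scores[m] for m in metrics)   # weighted sum per concept
    print(f"{name:12s} weighted score = {total:.2f}")
```

A matrix of this kind makes the trade-offs between user-perspective and medical-purpose metrics explicit and auditable when down-selecting from several concepts to one.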

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 158
981 The Charge Exchange and Mixture Formation Model in the ASz-62IR Radial Aircraft Engine

Authors: Pawel Magryta, Tytus Tulwin, Paweł Karpiński

Abstract:

The ASz-62IR engine is a radial aircraft engine with 9 cylinders. It is produced by the Polish company WSK "PZL-KALISZ" S.A. and is currently being developed by that company together with Lublin University of Technology. In order to support the effective technological development of this unit, it was decided to build a simulation model. The model of the ASz-62IR was developed with the AVL BOOST software, a tool dedicated to the one-dimensional modeling of internal combustion engines. This model can be used to calculate the parameters of air and fuel flow in an intake system, including charging devices, as well as combustion and exhaust flow to the environment. The main purpose of this model is the analysis of charge exchange and mixture formation in this engine. For this purpose, the model consists of elements such as: the air inlet, throttle system, compressor connector, charging compressor, inlet pipes and injectors, outlet pipes, fuel injection, and a model of fuel mixing and evaporation. The model of charge exchange and mixture formation was based on the model of mass flow rate in the intake and exhaust pipes, and also on the calculation of gas property values such as the gas constant and thermal capacity. This model was based on the equations describing isentropic flow. The energy equation describing flow under steady conditions was transformed into the mass flow equation. In the model, the flow coefficient μσ was used, which varies with the stroke/valve opening and was determined under steady flow conditions. The geometry of the inlet channels and other key components was mapped with reference to the technical documentation of the engine and empirical measurements of the structural elements. The volume of elements along the charge flow path between the air inlet and the exhaust outlet was measured by CAD mapping of the structure. The original characteristics of the engine's compressor, taken from the technical documentation, were entered into the model. Additionally, the model uses a general model for the transport of the chemical compounds of the mixture. Seven compounds are used: fuel, O2, N2, CO2, H2O, CO and H2. A gasoline fuel with a calorific value of 43.5 MJ/kg and a stoichiometric air-fuel ratio of 14.5 was used. Indirect injection into the intake manifold is used in this model. The model assumes the following simplifications: the mixture is homogeneous at the beginning of combustion; accordingly, the mixture stoichiometric coefficient A/F remains constant during combustion; and the combusted and non-combusted charges show identical pressures and temperatures although their compositions change. As a result of the simulation studies based on the model described above, the basic parameters of the combustion process, charge exchange and mixture formation in the cylinders were obtained. The AVL BOOST software is very useful for piston engine performance simulations. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
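As an illustration of the isentropic flow relation described above (the energy equation for steady flow rearranged into a mass flow equation, with an effective area given by the flow coefficient μσ times a reference area), here is a minimal Python sketch of mass flow through a valve or restriction, including the choked-flow limit. The numerical values are illustrative and are not taken from the ASz-62IR documentation.

```python
# Sketch of the isentropic mass-flow relation used for flow through a valve or
# restriction, with an effective area A_eff = mu_sigma * A_ref. Values are
# illustrative, not taken from the ASz-62IR documentation.
from math import sqrt

def mass_flow(mu_sigma, a_ref, p0, t0, p_down, kappa=1.4, r_gas=287.0):
    """Isentropic mass flow [kg/s] through an orifice; limited to choked flow below the critical ratio."""
    pr_crit = (2.0 / (kappa + 1.0)) ** (kappa / (kappa - 1.0))
    pr = max(p_down / p0, pr_crit)                       # clamp to the choked-flow pressure ratio
    psi = sqrt(2.0 * kappa / (kappa - 1.0)
               * (pr ** (2.0 / kappa) - pr ** ((kappa + 1.0) / kappa)))
    return mu_sigma * a_ref * p0 / sqrt(r_gas * t0) * psi

# Illustrative numbers: ~40 mm valve reference area, flow coefficient 0.7,
# upstream stagnation state 101325 Pa / 300 K, downstream pressure 80 kPa.
m_dot = mass_flow(mu_sigma=0.7, a_ref=1.26e-3, p0=101325.0, t0=300.0, p_down=80000.0)
print(f"mass flow ≈ {m_dot:.3f} kg/s")
```

In a one-dimensional engine model of this kind, a relation of this form is evaluated at every valve and restriction for each crank-angle step, with μσ taken from steady-flow test data as a function of valve lift.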

Keywords: aviation propulsion, AVL Boost, engine model, charge exchange, mixture formation

Procedia PDF Downloads 340