Search results for: restructuring digital factory model
9576 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
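As an illustrative aside, the three-model averaging described in this abstract might look like the sketch below; the synthetic data, feature count, and hyperparameters are hypothetical stand-ins, not the study's actual configuration.

```python
# Hedged sketch of the ensemble described above: train three classifiers
# (logistic regression, random forest, neural network) and average their
# predicted probabilities. All data and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for timing variables, weather forecasts, and past pollutant levels.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
]
for m in models:
    m.fit(X_tr, y_tr)

# Average the class-1 probabilities of the three top-performing models.
avg_prob = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
accuracy = np.mean((avg_prob > 0.5) == y_te)
print(f"combined-model accuracy: {accuracy:.2f}")
```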
Procedia PDF Downloads 130
9575 The Chemical Transport Mechanism of Emitter Micro-Particles in Tungsten Electrode: A Metallurgical Study
Authors: G. Singh, H. Schuster, U. Füssel
Abstract:
The stability of the electric arc and the durability of the electrode tip used in Tungsten Inert Gas (TIG) welding demand a metallurgical study of the chemical transport mechanism of emitter oxide particles in the tungsten electrode under real welding conditions. Tungsten electrodes doped with emitter oxides such as La₂O₃, ThO₂, Y₂O₃, CeO₂ and ZrO₂ feature a comparatively lower work function than pure tungsten and thus have superior emission characteristics, achieved at a lower cathode surface temperature. The local change in concentration of these emitter particles in the tungsten electrode due to high-temperature diffusion (chemical transport) can change its functional properties such as electrode temperature, work function, electron emission, and the stability of the electrode tip shape. The resulting increase in tip surface temperature leads to electrode material loss. It was also observed that tungsten recrystallizes into large grains at high temperature. When the grain boundaries are granular in shape, the intergranular diffusion of oxide emitter particles takes more time to reach the electrode surface. In the experimental work, the microstructure of the used electrode’s tip surface will be studied by scanning electron microscopy and a reflective X-ray technique in order to gauge the extent of the diffusion and chemical reaction of the emitter particles. In addition, a simulation model is proposed to explain the effect of oxide particle diffusion on the electrode’s microstructure, electron emission characteristics, and electrode tip erosion. This model suggests metallurgical modifications to the tungsten electrode to enhance its erosion resistance.
Keywords: rare-earth emitter particles, temperature-dependent diffusion, TIG welding, tungsten electrode
Procedia PDF Downloads 189
9574 The Development of E-Commerce in Mexico: An Econometric Analysis
Authors: Alma Lucero Ortiz, Mario Gomez
Abstract:
Technological advances contribute to the well-being of humanity by allowing people to work more efficiently. Technology offers tangible advantages to countries that adopt information technologies, communication, and the Internet in all social and productive sectors. The Internet is a networking infrastructure that allows the communication of people throughout the world, exceeding the limits of time and space. Nowadays, the Internet has changed the way of doing business, leading to a digital economy. In this context, e-commerce has emerged as commercial transactions conducted over the Internet. For this inquiry, e-commerce is seen as a source of economic growth for the country. Thereby, this research aims to answer the following question: which are the main variables that have affected the development of e-commerce in Mexico? The research covers the period from 1990 to 2017 and aims to gain insight into how the independent variables influence e-commerce development. The independent variables are information infrastructure construction, urbanization level, economic level, technology level, human capital level, educational level, standards of living, and a price index. The results suggest that the independent variables have an impact on the development of e-commerce in Mexico. The present study is carried out in five parts. After the introduction, the second part presents a literature review of the main qualitative and quantitative studies measuring the variables under study. In the third part, an empirical study is conducted using time-series data, and an econometric model is estimated to process the data. In the fourth part, the analysis and discussion of results are presented, and finally, some conclusions are included.
Keywords: digital economy, e-commerce, econometric model, economic growth, internet
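For illustration only, an econometric specification of this kind could be estimated as below; the data file and column names are hypothetical stand-ins for the indicators listed in the abstract.

```python
# Hedged sketch of a time-series regression of e-commerce development on the
# independent variables named in the abstract (1990-2017). The CSV file and
# column names are hypothetical, not the authors' actual dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mexico_ecommerce_1990_2017.csv")  # hypothetical dataset
model = smf.ols(
    "ecommerce_index ~ info_infrastructure + urbanization + gdp_per_capita"
    " + technology_level + human_capital + education + living_standards"
    " + price_index",
    data=df,
).fit()
print(model.summary())
```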
Procedia PDF Downloads 243
9573 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations
Authors: Tai Yuan Yu, Pei-Jen Wang
Abstract:
Oil-free turbomachinery is considered one of the critical technologies for future green power generation systems based on rotating machinery. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings, abbreviated as GFBs, are important to the technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, considerable improvement has been made to the computational models of dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pattern of pressure distribution in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single pad and three pad foil bearings to predict bearing working behavior, so that the characteristics of the two configurations can be compared. The coupling analysis model applies the dynamic working characteristics to both the gas film and the mechanical structures. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution pattern can also be solved iteratively by the coupling analysis. In conclusion, the working fluid state in the gas film of both pad configurations at constant rotational speed can be solved and compared with experimental results.
Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic
Procedia PDF Downloads 164
9572 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering
Authors: Heikki Valmu, Eero Kupila, Raisa Vartia
Abstract:
A major curricular reform was implemented in Metropolia UAS in 2014. The teaching was to be based on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate. The improvement of the results has been much more significant in this department compared to the other engineering departments, which made only minor pedagogical changes. In the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The study consisted of thirty questions about the implementation of the curriculum, the student workload and other matters related to student satisfaction. The reply rate was more than 40%. The students were divided into four categories: first year students [cat. 1] and students of the three different majors [categories 2-4]. These categories were found valid since all the students have the same course structure in the first two semesters, after which they may freely select the major. All staff members are divided into four teams respectively. The curriculum consists of consecutive 15 credit (ECTS) courses, each taught by a group of teachers (3-5). There are to be no end exams, and continuous assessment is to be employed. In 2014 the different teacher groups were encouraged to innovatively employ different assessment methods within the given specs. One of these methods has since been used in categories 1 and 2. These students have to complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports and laboratory exercises to larger projects, and the actual smaller tests are usually organized during the regular lecture hours. The teachers of the other two majors have been pedagogically more conservative. The student progression has been better in categories 1 and 2 compared to categories 3 and 4. One of the main goals of this survey was to analyze the reasons for the difference and the assessment methods in detail, besides the general student satisfaction. The results show that in the categories following the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Also, the student satisfaction is significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve the learning outcomes, student progression as well as student satisfaction. Too much academic freedom seems to lead to worse results [cat. 3 and 4]. A standardized assessment model is launched for all students in autumn 2017. This model is different from the one used so far in categories 1 and 2, allowing more flexibility to teacher groups, but it will force all the teacher groups to follow the general rules in order to improve the results and the student satisfaction further.
Keywords: continuous assessment, course integration, curricular reform, student feedback
Procedia PDF Downloads 204
9571 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is further being enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
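To make the partition-and-aggregate pattern concrete, a minimal mpi4py sketch is given below; the agent state and update rule are deliberately stubbed placeholders, not the authors' model logic.

```python
# Hedged sketch of the DMP pattern described above: agents are partitioned
# across MPI processes, each process steps its local agents, and macro
# aggregates are combined with a collective operation.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_AGENTS = 9_000_000  # e.g., a 1:1 model of Austria's inhabitants
local_n = N_AGENTS // size + (1 if rank < N_AGENTS % size else 0)
rng = np.random.default_rng(rank)
wealth = rng.exponential(1.0, local_n)  # stand-in for agent state

def step(wealth):
    # Placeholder for local interactions (consumption, wages, credit, ...).
    return wealth * (1.0 + rng.normal(0.0, 0.01, wealth.size))

for t in range(10):  # ten time steps
    wealth = step(wealth)
    # Combine local sums into an economy-wide macro aggregate.
    total = comm.allreduce(wealth.sum(), op=MPI.SUM)
    if rank == 0:
        print(f"step {t}: aggregate wealth = {total:.3e}")
```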
Procedia PDF Downloads 130
9570 Stress-Strain Relation for Human Trabecular Bone Based on Nanoindentation Measurements
Authors: Marek Pawlikowski, Krzysztof Jankowski, Konstanty Skalski, Anna Makuch
Abstract:
Nanoindentation, or the depth-sensing indentation (DSI) technique, has proven very useful for measuring the mechanical properties of various tissues at the micro-scale. Bone tissue, both trabecular and cortical, is one of the most commonly tested tissues by means of DSI. Most often such tests on bone samples are carried out to compare the mechanical properties of lamellar and interlamellar bone, osteonal bone, as well as compact and cancellous bone. In this paper, a relation between stress and strain for human trabecular bone is presented. The relation, and the formulation of a constitutive model for human trabecular bone, is based on the results of nanoindentation tests. In the study, the approach proposed by Oliver and Pharr is adapted. The tests were carried out on samples of trabecular tissue extracted from human femoral heads. The heads were harvested during surgeries of artificial hip joint implantation. Before sample preparation, the heads were kept in 95% alcohol at a temperature of 4 °C. The cubic samples cut out of the heads were stored in the same conditions. The dimensions of the specimens were 25 mm x 25 mm x 20 mm. A total of 20 samples were tested. The age range of the donors was between 56 and 83 years. The tests were conducted with a spherical indenter tip of diameter 0.200 mm. The maximum load was P = 500 mN and the loading rate 500 mN/min. The data obtained from the DSI tests allow one to determine bone behaviour only in terms of nanoindentation force vs. nanoindentation depth. However, it is more interesting and useful to know the characteristics of trabecular bone in the stress-strain domain, which allows one to simulate trabecular bone behaviour in a more realistic way. The stress-strain curves obtained in the study show a relation between age and the mechanical behaviour of trabecular bone. It was also observed that the bone matrix of trabecular tissue exhibits an ability to absorb energy.
Keywords: constitutive model, mechanical behaviour, nanoindentation, trabecular bone
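A minimal sketch of the Oliver-Pharr reduction for a spherical tip is given below; the stiffness and depth inputs are placeholders, not the study's measurements.

```python
# Hedged sketch of the Oliver-Pharr analysis for a spherical indenter:
# contact depth from the unloading stiffness, contact area from tip geometry,
# then reduced modulus and hardness. Input numbers are illustrative.
import math

P_max = 0.5          # maximum load, N (500 mN as in the protocol above)
S = 2.0e5            # unloading stiffness dP/dh at P_max, N/m (placeholder)
h_max = 1.0e-5       # maximum indentation depth, m (placeholder)
R = 1.0e-4           # tip radius, m (0.200 mm diameter tip)
eps = 0.75           # Oliver-Pharr geometry constant for a spherical tip
beta = 1.0           # tip-shape correction factor

h_c = h_max - eps * P_max / S                 # contact depth
A_c = math.pi * (2.0 * R * h_c - h_c ** 2)    # projected contact area
E_r = math.sqrt(math.pi) * S / (2.0 * beta * math.sqrt(A_c))  # reduced modulus
H = P_max / A_c                               # hardness

print(f"E_r = {E_r / 1e9:.2f} GPa, H = {H / 1e6:.1f} MPa")
```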
Procedia PDF Downloads 223
9569 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms
Authors: Arpine Maghakyan
Abstract:
The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, in particular, the relationship between Big 4 auditor fees and industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client’s business automation model. Higher audit risk will consequently cause higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors’ behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in determining audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client’s industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm that has a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability and auditor quality. We chose the fixed effects model as it makes it possible to control for variables that have not been or cannot be measured.
Keywords: audit fees, auditor quality, digitalization, Big 4
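For illustration, a firm fixed-effects specification of the kind described could be estimated as below; the data file and column names are hypothetical stand-ins for Audit Analytics and Compustat fields.

```python
# Hedged sketch of a firm fixed-effects regression of log audit fees on
# digitalization, Big 4 status, and controls, with firm and year effects
# entered as dummies. Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("audit_fees_2005_2015.csv")  # hypothetical panel dataset
model = smf.ols(
    "log_audit_fee ~ digitalization * big4 + firm_size + complexity"
    " + inherent_risk + profitability + C(firm_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary())
```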
Procedia PDF Downloads 303
9568 Allergenic Potential of Airborne Algae Isolated from Malaysia
Authors: Chu Wan-Loy, Kok Yih-Yih, Choong Siew-Ling
Abstract:
The human health risks due to poor air quality caused by a wide array of microorganisms have attracted much interest. Airborne algae have been reported as early as the 19th century, and they can be found in the air of tropical and warm atmospheres. Airborne algae normally originate from water surfaces, soil, trees, buildings and rock surfaces. It is estimated that at least 2880 algal cells are inhaled per day by humans. However, there are relatively few data published on airborne algae and their related adverse health effects, except sporadic reports of algae-associated clinical allergenicity. A collection of airborne algae cultures was established following a recent survey on the occurrence of airborne algae in indoor and outdoor environments in Kuala Lumpur. The aim of this study was to investigate the allergenic potential of the isolated airborne green and blue-green algae, namely Scenedesmus sp., Cylindrospermum sp. and Hapalosiphon sp. Suspensions of freeze-dried airborne algae were administered to a BALB/c mouse model through the intranasal route to determine their allergenic potential. Results showed that Scenedesmus sp. (1 mg/mL) increased the systemic IgE levels in mice by 3-8 fold compared to pre-treatment. On the other hand, Cylindrospermum sp. and Hapalosiphon sp. at a similar concentration caused the IgE to increase by 2-4 fold. The potential of airborne algae to cause IgE-mediated type 1 hypersensitivity was elucidated using other immunological markers such as the cytokines interleukin (IL)-4, -5, and -6 and interferon-γ. When we compared the amount of interleukins in mouse serum between day 0 and day 53 (day of sacrifice), Hapalosiphon sp. (1 mg/mL) increased the expression of IL-4 and IL-6 by 8 fold, while Cylindrospermum sp. (1 mg/mL) increased the expression of IL-4 and IFN-γ by 8 and 2 fold, respectively. In conclusion, repeated exposure to the three selected airborne algae may stimulate the immune response and generate IgE in a mouse model.
Keywords: airborne algae, respiratory, allergenic, immune response, Malaysia
Procedia PDF Downloads 241
9567 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
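As an illustrative aside, the XGBoost-plus-SHAP workflow described above might be sketched as follows; the feature names and data file are hypothetical stand-ins for the certificate-derived features.

```python
# Hedged sketch of the workflow above: train an XGBoost regressor on
# certificate-derived features and inspect feature effects with SHAP.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("epc_training_data.csv")  # hypothetical calibration dataset
features = ["building_type", "year_built", "living_space",
            "insulation_level", "wall_type", "roof_type", "window_type"]
X = pd.get_dummies(df[features])           # one-hot encode categoricals
y = df["efficiency_score"]                 # energy score on the 1-100 scale

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, MAE = {mean_absolute_error(y_te, pred):.2f}")

# SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```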
Procedia PDF Downloads 60
9566 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID
Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis
Abstract:
Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages resulted in a huge strain on the radiology department’s workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that could detect acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays of patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected because both are lossless file formats. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training of the model involved teaching a convolutional neural network what constituted a normal study and what changes on the x-rays are compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis. Following a discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made in order to improve the detection rates. Given the high detection rate of the program, and the potential ease of implementation, this would be effective in assisting staff who are not trained in radiology in detecting otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and application of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detecting multiple pathologies such as lung masses will greatly assist both the radiology department and our colleagues in increasing workflow and detection rate.
Keywords: artificial intelligence, COVID, neural network, machine learning
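For illustration, a small convolutional classifier with the reported batch size and epoch count might be sketched as follows; the directory layout, image size, and architecture are assumptions, not the study's actual network.

```python
# Hedged sketch of a binary convolutional classifier trained as described:
# batches of 8 for 25 epochs on normal vs. COVID-positive chest x-rays.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG = (224, 224)  # assumed input size
train = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=IMG, batch_size=8)   # 50 positive, 50 negative
val = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=IMG, batch_size=8)     # 14 positive, 14 negative

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG + (3,)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # P(COVID-compatible changes)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train, validation_data=val, epochs=25)
```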
Procedia PDF Downloads 95
9565 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger
Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe
Abstract:
Energy demands are increasing whereas energy sources, especially non-renewable sources, are limited. Due to the intermittent nature of renewable energy sources, it has become the need of the hour to find new ways to store energy. Among the various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at a nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, built in ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. A vertical triplex tube heat exchanger, as its name suggests, consists of three concentric tubes (pipe sections) that divide the device into three fluid domains. The PCM is filled in the middle domain, with heat transfer fluids flowing in the outer and innermost domains. To enhance the heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain (between the innermost and middle tubes) into eight sections. These eight sections are then filled with the PCM. The validation is carried out against earlier work, and a grid independence test is also presented. Further studies on the freezing and melting processes were carried out. The results are presented in terms of pictorial representations of isotherms and liquid fraction.
Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat
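As an illustrative aside, the melting/freezing bookkeeping behind the reported liquid fraction can be sketched with a one-dimensional enthalpy-method model; FLUENT's solidification/melting model rests on a similar enthalpy-based idea, and all property values below are placeholders rather than the study's PCM data.

```python
# Hedged 1D enthalpy-method sketch of PCM melting: advance the enthalpy field
# explicitly under conduction, recover temperature and liquid fraction from it.
import numpy as np

k, rho, c = 0.2, 800.0, 2000.0   # conductivity, density, specific heat (placeholders)
L, Tm = 2.0e5, 30.0              # latent heat (J/kg), melting temperature (C)
nx, dx, dt = 50, 1e-3, 0.01      # 50 mm slab, explicit time step
H = np.full(nx, c * 20.0)        # enthalpy field, initially solid at 20 C

def temperature(H):
    T = np.where(H < c * Tm, H / c, Tm)                             # solid / mushy
    return np.where(H > c * Tm + L, Tm + (H - c * Tm - L) / c, T)   # liquid

for step in range(20000):
    T = temperature(H)
    T[0] = 60.0                  # hot wall (heat transfer fluid side)
    T[-1] = T[-2]                # insulated far wall
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    H[1:-1] += dt * k * lap / rho

liquid_fraction = np.clip((H - c * Tm) / L, 0.0, 1.0)
print(f"mean liquid fraction: {liquid_fraction.mean():.2f}")
```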
Procedia PDF Downloads 154
9564 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions
Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead
Abstract:
In the nursing process, many nursing classification systems have been created for international use, among them NANDA-I, the Nursing Outcomes Classification (NOC), and the Nursing Interventions Classification (NIC). In this direction, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outcomes are assessed by NOC-based measures. There are many scales to measure urinary incontinence (UI), which is very common in children, in old age, and after vaginal birth; NOC scales, with their available surveys, are ideal for use in the nursing process for a comprehensive and holistic assessment. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the NANDA-I UI diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much each indicator would contribute to recovery after the nursing intervention. Content validity was applied and calculated according to Fehring's (1987) model. According to this model, expert inclusion criteria and scores were determined: at least four years of clinical experience scored 4 points; at least one year of experience with the nursing classification systems scored 1 point; a publication on nursing classification scored 1 point; a doctoral degree in nursing scored 2 points; and a master's degree scored 1 point. A total of 55 experts, rated according to Fehring's expert scoring as 'senior degree' with a score of 90, took part. For the nursing interventions to be applied, the experts were asked to what extent these indicators would contribute to recovery. For content validity tailored to Fehring's model, each NOC outcome and NOC indicator was scored by the experts between 1 and 5, the significance of an indicator ranging from 1 = not important to 5 = very important, as sketched below. After the expert opinions, the weighted scores obtained for each NOC outcome and indicator were classified as critical (≥ 0.8), supplemental (0.5-0.8), or excluded (< 0.5). In the NANDA-I/NOC/NIC linkage guideline, five NOC outcomes are proposed for the UI nursing diagnoses. These outcomes are: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted average of the expert scores for the coverage of all five NOC outcomes and for the contribution of nursing interventions exceeded 0.8. Based on the expert opinions, 79 of the 82 indicators were rated as critical and 3 as supplemental; since no indicator scored below 0.5, none was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were verified for evaluating the outcomes of individuals receiving nursing care for UI and its variant types. Nurses in Turkey can benefit from these NOC scales when caring for elderly people with incontinence.
Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence
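For illustration, the Fehring-style weighting used above can be sketched as follows; the sample ratings are invented, and the weight mapping reflects the common form of Fehring's content-validity model.

```python
# Hedged sketch of Fehring-style weighted content-validity scoring: expert
# ratings of 1-5 are mapped to weights 0-1, averaged per indicator, and
# classified as critical (>= 0.8), supplemental (0.5-0.8), or excluded (< 0.5).
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def content_validity(ratings):
    score = sum(WEIGHTS[r] for r in ratings) / len(ratings)
    if score >= 0.8:
        label = "critical"
    elif score >= 0.5:
        label = "supplemental"
    else:
        label = "excluded"
    return score, label

# Invented ratings from 55 experts for one NOC indicator.
example = [5] * 40 + [4] * 10 + [3] * 5
print(content_validity(example))  # -> (0.909..., 'critical')
```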
Procedia PDF Downloads 126
9563 Determine Causal Factors Affecting the Responsiveness and Productivity of Non-Governmental Universities
Authors: Davoud Maleki
Abstract:
Today, education and investment in human capital is a long-term investment without which the economy will stagnate. Higher education represents a type of investment in human resources: by providing and improving knowledge, skills, and attitudes, it helps economic development. Providing efficient human resources by increasing people's efficiency and productivity on the one hand, and expanding the boundaries of knowledge and technology and training human resources at highly specialized levels on the other, are the responsibility of universities. Therefore, the university plays an infrastructural role in economic development and growth, because education creates skills and expertise in people and improves their abilities. In recent decades, Iran's higher education system has faced many problems; therefore, this study sought to identify and validate the causal factors affecting the responsiveness and productivity of non-governmental universities. The data in the qualitative part are the result of semi-structured interviews with 25 senior and middle managers working in units of the Islamic Azad University of Tehran province, selected by the theoretical sampling method. In the data analysis, the stepwise method and the analytical techniques of Strauss and Corbin (1992) were used. After determining the central category (answering for the sake of the beneficiaries) and using it to bring together the categories, expressions, and ideas that express the relationships between the main categories, six main categories were in the end identified as causal factors affecting the university's responsiveness and productivity. They are: (1) scientism, (2) human resources, (3) creating motivation in the university, (4) development based on needs assessment, (5) the teaching and learning process, and (6) university quality evaluation. In order to validate the responsiveness model obtained from the qualitative stage, a questionnaire was prepared, and the answers of 146 master's and doctoral students of the Islamic Azad University located in Tehran province were collected. The quantitative data were analyzed through descriptive analysis and first- and second-order factor analysis using SPSS and Amos 23. The findings of the research indicated the relationship between the central category and the causal factors affecting responsiveness, and the results of the model test in the quantitative stage confirmed the generality of the conceptual model.
Keywords: accountability, productivity, non-governmental universities, grounded theory
Procedia PDF Downloads 62
9562 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea
Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng
Abstract:
During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially people rescue and evacuation from the dangerous zone of marine accidents, has increased dramatically. Until the survivors (called ‘targets’) are found and saved, losses or damage may occur whose extent depends on the location of the targets and the search duration. The problem is to efficiently search for and detect/rescue the targets as soon as possible with the help of intelligent mobile robots, so as to maximize the number of saved people and/or minimize the search cost under restrictions on the number of people to be saved within the allowable response time. We consider a special situation where the autonomous mobile robots (AMR), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, as they are guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model of the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during the search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR’s search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, where a target object is not discovered (‘overlooked’) by the AMR’s sensors even though the AMR is in a close neighborhood of the target, and (ii) a 'false-positive' detection error, also known as ‘a false alarm’, in which a clean place or area is wrongly classified by the AMR’s sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies. A specificity of the considered operations research problem in comparison with the traditional Kadane-De Groot-Stone search models is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before selecting any next location. We provide a fast approximation algorithm for finding the AMR route, adopting a greedy search strategy in which, in each step, the on-board computer computes a current search effectiveness value for each location in the zone and sequentially searches for the location with the highest search effectiveness value, as sketched below. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea
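A minimal sketch of such a greedy rule follows; the detection probabilities, costs, and Bayesian overlook update are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a greedy search strategy: maintain a posterior over the
# target's location and, at each step, search the cell with the highest
# current effectiveness (expected detection per unit cost). After an
# unsuccessful look, update the posterior for the false-negative rate.
import numpy as np

rng = np.random.default_rng(1)
n = 20                                   # number of candidate locations
p = rng.dirichlet(np.ones(n))            # prior P(target in cell i)
d = rng.uniform(0.5, 0.9, n)             # P(detect | target present) = 1 - FN rate
cost = rng.uniform(1.0, 3.0, n)          # search cost per look

for step in range(30):
    effectiveness = p * d / cost         # expected detections per unit cost
    i = int(np.argmax(effectiveness))    # greedy choice of next cell
    print(f"step {step}: search cell {i}, effectiveness {effectiveness[i]:.4f}")
    # Assume the look was unsuccessful: Bayes update of the posterior.
    p[i] *= (1.0 - d[i])                 # overlook probability in cell i
    p /= p.sum()                         # renormalize over all cells
```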
Procedia PDF Downloads 174
9561 Influence of Infinite Elements in Vibration Analysis of High-Speed Railway Track
Authors: Janaki Rama Raju Patchamatla, Emani Pavan Kumar
Abstract:
The idea of increasing existing train speeds and introducing high-speed trains in India as a part of Vision 2020 is challenging in terms of both economic viability and technical feasibility. More than economic viability, technical feasibility has to be thoroughly checked for safe operation and execution. Trains moving at high speeds need a well-established, firm and safe track thoroughly tested against vibration effects. With increased train speeds, the track structure and layered soil-structure interaction have to be critically assessed for vibration and displacements. Physical establishment of track, testing and experimentation is a costly and time-consuming process. Software-based modelling and simulation give relatively reliable, cost-effective means of testing the effects of critical parameters like sleeper design and density, properties of track and sub-grade, etc. The present paper reports the applicability of infinite elements in reducing the unrealistic stress-wave reflections from the so-called soil-structure interface. The influence of the infinite elements is quantified in terms of the displacement time histories of the adjoining soil and the deformation pattern in general. In addition, the railhead response histories at various locations show that the numerical model is realistic, without any aberrations at the boundaries. The numerical model is quite promising in its ability to simulate the critical parameters of track design.
Keywords: high-speed railway track, finite element method, infinite elements, vibration analysis, soil-structure interface
Procedia PDF Downloads 272
9560 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics
Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn–Sham equations yet requires significant computational resources, restricting the size of systems and time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1 M LiPF6 in EC/EMC (v/v 3:7) electrolyte for Li battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. These data were then used to train the NequIP model, which efficiently learns from the provided data. NequIP achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while improving on the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP’s applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network
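As an illustrative aside, the AIMD-to-training-data step might be sketched with ASE as below; the file names are hypothetical, and the NequIP training run itself is driven by its own configuration files, which are not reproduced here.

```python
# Hedged sketch of preparing training data from VASP AIMD output with ASE:
# read positions, forces, and energies from OUTCAR and write them to an
# extended-XYZ file that a neural-network training pipeline can consume.
from ase.io import read, write

frames = read("OUTCAR", index=":")       # all AIMD frames (energies + forces)
print(f"read {len(frames)} frames of the LiPF6/EC/EMC trajectory")

# Optionally subsample the trajectory to decorrelate training frames.
training_frames = frames[::10]
write("train.extxyz", training_frames)   # extended XYZ keeps energy/forces
```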
Procedia PDF Downloads 26
9559 Design of the Compliant Mechanism of a Biomechanical Assistive Device for the Knee
Authors: Kevin Giraldo, Juan A. Gallego, Uriel Zapata, Fanny L. Casado
Abstract:
Compliant mechanisms are designed to deform in a controlled manner in response to external forces, utilizing the flexibility of their components to store potential elastic energy during deformation and gradually release it upon returning to the original form. This article explores the design of a knee orthosis intended to assist users during the stand-up motion. The orthosis makes use of a compliant mechanism to balance the user’s weight, thereby minimizing the strain on leg muscles during the stand-up motion. The primary function of the compliant mechanism is to store and exchange potential energy, so that when coupled with the gravitational potential of the user, the total potential energy variation is minimized. The design process for the semi-rigid knee orthosis involved material selection and the development of a numerical model of the compliant mechanism seen as a spring. Geometric properties were obtained through the numerical modeling of the spring once the desired stiffness and safety factor values had been attained. Subsequently, a 3D finite element analysis was conducted. The study demonstrates a strong correlation between the maximum stress in the mathematical model (250.22 MPa) and in the simulation (239.8 MPa), with a 4.16% error. The safety factors, 1.02 for the mathematical approach and 1.1 for the simulation, show a consistent 7.84% margin of error. The spring’s stiffness, calculated at 90.82 Nm/rad analytically and 85.71 Nm/rad in the simulation, exhibits a 5.62% difference. These results suggest significant potential for the proposed device in assisting patients with knee orthopedic restrictions, contributing to ongoing efforts in advancing the understanding and treatment of knee osteoarthritis.
Keywords: biomechanics, compliant mechanisms, gonarthrosis, orthoses
Procedia PDF Downloads 39
9558 Sexual Orientation, Household Labour Division and the Motherhood Wage Penalty
Authors: Julia Hoefer Martí
Abstract:
While research has consistently found a significant motherhood wage penalty for heterosexual women, where homosexual women are concerned, the evidence has appeared to suggest no effect, or possibly even a wage bonus. This paper presents a model of the household with a public good that requires both a monetary expense and a labour investment, and where the household budget is shared between partners. Lower-wage partners will do relatively more of the household labour while higher-wage partners will specialise in market labour, and the arrival of a child exacerbates this split, resulting in the lower-wage partner taking on even more of the household labour in relative terms. Employers take this gender-sexuality dyad as a signal of employees’ commitment to the labour market after having a child and use the information when setting wages after employees become parents. Given that women empirically earn lower wages than men, in a heterosexual couple the female partner will often do more of the household labour. However, as not every female partner has a lower wage, this results in an over-adjustment of wages that manifests as an unexplained motherhood wage penalty. On the other hand, in homosexual couples wage distributions are ex ante identical, and gender is no longer a useful signal to employers as to whether the partner is likely to specialise in household labour or market labour. This model is then tested using longitudinal data from the EU Statistics on Income and Living Conditions (EU-SILC) to investigate the hypothesis that women experience different wage effects of motherhood depending on their sexual orientation. While heterosexual women receive a significant motherhood wage penalty of 8-10%, homosexual mothers do not receive any significant wage bonus or penalty of motherhood, consistent with the hypothesis presented above.
Keywords: discrimination, gender, motherhood, sexual orientation, labor economics
Procedia PDF Downloads 167
9557 Relationships between Emotion Regulation Strategies and Well-Being Outcomes among the Elderly and Their Caregivers: A Dyadic Modeling Approach
Authors: Sakkaphat T. Ngamake, Arunya Tuicomepee, Panrapee Suttiwan, Rewadee Watakakosol, Sompoch Iamsupasit
Abstract:
Generally, 'positive' emotion regulation strategies such as cognitive reappraisal have been linked to desirable outcomes, while 'negative' strategies such as behavioral suppression have been linked to undesirable outcomes. These trends have been found in both the elderly and professional practitioners. Hence, this study sought to investigate these trends further by examining the relationships between two dominant emotion regulation strategies in the literature (i.e., cognitive reappraisal and behavioral suppression) and well-being outcomes among the elderly (i.e., successful aging) and their caregivers (i.e., satisfaction with life), using the actor-partner interdependence model. A total of 150 elderly-caregiver dyads participated in the study. The elderly responded to two measures assessing the two emotion regulation strategies and successful aging, while their caregivers responded to the same emotion regulation measure and a measure of satisfaction with life. The two criterion variables (i.e., successful aging and satisfaction with life) were specified as latent variables, whereas the four predictors (i.e., the two strategies for the elderly and the two strategies for their caregivers) were specified as observed variables in the model. Results showed that, for the actor effect, the cognitive reappraisal strategy yielded positive relationships with the well-being outcomes for both the elderly and their caregivers. For the partner effect, a positive relationship between the caregivers' cognitive reappraisal strategy and the elderly's successful aging was observed. The behavioral suppression strategy was not related to any well-being outcomes, within or across individual agents. This study has contributed to the literature by empirically showing that the mental activity of the elderly's immediate environment, such as their family members or close friends, could affect their quality of life.
Keywords: emotion regulation, caregiver, older adult, well-being
Procedia PDF Downloads 427
9556 Study Employed a Computer Model and Satellite Remote Sensing to Evaluate the Temporal and Spatial Distribution of Snow in the Western Hindu Kush Region of Afghanistan
Authors: Noori Shafiqullah
Abstract:
Millions of people reside downstream of river basins that heavily rely on snowmelt originating from the Hindu Kush (HK) region; snowmelt plays a critical role as a primary water source in these areas. This study aimed to evaluate snowfall and snowmelt characteristics in the HK region across altitudes ranging from 2019 m to 4533 m. To achieve this, the study employed a combination of remote sensing techniques and a Snow Model (SM) to analyze the spatial and temporal distribution of Snow Water Equivalent (SWE). By comparing the simulated Snow-cover Area (SCA) with data from the Moderate Resolution Imaging Spectroradiometer (MODIS), the study optimized the Precipitation Gradient (PG) for the snowfall assessment and the degree-day factor (DDF) for the snowmelt distribution. Ground-observed data from various elevations were used to calculate a temperature lapse rate of -7.0 °C km⁻¹. The DDF value was determined as 3 mm °C⁻¹ d⁻¹ for altitudes below 3000 m and 3 to 4 mm °C⁻¹ d⁻¹ for altitudes above 3000 m. Moreover, the distribution of precipitation varies with elevation, with the PG being 0.001 m⁻¹ at elevations below 4000 m and 0 m⁻¹ at elevations above 4000 m. Using the two optimized parameters, the study successfully utilized the SM to assess SCA and SWE. The analysis of the simulated SCA against MODIS data yielded coefficients of determination (R²) of 0.95 and 0.97 over the years 2014-2015, 2015-2016, and 2016-2017. These results demonstrate that the SM is a valuable tool for managing water resources in mountainous watersheds such as the HK, where data scarcity poses a challenge.
Keywords: improved MODIS, experiment, snow water equivalent, snowmelt
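For illustration, the degree-day melt relation with the calibrated values quoted above can be sketched as follows; the station inputs in the example are invented.

```python
# Hedged sketch of the degree-day snowmelt relation used in models of this
# kind: melt M = DDF * max(T - T0, 0), with the DDF elevation-dependent as
# in the calibration above and temperature lapsed to the cell elevation.
def daily_melt_mm(t_station_c, station_elev_m, cell_elev_m,
                  lapse_c_per_km=-7.0, t_base_c=0.0):
    # Lapse the station temperature to the grid-cell elevation.
    t_cell = t_station_c + lapse_c_per_km * (cell_elev_m - station_elev_m) / 1000.0
    # Degree-day factor: 3 mm/C/day below 3000 m, 4 mm/C/day above (calibrated).
    ddf = 3.0 if cell_elev_m < 3000.0 else 4.0
    return ddf * max(t_cell - t_base_c, 0.0)

# Example: 12 C at a 2019 m station, melt estimated for a 3500 m cell.
print(f"{daily_melt_mm(12.0, 2019.0, 3500.0):.1f} mm/day")
```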
Procedia PDF Downloads 71
9555 Microstructure Evolution and Modelling of Shear Forming
Authors: Karla D. Vazquez-Valdez, Bradley P. Wynne
Abstract:
In the last decades, manufacturing needs have been changing, leading to the study of manufacturing methods that were previously underdeveloped, such as incremental forming processes like shear forming. These processes use rotating tools in constant local contact with the workpiece, which is often also rotating, to generate shape. This means much lower loads to forge large parts and no need for expensive special tooling. Potential has already been established by demonstrating the manufacture of high-value products, e.g., turbine and satellite parts, with high dimensional accuracy from difficult-to-manufacture materials. Thus, huge opportunities exist for these processes to replace the current methods of manufacture for a range of high-value components, e.g., eliminating lengthy machining, reducing material waste and process times, or manufacturing a complicated shape without the development of expensive tooling. However, little is known about the exact deformation conditions during processing and why certain materials are better suited than others for shear forming, leading to a lot of trial and error before production. Three alloys were used for this study: Ti-54M, Jethete M154, and IN718. General microscopy and electron backscatter diffraction (EBSD) were used to measure strains and orientation maps during shear forming. A Design of Experiments (DOE) analysis was also made in order to understand the impact of process parameters on the properties of the final workpieces. Such information was the key to developing a reliable Finite Element Method (FEM) model that closely resembles the deformation paths of this process. Finally, the potential of these three materials to be shear spun was studied using the FEM model and their Forming Limit Diagrams (FLD), which led to the development of a rough methodology for testing the shear spinnability of various metals.
Keywords: shear forming, damage, principal strains, forming limit diagram
Procedia PDF Downloads 165
9554 The Effect of Data Integration to the Smart City
Authors: Richard Byrne, Emma Mulliner
Abstract:
Smart cities are a vision for the future that is increasingly becoming a reality. While a key concept of the smart city is the ability to capture, communicate, and process data that has long been produced through the day-to-day activities of the city, many of the assessment models in place neglect this fact and focus instead on ‘smartness’ concepts. Although it is true that technology often provides the opportunity to capture and communicate data in more effective ways, there are also human processes involved that are just as important. The growing importance of the use and ownership of data in society is plain to see, with companies such as Facebook and Google increasingly coming under the microscope; why, then, is the same scrutiny not applied to cities? The research area is therefore of great importance to our cities here and now, and the findings will be of just as great importance to our children in the future. This research aims to understand the influence data is having on organisations operating throughout the smart cities sector and employs a mixed-method research approach to answer the following question: Would a data-based evaluation model for smart cities be more appropriate than a smart-based model in assessing the development of the smart city? A comprehensive literature review concluded that there was a requirement for a data-driven assessment model for smart cities. This was followed by a documentary analysis to understand the root source of data integration in the smart city. A content analysis of city data platforms enquired into the alternative approaches employed by cities throughout the UK and draws on best practice from New York to compare and contrast. Grounded in theory, the research findings to this point formulated a qualitative analysis framework comprising: the changing environment influenced by data, the value of data in the smart city, the data ecosystem of the smart city, and the organisational response to the data-orientated environment. The framework was applied to analyse primary data collected through interviews with both public and private organisations operating throughout the smart cities sector. The work to date represents the first stage of data collection and will be built upon by a quantitative investigation into the feasibility of data network effects in the smart city. An analysis of the benefits of data interoperability in supporting services to the smart city in the areas of health and transport will conclude the research, achieving the aim of inductively forming a framework that can be applied to future smart city policy. To conclude, the research recognises the influence of technological perspectives in the development of smart cities to date and highlights the challenge of introducing theory applied with a planning dimension. The primary researcher has drawn on their experience working in the public sector throughout the investigation to reflect on what is perceived as a gap in practice between where we are today and where we need to be tomorrow.
Keywords: data, planning, policy development, smart cities
Procedia PDF Downloads 312
9553 Fluid-Structure Interaction Analysis of a Vertical Axis Wind Turbine Blade Made with Natural Fiber Based Composite Material
Authors: Ivan D. Ortega, Juan D. Castro, Alberto Pertuz, Manuel Martinez
Abstract:
One of the problems considered when scientists discuss climate change is the need to utilize renewable sources of energy. In this category there are many approaches, one of which is wind energy and wind turbines, whose designs have changed frequently over the years in pursuit of better overall performance under different conditions. From that situation we get the two main types known today: vertical and horizontal axis wind turbines, abbreviated VAWT and HAWT, respectively. This research aims to understand how well suited a composite material still in development, made with fibers of natural origin, is for implementation in vertical axis wind turbine blades under certain wind loads. The study consisted of acquiring the mechanical properties of the materials to be used, which were Bactris guineensis, also known as palma de lata in Colombia, and an adhesive acting as the matrix, which had not previously been studied to the extent required for this project. Then, a simplified 3D model of the airfoil was developed and tested under preliminary loads using finite element analysis (FEA); these loads were acquired in the Colombian Chicamocha Canyon. Afterwards, a more realistic pressure profile was obtained using computational fluid dynamics (CFD), which took into account the 3D shape of the complete blade and its rotation. Finally, the blade model was subjected to the wind loads using one-way fluid-structure interaction (FSI) and its behavior analyzed to draw conclusions. The overall results were positive, since the material behaved largely as expected. The data suggest the material could be genuinely useful in this kind of application in small to medium size turbines if it is given more attention and development time.
Keywords: CFD, FEA, FSI, natural fiber, VAWT
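One-way FSI of the kind described here typically hands a converged CFD surface pressure field to the structural solver as a load. The sketch below interpolates pressures from a CFD point cloud onto non-matching FEA surface nodes; all geometry and pressure values are synthetic placeholders, and SciPy's griddata is assumed as the interpolation tool rather than anything the authors report using.

```python
# Sketch of a one-way FSI hand-off: pressures sampled on the CFD surface mesh
# are interpolated onto the (non-matching) FEA mesh nodes. Synthetic data only.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
cfd_pts = rng.uniform(0.0, 1.0, size=(500, 2))    # (x, y) on blade surface, CFD side
cfd_p = 100.0 * np.sin(np.pi * cfd_pts[:, 0])     # stand-in pressure field [Pa]

fea_nodes = rng.uniform(0.0, 1.0, size=(200, 2))  # FEA surface node coordinates

# Linear interpolation, with nearest-neighbour fallback near the mesh boundary
# where linear interpolation returns NaN.
p_fea = griddata(cfd_pts, cfd_p, fea_nodes, method="linear")
fallback = griddata(cfd_pts, cfd_p, fea_nodes, method="nearest")
p_fea = np.where(np.isnan(p_fea), fallback, p_fea)

print(f"mapped pressures: min={p_fea.min():.1f} Pa, max={p_fea.max():.1f} Pa")
```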
Procedia PDF Downloads 229
9552 Enhancing Inservice Education Training Effectiveness Using a Mobile Based E-Learning Model
Authors: Richard Patrick Kabuye
Abstract:
This study focuses on enhancing in-service training programs by transforming the existing traditional approach of formal lectures/contact hours, supporting it with a more versatile, robust, and remotely accessible mobile-based e-learning tool. A combination of various factors in education, together with the incorporation of an e-learning strategy, proves to be a key factor in effective in-service education. These factors need to be considered so as to maintain a credible co-existence of the programs with the prevailing social, economic, and political environments. Effective in-service education focuses on the immediate transformation of knowledge into practice over a sustained period, active participation of attendees, and enabling pre-training planning, in-training assessment, and post-training feedback analysis, which gives trainers insight into how the knowledge delivered is applied. All of the above require a robust approach to be implemented successfully. Incorporating mobile technology into e-learning enables these elements to be brought together more coherently, since participants otherwise have to take time off their duties to attend training programs. Making the training mobile saves considerable time: participants can follow modules while away from lecture rooms, receive continuous program updates after completing the program, send feedback to instructors on knowledge gaps, and contribute to a conclusive evaluation of the entire program on a learn-as-you-work platform. This study will follow both qualitative and quantitative approaches to data collection, compounded by incorporating a mobile e-learning application built on Android.
Keywords: in-service training, mobile, e-learning, model
Procedia PDF Downloads 221
9551 Design, Development, and Implementation of the Pediatric Physical Therapy Senior Clinical Internship Telerehabilitation Program of de la Salle Medical and Health Sciences Institute: The Pandemic Impetus
Authors: Ma. Cecilia D. Licuan
Abstract:
The pandemic continues to affect the lives of many people globally, including children with disabilities and their families, especially in developing countries like the Philippines. The operations of health programs, industries, and economic sectors, as well as academic training institutions, remain challenged in their operations and delivery of services. The academic community of the Physical Therapy program is not spared by this circumstance. The restrictions imposed by quarantine policies nearly terminated the onsite delivery of training programs at the senior internship level, challenging academic institutions to implement flexible learning programs that ensure continuity of instruction and learning with full consideration of safety and compliance with health protocols. This study aimed to develop a benchmark model that tertiary-level health institutions can use to implement a Pediatric Senior Clinical Internship Training Program using telerehabilitation. It is a descriptive-qualitative paper that utilized documentary analysis and focused on explaining the design, development, and implementation processes used by the De La Salle Medical and Health Sciences Institute – College of Rehabilitation Sciences (DLSMHSI-CRS) Physical Therapy Department in its Pediatric Cluster Senior Clinical Internship Training Program, covering the pandemic years from the academic year 2020-2021 to the present, anchored on a needs analysis based on documentary reviews. The study yielded the Pediatric Telerehabilitation Model; a declaration of the developed training program's outcomes, thrusts, and content; an explanation of the pedagogy integral to the program's implementation; and the evaluation procedures conducted for the program. Since the study did not involve human participants, ethical considerations on the use of documents for review were addressed upon the endorsement of the management of DLSMHSI-CRS to conduct the study. This paper presents the big picture of how a tertiary-level health sciences institution in the Philippines embraced the senior clinical internship challenges through the operations of its telerehabilitation program. It specifically presents the design, development, and implementation processes used by the De La Salle Medical and Health Sciences Institute – College of Rehabilitation Sciences Physical Therapy Department in its Pediatric Cluster Senior Clinical Internship Training Program, which can serve as a benchmark model for other institutions as they continue to serve their stakeholders amidst the pandemic.
Keywords: pediatric physical therapy, telerehabilitation, clinical internship, pandemic
Procedia PDF Downloads 129
9550 Pyrolysis of Dursunbey Lignite and Pyrolysis Kinetics
Abstract:
In this study, the pyrolysis characteristics of Dursunbey-Balıkesir lignite and its pyrolysis kinetics are examined. Pyrolysis experiments at three different heating rates are performed using the thermogravimetric method. Kinetic parameters are calculated with the Coats & Redfern kinetic model, and the degree of pyrolysis is determined for each heating rate.
Keywords: lignite, thermogravimetric analysis, pyrolysis, kinetics
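For readers unfamiliar with the Coats & Redfern method: it linearizes the integral rate law so that, for a first-order mechanism g(α) = -ln(1 - α), a plot of ln[g(α)/T²] against 1/T yields the activation energy from the slope and the pre-exponential factor from the intercept via ln[g(α)/T²] ≈ ln(AR/βE) − E/(RT). The sketch below fits synthetic TGA points under that first-order assumption; the conversion data and heating rate are illustrative, not the study's measurements.

```python
# Coats-Redfern fit sketch for the first-order case g(a) = -ln(1 - a):
# ln[g(a)/T^2] = ln(A*R/(beta*E)) - E/(R*T). TGA points below are synthetic.
import numpy as np

R = 8.314           # gas constant, J mol^-1 K^-1
beta = 10.0 / 60.0  # heating rate, K s^-1 (10 K/min, assumed)

# Hypothetical conversion (alpha) vs temperature (K) from one TGA run.
T = np.array([550.0, 600.0, 650.0, 700.0, 750.0])
alpha = np.array([0.05, 0.15, 0.35, 0.60, 0.82])

y = np.log(-np.log(1.0 - alpha) / T**2)  # ln[g(alpha)/T^2]
x = 1.0 / T

slope, intercept = np.polyfit(x, y, 1)
E = -slope * R                       # activation energy, J/mol (slope = -E/R)
A = beta * E / R * np.exp(intercept)  # pre-exponential factor, s^-1

print(f"E = {E/1000:.1f} kJ/mol, A = {A:.3e} 1/s")
```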
Procedia PDF Downloads 367
9549 Kinematic Gait Analysis Is a Non-Invasive, More Objective and Earlier Measurement of Impairment in the Mdx Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is caused by an X-linked mutation in the dystrophin gene; lack of dystrophin causes progressive muscle necrosis, which leads to a progressive decrease in mobility in those suffering from the disease. The MDX mouse, a mutant model which displays a frank dystrophinopathy, is currently widely employed in preclinical efficacy models for treatments and therapies aimed at DMD. In general, the endpoints examined within this model have been based on invasive histopathology of muscles and serum biochemical measures such as serum creatine kinase (sCK). It is established that a “critical period” exists in the MDX mouse between 4 and 6 weeks, when there is extensive muscle damage that is largely subclinical but evident in sCK measurements and histopathological staining. However, a full characterization of the MDX model remains largely incomplete, especially with respect to the ability to aggravate the muscle damage beyond the critical period. The purpose of this study was to aggravate the muscle damage in the MDX mouse and thereby create a wider, more readily translatable and discernible therapeutic window for testing potential DMD therapies. The study subjected 15 male mutant MDX mice and 15 male wild-type mice to an intense chronic exercise regime consisting of twice-weekly treadmill sessions over a 12-month period. Each session lasted 30 minutes, and the treadmill speed was gradually built up to 14 m/min for the entire session. Baseline plasma creatine kinase (pCK), treadmill training performance, and locomotor activity were measured after the “critical period” at around 10 weeks of age, and again at 14 weeks, 6 months, 9 months, and 12 months of age. In addition, kinematic gait analysis was employed, using a novel analysis algorithm, to compare changes in gait and fine motor skills in diseased exercised MDX mice against exercised wild-type mice and non-exercised MDX mice. A morphological and metabolic profile (including a lipid profile) of the most severely affected muscles, the gastrocnemius and tibialis anterior, was also measured at the same time intervals. The results indicate that aggravating the underlying muscle damage in the MDX mouse by exercise brings a more pronounced and severe phenotype to light, and that this can be detected earlier by kinematic gait analysis. A reduction in mobility as measured by open field testing is not apparent at younger ages or during the critical period, but changes in gait are apparent in the mutant MDX mice. These gait changes coincide with pronounced morphological and metabolic changes, detected by non-invasive anatomical MRI and proton spectroscopy (1H-MRS), that we have reported elsewhere. Evidence of a progressive asymmetric pathology was found in the imaging parameters as well as in the kinematic gait analysis. Taken together, the data show that a chronic exercise regime exacerbates the muscle damage beyond the critical period, and that the ability to measure this through non-invasive means is an important factor when designing preclinical efficacy studies in the MDX mouse.
Keywords: gait, muscular dystrophy, kinematic analysis, neuromuscular disease
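The study's gait algorithm is not specified, but a common kinematic readout for the asymmetry described above is a left/right stride-length symmetry index. The sketch below computes one from tracked paw-strike coordinates; the data and the index definition are illustrative assumptions, not the authors' method.

```python
# Hypothetical example: stride-length asymmetry from tracked hind-paw strike
# positions (cm along the direction of travel). Not the study's algorithm.
import numpy as np

left_strikes = np.array([0.0, 6.1, 12.3, 18.2, 24.4])   # left hind paw
right_strikes = np.array([0.5, 6.2, 11.6, 16.8, 22.1])  # right hind paw

left_stride = np.diff(left_strikes).mean()    # mean stride length, left side
right_stride = np.diff(right_strikes).mean()  # mean stride length, right side

# Symmetry index: 0 for perfect symmetry, growing in magnitude with asymmetry.
si = 2.0 * (left_stride - right_stride) / (left_stride + right_stride)
print(f"left {left_stride:.2f} cm, right {right_stride:.2f} cm, SI = {si:+.3f}")
```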
Procedia PDF Downloads 277
9548 Study on Aerosol Behavior in Piping Assembly under Varying Flow Conditions
Authors: Anubhav Kumar Dwivedi, Arshad Khan, S. N. Tripathi, Manish Joshi, Gaurav Mishra, Dinesh Nath, Naveen Tiwari, B. K. Sapra
Abstract:
In a nuclear reactor accident scenario, a large number of fission products may be released into the piping system of the primary heat transport circuit. The released fission products, mostly in aerosol form, deposit on the inner surface of the piping system, mainly through gravitational settling and thermophoretic deposition. The removal processes in the complex piping system are controlled to a large extent by thermal-hydraulic conditions such as temperature, pressure, and flow rate. These parameters generally vary with time and must therefore be carefully monitored to predict aerosol behavior in the piping system. The removal process depends on particle size, which determines how many particles deposit or travel across the bends and reach the other end of the piping system. The released aerosol deposits onto the inner surface through various mechanisms, including gravitational settling, Brownian diffusion, and thermophoretic deposition. To quantify deposition correctly, identifying and understanding these mechanisms is of great importance, as they are significantly affected by the flow and thermodynamic conditions; thermophoresis in particular plays a significant role in particle deposition. In the present study, a series of experiments was performed in the piping system of the National Aerosol Test Facility (NATF), BARC, using metal (zinc) aerosols in dry environments to study the spatial distribution of particle mass and number concentration, and their depletion due to the various removal mechanisms in the piping system. The experiments were performed at two different carrier gas flow rates. The commercial CFD software FLUENT is used to determine the distribution of temperature, velocity, pressure, and turbulence quantities in the piping system. In addition to the built-in models for turbulence, heat transfer, and flow in FLUENT, a population balance sub-model (PBM) is used to describe the coagulation process and to compute the number concentration, along with the size distribution, at different sections of the piping; the coagulation kernels are incorporated through a user-defined function (UDF). The experimental results are compared with the CFD results. It is found that the largest share of the Zn particles (more than 35%) deposits near the inlet of the plenum chamber, with low deposition in the piping sections. The mass median aerodynamic diameter (MMAD) decreases along the length of the test assembly, showing that large particles deposit or are removed in the course of the flow and only fine particles travel to the end of the piping system. The effect of bends is also observed: the relative loss in mass concentration at bends is greater at the higher flow rate. The simulation results show that thermophoretic and depositional effects are more dominant for the small and large sizes than for intermediate particle sizes. SEM and XRD analyses of the collected samples show that they are highly agglomerated, non-spherical, and composed mainly of ZnO. The coupled model framed in this work could be used as an important tool for predicting the size distribution and concentration of other aerosols released during a reactor accident scenario.
Keywords: aerosol, CFD, deposition, coagulation
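The coagulation physics handled by the PBM/UDF coupling is, at its simplest, the discrete Smoluchowski balance: a size class gains particles from collisions of smaller classes and loses them by collision with any class. The sketch below integrates that balance with a constant kernel and explicit Euler stepping; the kernel value, bin count, and initial distribution are illustrative assumptions, not the study's inputs.

```python
# Minimal sectional population balance for coagulation (discrete Smoluchowski
# equation, constant kernel). Illustrative stand-in for the FLUENT PBM/UDF setup.
import numpy as np

n_bins = 20
N = np.zeros(n_bins)
N[0] = 1.0e12             # number concentration in the smallest class [m^-3]
K = 1.0e-15               # constant coagulation kernel [m^3 s^-1] (assumed)
dt, steps = 1.0e-3, 1000  # time step [s] and number of steps

for _ in range(steps):
    dN = np.zeros(n_bins)
    for i in range(n_bins):
        # Birth: collisions of classes j and i-1-j form class i (discrete sizes,
        # class index i holds agglomerates of i+1 primary particles).
        for j in range(i):
            dN[i] += 0.5 * K * N[j] * N[i - 1 - j]
        # Death: class i is lost by collision with any class.
        dN[i] -= K * N[i] * N.sum()
    N += dt * dN

print(f"total number conc. after {steps * dt:.1f} s: {N.sum():.3e} m^-3")
```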
Procedia PDF Downloads 145
9547 Improving Patient-Care Services at an Oncology Center with a Flexible Adaptive Scheduling Procedure
Authors: P. Hooshangitabrizi, I. Contreras, N. Bhuiyan
Abstract:
This work presents an online scheduling problem which accommodates multiple requests of patients for chemotherapy treatments at a cancer center of a major metropolitan hospital in Canada. To solve the problem, an adaptive flexible approach is proposed which systematically combines two optimization models. The first model dynamically schedules arriving requests from the waiting list, whereas the second reschedules already booked patients with the goal of finding better resource allocations when new information becomes available. Both models are formulated as mixed integer programs. Various controllable and flexible parameters, such as allowing the prescribed target dates to deviate within a pre-determined threshold, changing the start times of already booked appointments, and capping the number of appointments that may be moved in the schedule, are included in the proposed approach to provide sufficient flexibility in handling arriving requests and unexpected changes. Several computational experiments are conducted to evaluate the performance of the proposed approach using historical data provided by the oncology clinic. Our approach achieves markedly better results than the scheduling system currently used in practice. Moreover, several analyses evaluate the effect of different levels of flexibility on the results obtained and assess the performance of the approach in dealing with last-minute changes. We believe the proposed flexible adaptive approach is well suited for implementation at the clinic to provide better patient-care services and to utilize available resources more efficiently.
Keywords: chemotherapy scheduling, multi-appointment modeling, optimization of resources, satisfaction of patients, mixed integer programming
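The booking model can be illustrated with a toy mixed integer program: binary variables assign each arriving request to a chair-day within a target-date threshold, subject to daily capacity, minimizing total deviation from the prescribed dates. The sketch below uses the PuLP library as an assumed modeling tool; the data, threshold, and capacity are invented for illustration, and the formulation is far simpler than the clinic's.

```python
# Toy version of the booking model: assign each request to one chair-day slot
# within +/- threshold of its target day, minimizing total deviation.
import pulp

requests = {"p1": 2, "p2": 3, "p3": 2}  # patient -> prescribed target day (hypothetical)
days, chairs_per_day, threshold = range(1, 6), 2, 1

m = pulp.LpProblem("booking", pulp.LpMinimize)
x = {(p, d): pulp.LpVariable(f"x_{p}_{d}", cat="Binary")
     for p in requests for d in days
     if abs(d - requests[p]) <= threshold}  # flexibility window around target

# Objective: total absolute deviation of booked days from target days.
m += pulp.lpSum(abs(d - requests[p]) * var for (p, d), var in x.items())

# Each patient is booked exactly once; daily chair capacity is respected.
for p in requests:
    m += pulp.lpSum(x[p, d] for d in days if (p, d) in x) == 1
for d in days:
    m += pulp.lpSum(x[p, d] for p in requests if (p, d) in x) <= chairs_per_day

m.solve(pulp.PULP_CBC_CMD(msg=False))
for (p, d), var in x.items():
    if var.value() and var.value() > 0.5:
        print(f"{p} booked on day {d} (target {requests[p]})")
```

In the full approach described above, a second model of the same flavor would re-optimize already booked appointments as new requests arrive, with additional constraints capping how many bookings may move.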
Procedia PDF Downloads 171