Search results for: risk banking technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13384

934 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known as the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g., laser) field, can radiate continuously at a much lower (e.g., terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are also discussed.
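
For orientation, the mechanism hinges on the form of the dipole operator in the driven two-level Hamiltonian. The following semiclassical sketch uses illustrative notation of our own, not taken from the paper itself:

```latex
% Two-level 'atom' driven by a classical monochromatic field E(t) = E_0 \cos(\omega t).
% Contrary to the standard quantum-optics convention, the dipole operator keeps
% permanent, non-equal diagonal matrix elements d_{11} \neq d_{22}.
\begin{aligned}
H(t) &= \tfrac{\hbar\omega_0}{2}\,\sigma_z \;-\; \hat{d}\,E(t),\\
\hat{d} &= d_{11}\,|1\rangle\langle 1| + d_{22}\,|2\rangle\langle 2|
         + d_{12}\bigl(|1\rangle\langle 2| + |2\rangle\langle 1|\bigr),
\qquad d_{11} \neq d_{22}.
\end{aligned}
```

In the inversion-symmetric case d_{11} = d_{22}, the claimed low-frequency emission channel is absent, which is why the effect is tied to 'artificial atoms' such as quantum dots rather than natural atoms.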

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 265
933 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing on an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, applicable to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear, evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication: the previous "general and unknown audience" of personal home pages gave way to an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed fundamentally. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advent of Web 2.0, call for reconsidering and updating the theoretical models and analysis tools we use to comprehend the mechanisms of interpersonal communication and personal information management. A new model is therefore proposed here for understanding how interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 364
932 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm

Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal

Abstract:

The application of silver nanoparticles (AgNPs) in personal care products and various household and industrial products has resulted in an inevitable environmental exposure to such engineered nanoparticles (ENPs). AgNPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet the fate and transport of ENPs in WRRFs and their potential risk in biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on the biological activity of aerobic wastewater biofilm. The fate, transport, and toxicity of 10 μg.L-1 and 100 μg.L-1 PVP-stabilized AgNPs (50 nm) were evaluated in an attached-growth biological treatment process, using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and a 60% volumetric filling ratio of Anox-K5 carriers with a specific surface area of 800 m2/m3. Both reactors were operated for 85 days after reaching steady-state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of the filtered biodegradable COD (SCOD) removal efficiency, biofilm viability, and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectrometry (spICP-MS), determining simultaneously the particle size distribution, particle concentration, and dissolved silver content in influent, bioreactor, and effluent samples. The generation of reactive oxygen species (ROS) and the resulting oxidative stress were assessed as the proposed toxicity mechanism of AgNPs. Results indicated that a low concentration of AgNPs (10 μg.L-1) did not significantly affect the SCOD removal efficiency, whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg.L-1 AgNPs. Neither the viability nor the enzymatic activities of the biofilm were affected at 10 μg.L-1 AgNPs, but the higher concentration of AgNPs induced cell membrane integrity damage, resulting in a 31% loss of viability and reductions in α-glucosidase and protease enzymatic activities of 31% and 29%, respectively, over the 64-day exposure period. The elevated intracellular ROS in the biofilm at the higher AgNPs concentration over time was consistent with the reduced biological performance of the biofilm, confirming the occurrence of nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in nanoparticle concentration over the first 25 days, indicating significant partitioning of AgNPs into the biofilm matrix in both reactors. After 25 days, however, the nanoparticle concentration in the effluent of both reactors increased, indicating a decreased retention capacity of AgNPs in the biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles, due to the cell-wall-destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biological responses of the biofilm were a function of nanoparticle concentration and exposure time. This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.
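
As background to the spICP-MS characterization mentioned above, the core single-particle calculation converts each detected pulse into a spherical-equivalent particle diameter. The sketch below is a minimal illustration: the detector calibration factor and pulse value are hypothetical, with only the bulk density of silver taken as a known constant.

```python
# Minimal sketch: spICP-MS pulse intensity -> particle mass -> spherical diameter.
# counts_per_ng is a hypothetical detector calibration; 10.49 g/cm3 is the
# bulk density of silver.
import math

def particle_diameter_nm(pulse_counts: float,
                         counts_per_ng: float = 7.3e8,
                         density_g_cm3: float = 10.49) -> float:
    """Convert one particle event's integrated counts to an equivalent diameter."""
    mass_g = (pulse_counts / counts_per_ng) * 1e-9   # counts -> ng -> g
    volume_cm3 = mass_g / density_g_cm3
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1e7                                 # cm -> nm

print(f"{particle_diameter_nm(500):.0f} nm")          # ~50 nm for this calibration
```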

Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater

Procedia PDF Downloads 266
931 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, great potential for reducing the necessary quality-control effort can be exploited via the data-based prediction of product quality and states. However, the use of machine learning applications in series production is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data situation makes more difficult. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Notably, classification proves clearly superior to regression and achieves promising accuracies.
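
To make the regression-versus-classification comparison concrete, here is a minimal sketch of the two framings on synthetic data; the features, leakage model, acceptance limit, and random-forest choice are illustrative assumptions of ours, not the authors' actual pipeline:

```python
# Minimal sketch: comparing regression vs. classification framings for
# leakage-based pass/fail prediction. Data, threshold, and model choice
# are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))              # process features along the value chain
leakage = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=1000)
LIMIT = 1.0                                  # hypothetical leakage acceptance limit
y_class = (leakage > LIMIT).astype(int)      # inspection decision: 1 = reject

X_tr, X_te, leak_tr, leak_te, y_tr, y_te = train_test_split(
    X, leakage, y_class, random_state=0)

# Framing 1: regress the leakage volume flow, then threshold the prediction.
reg = RandomForestRegressor(random_state=0).fit(X_tr, leak_tr)
acc_reg = accuracy_score(y_te, reg.predict(X_te) > LIMIT)

# Framing 2: classify the inspection decision directly.
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc_clf = accuracy_score(y_te, clf.predict(X_te))

print(f"regression-then-threshold accuracy: {acc_reg:.3f}")
print(f"direct classification accuracy:     {acc_clf:.3f}")
```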

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 137
930 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Case Study of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study assesses the classification performance of two satellite datasets and evaluates the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented here not only enables the rapid creation of precise land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources to achieve superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
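
The workflow described above can be sketched in the GEE Python API roughly as follows; the area geometry, band selection, cloud filter, and the training-point asset are illustrative assumptions (the study's actual training data are not reproduced here):

```python
# Hedged sketch of the described GEE workflow (Sentinel-2 branch).
# Assumes prior authentication (ee.Authenticate). Asset IDs, the AOI
# rectangle, and the 'landcover' property are hypothetical placeholders.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])   # placeholder for the Beterou catchment

# Median composite over the study period, with a simple cloud filter.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterBounds(aoi)
      .filterDate('2020-06-01', '2021-03-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())

# Terrain features (slope, elevation) that improved accuracy in the study.
dem = ee.Image('USGS/SRTMGL1_003')
terrain = ee.Terrain.products(dem).select(['elevation', 'slope'])

stack = s2.select(['B2', 'B3', 'B4', 'B8', 'B11']).addBands(terrain)

# 'samples' is assumed to be a FeatureCollection of labeled points with a
# 'landcover' property coded 0-4 (forest, savanna, cropland, settlement, water).
samples = ee.FeatureCollection('users/example/beterou_training')  # hypothetical asset
training = stack.sampleRegions(collection=samples,
                               properties=['landcover'], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty='landcover',
    inputProperties=stack.bandNames())

classified = stack.classify(classifier).clip(aoi)
```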

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 58
929 Impact of Pedagogical Techniques on the Teaching of Sports Sciences

Authors: Muhammad Saleem

Abstract:

Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.

Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement

Procedia PDF Downloads 16
928 Evaluation of the Physico-Chemical and Microbial Properties of the Compost Leachate (CL) to Assess Its Role in the Bioremediation of Polyaromatic Hydrocarbons (PAHs)

Authors: Omaima A. Sharaf, Tarek A. Moussa, Said M. Badr El-Din, H. Moawad

Abstract:

Background: Polycyclic aromatic hydrocarbons (PAHs) pose great environmental and human health concerns because of their widespread occurrence, persistence, and carcinogenic properties. PAH releases to the wider environment due to anthropogenic activities have led to higher concentrations of these contaminants than would be expected from natural processes alone. This may result in a wide range of environmental problems, as these contaminants can accumulate in agricultural ecosystems and threaten sustainable agricultural development. Thus, this study aimed to evaluate the physico-chemical and microbial properties of compost leachate (CL) to assess its role as a nutrient and microbial source (biostimulation/bioaugmentation) for developing a cost-effective bioremediation technology for PAH-contaminated sites. Material and Methods: PAH-degrading bacteria were isolated from CL collected from a composting site located in central Scotland, UK. Isolation was carried out by enrichment using phenanthrene (PHR), pyrene (PYR), and benzo(a)pyrene (BaP) as the sole sources of carbon and energy. The isolates were characterized using a variety of phenotypic and molecular properties, and six different isolates were identified based on differences in morphological and biochemical tests. The efficiency of these isolates in PAH utilization was assessed. Further analysis was performed to define the taxonomic status of, and the phylogenetic relations between, the most potent PAH-utilizing bacterial strains and other standard strains, using a molecular approach based on partial 16S rDNA gene sequence analysis. The 16S rDNA sequence analysis confirmed the results of the biochemical identification: both biochemical and molecular identification assigned the isolates to Bacillus licheniformis, Pseudomonas aeruginosa, Alcaligenes faecalis, Serratia marcescens, Enterobacter cloacae, and Providencia, which were identified as the prominent PAH-utilizers isolated from CL. Conclusion: This study indicates that the CL samples contain a diverse population of PAH-degrading bacteria and that the use of CL may have potential for bioremediation of PAH-contaminated sites.

Keywords: polycyclic aromatic hydrocarbons, physico-chemical analyses, compost leachate, microbial and biochemical analyses, phylogenetic relations, 16S rDNA sequence analysis

Procedia PDF Downloads 260
927 Effect of Different Nitrogen Levels on the Vegetative Growth of a Maize Variety (Zea mays)

Authors: Tegene Nigussie

Abstract:

Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation on its origin. Regardless of the validity of different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize in Africa is a recent introduction, although some data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and distributed through the continent by different routes. Maize is an important cereal crop in Ethiopia. In general, it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo), and minerals. About 75 percent of the kernel is starch (within a range of 60-80 percent), but protein content is low (8-15 percent). In Ethiopia, the introduction of modern farming techniques appears to be a priority. However, the adoption of modern inputs by peasant farmers has been very slow; for example, even the adoption rate of fertilizer, a relatively widely adopted input, is very slow. Different socio-economic factors lie behind the low rate of technology adoption, including input prices and marketing. Objective: The objective of this study is to determine the optimum application rate of nitrogen fertilizer for the vegetative growth of maize and to identify the effect of different nitrogen rates on the growth and development of maize. Methods: Above-ground vegetative parameters were measured on five plants randomly sampled from the middle rows of each plot. Results: The interaction of nitrogen and maize variety showed a significant (p<0.01) effect on plant height with the combined application of 60 kg/ha of nitrogen and the BH140 maize variety, and on root length with the same combination. The highest mean number of leaves per plant (12.33) and mean number of nodes per plant (7.1) indicate that this combination can be used as an alternative for better vegetative growth of maize. Conclusion: Maize is one of the most popular and widely cultivated crops in Ethiopia. The study was conducted to investigate the best dosage of nitrogen for vegetative growth, yield, and better quality of the maize variety, and to recommend a nitrogen rate and the variety best adapted to the specific soil condition or area.

Keywords: parameter, chlorosis, germination, flood, sesbania, cultivar

Procedia PDF Downloads 20
926 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási

Abstract:

For the analysis of elements in different food, feed, and food raw material samples, a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES), or an inductively coupled plasma mass spectrometer (ICP-MS) is generally applied. All these analytical instruments suffer from different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse increasingly smaller concentrations of elements. Of the above instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). In CCT mode, certain elements have detection limits better by 1-3 orders of magnitude compared to the normal ICP-MS analytical method. The CCT mode improves detection limits mainly for the analysis of selenium (and also arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) isobaric elemental, 2) isobaric molecular, and 3) physical interferences. When analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrixes having different evaporation and nebulization effectiveness, as well as different carbon contents, across food, feed, and food raw material samples. In our research work, the effects of different water-soluble compounds, and of various carbon contents (as sample matrix), on changes in the intensity of selenium were examined. In this way we could find opportunities to decrease the error of selenium analysis. To analyse selenium in food, feed, and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying the collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium).
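
The internal-standard correction mentioned at the end works by scaling the analyte signal by the recovery of a co-measured element. A minimal sketch follows, with hypothetical count rates; only the principle is taken from the abstract:

```python
# Minimal sketch of internal-standard correction for Se intensities.
# The count rates below are hypothetical illustrations.

def correct_intensity(i_analyte: float, i_is_measured: float,
                      i_is_expected: float) -> float:
    """Divide the analyte intensity by the internal standard's recovery."""
    recovery = i_is_measured / i_is_expected
    return i_analyte / recovery

i_se = 12000.0      # counts/s for Se in a carbon-rich sample
i_te_meas = 6000.0  # counts/s measured for the Te internal standard
i_te_exp = 5000.0   # counts/s expected for Te in a clean standard

# The matrix enhanced both signals by 20%, so the corrected Se intensity
# is 12000 / 1.2 = 10000 counts/s.
print(correct_intensity(i_se, i_te_meas, i_te_exp))
```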

Keywords: selenium, ICP-MS, food, food raw material

Procedia PDF Downloads 503
925 Moodle-Based E-Learning Course Development for Medical Interpreters

Authors: Naoko Ono, Junko Kato

Abstract:

According to the Ministry of Justice, 9,044,000 foreigners visited Japan in 2010, and the number of foreign residents in Japan was over 2,134,000 at the end of 2010. Further, medical tourism has emerged as a new area of business. Against this background, language barriers put the health of foreigners in Japan at risk, because they have difficulty in accessing health care and communicating with medical professionals. Medical interpreting training is urgently needed in response to language problems resulting from the rapid increase in the number of foreign workers in Japan over recent decades. In particular, there is a growing need in medical settings in Japan to speak international languages for communication, with Tokyo selected as the host city of the 2020 Summer Olympics. Due to the limited number of practical activities on medical interpreting, it is difficult for learners to acquire interpreting skills. To address this shortcoming, a web-based English-Japanese medical interpreting training system was developed. We conducted a literature review to identify learning contents and core competencies for medical interpreters, using PubMed, PsycINFO, the Cochrane Library, and Google Scholar. Eleven papers indicating core competencies for medical interpreters were selected through the review. The core competencies abstracted from the literature showed consistency across previous research, while the content of domestic and international training programs for medical interpreters varied. The review indicated five core competencies: (a) maintaining accuracy and completeness; (b) medical terminology and understanding the human body; (c) behaving ethically and making ethical decisions; (d) nonverbal communication skills; and (e) cross-cultural communication skills. A web-based medical interpreter training program covering these competencies was then developed. The program included: an online word list (Quizlet), allowing students to study online and on their smartphones; a self-study tool (Quizlet) for help with dictation and spelling; a word quiz (Quizlet); a test-generating system (Quizlet); an interactive body game (BBC); an online resource for understanding the code of ethics in medical interpreting; a webinar about non-verbal communication; and a webinar about incompetent versus competent cultural care. The design of a virtual environment allows the execution of complementary experimental exercises for learners of medical interpreting and an introduction to the theoretical background of medical interpreting. Since this system adopts a self-learning style, it might overcome the time and teaching-material restrictions of the classroom method. In addition, as a teaching aid, virtual medical interpreting is a powerful resource for understanding how actual medical interpreting can be carried out. The developed e-learning system allows remote access, enabling students to perform exercises at their own place without being physically in the actual laboratory, and empowers students by granting them access during their free time. A practical example will be presented in order to show the capabilities of the system. The developed web-based training program for medical interpreters could bridge the gap between medical professionals and patients with limited English proficiency.

Keywords: e-learning, language education, moodle, medical interpreting

Procedia PDF Downloads 357
924 Ultrasonic Micro Injection Molding: Manufacturing of Micro Plates of Biomaterials

Authors: Ariadna Manresa, Ines Ferrer

Abstract:

Introduction: The ultrasonic moulding process (USM) is a recent injection technology used to manufacture micro components. It is able to melt small amounts of material, so material waste is considerably reduced compared to microinjection molding. This is an important advantage when the materials are expensive, as medical biopolymers are. Micro-scaled components are involved in a variety of uses, including biomedical applications. Replication fidelity is required, so it is important to stabilize the process and minimize the variability of the responses. The aim of this research is to investigate the influence of the main process parameters on the filling behaviour, the dimensional accuracy, and the cavity pressure when a micro-plate is manufactured from biomaterials such as PLA and PCL. Methodology or Experimental Procedure: The specimens are manufactured using a Sonorus 1G Ultrasound Micro Molding Machine. The geometry used is a rectangular micro-plate of 15x5 mm with 1 mm thickness. The materials used for the investigation are PLA and PCL, chosen for their biocompatibility and degradation properties. The experimentation is divided into two phases. In Phase 1, the influence of process parameters (vibration amplitude, sonotrode velocity, ultrasound time, and compaction force) on filling behavior is analysed. In Phase 2, once cavity filling is assured, the influence of both cooling time and compaction force on the cavity pressure, part temperature, and dimensional accuracy is investigated. Results and Discussion: Filling behavior depends on sonotrode velocity and vibration amplitude. When the ultrasonic time is longer, more ultrasonic energy is applied and the polymer temperature increases. Depending on the cooling time, it is possible that when the mold is opened, the micro-plate is still too warm. Consequently, the polymer releases its stored internal energy (ultrasonic and thermal), expanding in the easiest direction. This is reflected in the dimensional accuracy, producing micro-plates thicker than the mold. It has also been observed that the factor most affecting cavity pressure is the compaction configuration during the manufacturing cycle. Conclusions: This research demonstrated the influence of process parameters on the final micro-plates manufactured. Future work will focus on manufacturing other geometries and analysing the mechanical properties of the specimens.

Keywords: biomaterial, biopolymer, micro injection molding, ultrasound

Procedia PDF Downloads 279
923 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As in all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, locations, and projects. The methodology is informed by the design of this framework, taking on the aspects of a systematic review but compressed in time to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting which tasks in the architectural industry will be automated, as well as supporting more informed decisions on automation at multiple levels, from individual choices to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can be recursively split further as required; each task is then assigned a series of milestones that allow objective analysis of its automation progress. Combining these two approaches makes it possible to create a data structure that describes how much of each part of the architectural design process is automated, as sketched below. The data gathered in the paper serves the dual purposes of validating the framework and giving insights into the current state of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new automation milestone in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
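
The recursive breakdown with per-task scoring lends itself to a simple tree data structure. The following is a minimal sketch under assumed, hypothetical task names and automation scores; the paper's actual breakdown, milestones, and data are not reproduced:

```python
# Hedged sketch: a recursive work breakdown of the design process where each
# leaf task carries an automation score, aggregated upward by averaging.
# Task names and scores are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0          # leaf-level fraction automated, 0..1
    subtasks: list["Task"] = field(default_factory=list)

    def automation_fraction(self) -> float:
        """Average automation over leaves, recursing through the breakdown."""
        if not self.subtasks:
            return self.automation
        return sum(t.automation_fraction() for t in self.subtasks) / len(self.subtasks)

design = Task("Design building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", automation=0.6),   # e.g., generative design tools
        Task("Brief analysis", automation=0.2),
    ]),
    Task("Documentation", subtasks=[
        Task("Drawing production", automation=0.7),
        Task("Specification writing", automation=0.3),
    ]),
])

print(f"{design.automation_fraction():.0%} of the process automated")  # -> 45%
```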

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 102
922 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be present in future driving: its immersive technology can show directions and maps, identifying important places with graphic elements when the car driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex activity in which situations commonly occur that require the driver's immediate attention in order to make decisions that help avoid accidents. Therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices, to detect the level of attention in drivers, since it is important to know the effect that augmented reality produces. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG MyoWare are combined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic and the number of pedestrians within the simulation can be controlled, obtaining driver interaction in real time, while an MSP430 microcontroller performs the data acquisition for storage. The sensors produce continuous analog signals that require conditioning: a signal amplifier is incorporated, since the acquired signals have a sensitivity of 1.25 mm/mV, and filtering eliminates unwanted frequency bands so that the signal is interpretable and free of noise before being converted from analog to digital for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement K-NN (k-nearest neighbor) classification methods and decision trees, which enable the study of the data for the identification of patterns and allow the classification methods to determine the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving.
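
The conditioning-then-classification pipeline described above can be sketched as follows; the sampling rate, filter band, window features, and labels are illustrative assumptions of ours, not the project's actual configuration:

```python
# Minimal sketch: band-pass filtering of a raw physiological channel,
# simple per-window feature extraction, and K-NN classification.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.neighbors import KNeighborsClassifier

FS = 250.0  # Hz, assumed sampling rate

def bandpass(x, lo=0.5, hi=40.0, fs=FS, order=4):
    """Keep an assumed physiologically relevant band, discard drift and noise."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def features(x):
    """Toy features per 1-second window: variance and mean absolute amplitude."""
    return [np.var(x), np.mean(np.abs(x))]

rng = np.random.default_rng(1)
# Hypothetical windows: label 1 = driving with AR cues, 0 = without.
windows = [rng.normal(scale=1 + lbl, size=int(FS)) for lbl in (0, 1) * 50]
labels = [0, 1] * 50

X = [features(bandpass(w)) for w in windows]
clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
print(clf.predict([features(bandpass(rng.normal(scale=2, size=int(FS))))]))
```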

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 134
921 Performance of HVOF Sprayed Ni-20Cr and Cr3C2-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal-Fired Boiler

Authors: Tejinder Singh Sidhu

Abstract:

Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes material at an unpredictably rapid rate. Consequently, the load-carrying ability of components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing suitable industrial alloys, and depositing protective coatings. However, the protection system selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times; coatings can add value to products of up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performances of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process have been studied. The coatings were developed on a Fe-based superalloy, and experiments were performed in the actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, each cycle consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature. Both coatings imparted better hot corrosion resistance to the Fe-based superalloy than the uncoated alloy exhibited. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one under the actual working conditions of the coal-fired boiler. It is found that the formation of chromium oxide at the boundaries of the Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.

Keywords: hot corrosion, coating, HVOF, oxidation

Procedia PDF Downloads 77
920 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital Thailand

Authors: Thippharat Wongsilarat, Parichat Tuntilanon, Chonlakan Prataksitorn

Abstract:

Recurrent adverse drug reactions are harmful to patients, with consequences ranging from mild illness to death, and affect not only patients but also their relatives and organizations. The aim was to compare safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentage). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak and strong points in the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidents, respectively, were found. Of these, 379, 279, and 302 incidents, or 69.4, 84.80, and 60.64 percent of patients with a drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences in practice in caring for patients with a drug allergy history were found that did not cover all the steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed, with complete warning points in the information technology system, a repeated checking step, and communication among related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, the physicians who prescribe the drugs, and pharmacists. The system also included surveillance, nursing, recording, and linking the data to referring units. There was also training concerning adverse drug reactions by pharmacists, monthly meetings to explain the process to practice personnel, creating a safety culture, random checking of practice, motivational encouragement, supervising, controlling, following up, and evaluating the practice. After implementation, the rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Surveillance of recurrent adverse drug reactions covering all service providing points can ensure safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 451
919 Adapting Liability in the Era of Automated Decision-Making: A South African Labour Law Perspective

Authors: Aisha Adam

Abstract:

This study critically examines the transformative impact of automated decision-making (ADM) and artificial intelligence (AI) systems on South African labour law. As AI technologies increasingly infiltrate workplaces, existing liability frameworks face challenges in addressing the unique complexities presented by these innovations. This article explores the necessity of redefining liability to accommodate the nuanced landscape of ADM and AI within South African labour law. It emphasises the importance of ensuring responsible deployment and safeguarding the rights of workers amid evolving technological dynamics. This research investigates the central concern of fairness, bias, and discrimination in ADM and AI decision-making. Focusing on algorithmic bias and discriminatory outcomes, the paper advocates for the integration of mechanisms within the South African legal framework, particularly under the Promotion of Equality and Prevention of Unfair Discrimination Act (PEPUDA) and the Employment Equity Act (EEA). The study scrutinises the shifting dynamics of the employment relationship, calling for clear guidelines on the responsibilities and liabilities of employers, employees, and technology providers. Furthermore, the article analyses legal and policy responses to ADM and AI within South African labour law, exploring potential amendments to legislation, guidelines, and codes of practice. It assesses the role of regulatory bodies, specifically the Commission for Conciliation, Mediation, and Arbitration (CCMA), in overseeing and enforcing responsible practices in the workplace. Lastly, the research evaluates the impact of ADM and AI on human and social rights in the South African context. Emphasising the protection of constitutional rights, including fair labour practices, privacy, and equality, the study proposes remedies and safeguards. It advocates for a multidisciplinary approach involving legal, technological, and ethical considerations to redefine liability in South African labour law effectively. The article contends that a shift from accountability to responsibility is crucial for promoting fairness, antidiscrimination, and the protection of human and social rights in the age of automated decision-making. It calls for collaborative efforts among stakeholders to shape responsible practices and redefine liability in this evolving technological landscape.

Keywords: automated decision-making, artificial intelligence, labour law, vicarious liability

Procedia PDF Downloads 76
918 Professional Development in EFL Classroom: Motivation and Reflection

Authors: Iman Jabbar

Abstract:

Within the scope of professionalism, and in order to compete in the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In our field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, the teacher should endeavor to enhance students' skills such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his or her competence and performance. Reflective practice, which is an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students' learning, as it affects the quality of learning English in the classroom and determines success or failure as well as language achievement. This is a qualitative study grounded in interpretive perspectives of teachers and students regarding the process of professional development. The study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers' professional context? The study is organized into two parts, theoretical and practical. The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development, providing definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study, since it deals with the research design, methodology, methods of data collection, sampling, and data analysis. It ends with an overall discussion of findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.

Keywords: professional development, motivation, reflection, learning

Procedia PDF Downloads 441
917 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems

Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana

Abstract:

Large-scale critical industrial scheduling problems are based on the Resource-Constrained Project Scheduling Problem (RCPSP) and necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular and computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (projects, tasks, resources), each with its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of a future NPP maintenance operation, and application in the defense industry to supply chain and factory relocation. In the first use case, the solution, in addition to resource availability and the logical relationships between tasks, also integrates several project-specific constraints for outage management, such as handling resource incompatibility, updating task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effective simulation matches the nature of the problem and the requirement to run several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the expected production rate. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case concerns a future maintenance operation in an NPP. The project contains complex and hard constraints, such as Finish-Start precedence relationships (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements for specific states of "cyclic" resources (which can have several possible states, with only one active at a time) to perform tasks (a task can require a unique combination of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization; it handles 80 cyclic resources with 50 incompatibilities between levels in less than a minute. Conclusively, we propose a fast and feasible modular approach to various industrial scheduling problems, validated by domain experts and compatible with existing industrial tools. This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay-risk mitigation measures.
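
For illustration, the greedy core (re-evaluating a cost function over eligible tasks at each time step under precedence and resource constraints) might look like the following minimal sketch; the cost rule, task data, and single-resource model are simplifying assumptions of ours, far from the full industrial solution:

```python
# Hedged sketch of a greedy scheduler with a dynamic cost function.
# Assumes a consistent precedence DAG with per-task demand <= capacity.
from dataclasses import dataclass, field

@dataclass
class Task:
    duration: int
    demand: int                          # units of one renewable resource
    preds: set = field(default_factory=set)

def cost(task: Task, t: int) -> float:
    """Dynamic cost, re-evaluated at every time step: cheaper = start first."""
    return -(task.duration + task.demand)    # toy rule: big tasks first

def greedy_schedule(tasks: dict, capacity: int) -> dict:
    start, end, t = {}, {}, 0
    while len(end) < len(tasks):
        used = sum(tasks[n].demand for n in start if n not in end)
        # Eligible: not started, all predecessors finished; try cheapest first.
        ready = sorted((n for n in tasks
                        if n not in start and tasks[n].preds <= set(end)),
                       key=lambda n: cost(tasks[n], t))
        for n in ready:
            if used + tasks[n].demand <= capacity:
                start[n] = t
                used += tasks[n].demand
        # Advance time to the next task completion and record it.
        t = min(start[n] + tasks[n].duration for n in start if n not in end)
        for n in start:
            if n not in end and start[n] + tasks[n].duration <= t:
                end[n] = start[n] + tasks[n].duration
    return start

tasks = {
    "A": Task(3, 2), "B": Task(2, 2, {"A"}),
    "C": Task(4, 1, {"A"}), "D": Task(1, 3, {"B", "C"}),
}
print(greedy_schedule(tasks, capacity=3))   # -> {'A': 0, 'C': 3, 'B': 3, 'D': 7}
```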

Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP

Procedia PDF Downloads 191
916 STEAM and Project-Based Learning: Equipping Young Women with 21st Century Skills

Authors: Sonia Saddiqui, Maya Marcus

Abstract:

UTS STEAMpunk Girls is an educational program for young women (aged 12-16) that aims to empower them to be more informed and active members of the 21st century workforce. With the number of STEM graduates on the decline, especially among young women, an additional aim of the program is to trial a STEAM (Science, Technology, Engineering, Arts/Humanities/Social Sciences, Mathematics) inter-disciplinary approach to improving STEM engagement. In line with UNESCO's recent focus on promoting 'transversal competencies' in future graduates, the program utilised co-design, project-based learning, entrepreneurial processes, and inter-disciplinary learning. The program consists of two phases. Taking a participatory design approach, the first phase (co-design workshops) provided valuable insight into student perspectives on engaging young women in STEM and inter-disciplinary thinking. The workshops positioned 26 young women from three schools as subject matter experts (SMEs), providing a platform for them to share their opinions, experiences, and findings around the STEAM disciplines. The second (pilot) phase put the co-design phase findings into practice, with 64 students from four schools working in groups to articulate problems with real-world implications and utilising design thinking to solve them. The pilot phase utilised project-based learning to engage young women in entrepreneurial and STEAM frameworks and processes. Scalable program design and educational resources were trialled to determine appropriate mechanisms for engaging young women in STEM and in STEAM thinking. Across both phases, data was collected via longitudinal surveys to obtain pre-program baseline attitudinal information and compare it against post-program responses. Preliminary findings revealed students' improved understanding of the STEM disciplines, industries, and professions; improved awareness of STEAM as a concept; and improved understanding of inter-disciplinary and design thinking. Program outcomes will be of interest to high-school educators in both STEM and the Arts, Humanities and Social Sciences fields, and will hopefully inform future programmatic approaches to introducing inter-disciplinary STEAM learning in STEM curricula.

Keywords: co-design, STEM, STEAM, project-based learning, inter-disciplinary

Procedia PDF Downloads 194
915 Preparation and CO2 Permeation Properties of Carbonate-Ceramic Dual-Phase Membranes

Authors: H. Ishii, S. Araki, H. Yamamoto

Abstract:

In recent years, carbon dioxide (CO2) separation technology has been required for reducing emissions of global warming gases and for the efficient use of fossil fuels. Since CO2 accounts for a large share of greenhouse gas emissions, it is considered to have the greatest influence on global warming; therefore, CO2 separation technologies with high efficiency and low cost need to be established. In this study, we focused on membrane separation, as compared with conventional separation techniques such as distillation or cryogenic separation. We prepared carbonate-ceramic dual-phase membranes to separate CO2 at high temperature. As porous ceramic substrates, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+σ, La0.6Sr0.4Ti0.3Fe0.7O3 and Ca0.8Sr0.2Ti0.7Fe0.3O3-α (PLNCG, LSTF and CSTF) were examined. PLNCG, LSTF and CSTF have the perovskite structure, which is highly stable and becomes ion-conducting when doped with another metal ion, giving these materials high oxygen-ion diffusivity. PLNCG, LSTF and CSTF powders were prepared by a solid-phase process using the appropriate carbonates or oxides. To prepare porous substrates, these powders were mixed with carbon black (20 wt%) and a few drops of polyvinyl alcohol (5 wt%) aqueous solution. The powder mixtures were packed into a stainless steel mold (13 mm) and uniaxially pressed into disk shape under a pressure of 20 MPa for 1 minute. The PLNCG, LSTF and CSTF disks were calcined in air for 6 h at 1473, 1573 and 1473 K, respectively. The carbonate mixture (Li2CO3/Na2CO3/K2CO3: 42.5/32.5/25 in mole percent ratio) was placed inside a crucible and heated to 793 K, and the porous substrates were infiltrated with the molten carbonate mixture at 793 K. The crystalline structures of the fresh membranes, and of the membranes after infiltration with the molten carbonate mixture, were determined by X-ray diffraction (XRD) measurement. We confirmed that the crystal structures of PLNCG and CSTF changed slightly after infiltration with the molten carbonate mixture. CO2 permeation experiments with the PLNCG-carbonate, LSTF-carbonate and CSTF-carbonate membranes were carried out at 773-1173 K. A gas mixture of CO2 (20 mol%) and He was introduced at a flow rate of 50 ml/min to one side of the membrane, and the permeated CO2 was swept by N2 (50 ml/min). We confirmed the effect of the ceramic materials and of temperature on CO2 permeation at high temperature.
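
For context, the commonly described transport mechanism in such dual-phase membranes (stated here as general background rather than taken from the abstract) couples CO2 uptake into the molten carbonate with oxygen-ion conduction through the ceramic phase:

```latex
% Feed side: CO2 combines with an oxygen ion from the ceramic phase.
\mathrm{CO_2 + O^{2-} \longrightarrow CO_3^{2-}} \quad \text{(feed side)}
% The carbonate ion migrates through the molten carbonate, then decomposes
% on the permeate side, releasing CO2 and returning the oxygen ion.
\mathrm{CO_3^{2-} \longrightarrow CO_2 + O^{2-}} \quad \text{(permeate side)}
```

This is why the high oxygen-ion diffusivity of the perovskite substrates matters for CO2 permeation.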

Keywords: membrane, perovskite structure, dual-phase, carbonate

Procedia PDF Downloads 363
914 Architectural Identity in Manifestation of Tall-buildings' Design

Authors: Huda Arshadlamphon

Abstract:

The frontiers of technology and industry are advancing rapidly, influenced by economic and political phenomena. One vital phenomenon, which has consolidated the world into a single village, is globalization. In response, architecture and the built environment have faced numerous changes, adjustments, and developments. Tall buildings, as a product of globalization, represent prestigious icons, symbols, and landmarks for economically advanced countries. Nevertheless, this trend encounters several design challenges in incorporating the architectural identity, traditions, and characteristics that enhance the built environment's sociocultural values. Such values and traditions are necessary for a self-standing identity, leading to visual and spatial creativity, independence, and individuality; in other words, they maintain the inherited identity and avoid replication in all means and aspects. This paper, firstly, defines the globalization phenomenon, architectural identity, and the concerns of sociocultural values in relation to the traditional characteristics of the built environment. Secondly, through three case studies of tall buildings located in Jeddah city, Saudi Arabia - the Queen's Building, the National Commercial Bank Building (NCB), and the Islamic Development Bank Building - design strategies and methodologies for acclimating architectural identity and characteristics in tall buildings are discussed. The case studies highlight the buildings' sites and surroundings, concepts and inspirations, design elements, architectural forms and compositions, characteristics, issues, barriers, and trammels facing the design decisions, the representation of facades, and the selection of materials and colors. Furthermore, the research briefly elucidates the dominant factors that shape the architectural identity of Jeddah city. In conclusion, the study sets out a four-point design guideline for preserving and developing architectural identity in tall buildings in Jeddah city: the scale of the urban and natural environment, the scale of architectural design elements, the integration of visual images, and the creation of spatial scenes and scenarios. The proposed guideline will encourage the development of an architectural identity aligned with zeitgeist demands and requirements, support the contemporary architectural movement toward tall buildings, and shore up a self-standing representation of the sociocultural values and traditions of the built environment.

Keywords: architectural identity, built-environment, globalization, sociocultural values and traditions, tall-buildings

Procedia PDF Downloads 159
913 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. Because 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf) and consolidation. This compression during manufacture changes the preform's thickness and architecture, which can often lead to under-performance of, or unintended changes in, the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp (fibre waviness) and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex for varying architectures of binder style, pick density, thickness and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures; the relation between compressed thickness and Vf is sketched below. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared against the WiseTex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with experimental results; however, deviation is evident due to assumptions present within the models.
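For orientation, the standard areal-density relation linking compressed preform thickness to Vf can be sketched as below; the layer count, areal density and fibre density are hypothetical, not the study's actual preform data:

    N_LAYERS = 6          # warp/weft layers in the 3D preform (hypothetical)
    AREAL_DENSITY = 0.60  # fabric areal density per layer, kg/m^2 (hypothetical)
    RHO_FIBRE = 2600.0    # fibre density, kg/m^3 (e.g. E-glass)

    def vf(thickness_mm: float) -> float:
        """Fibre volume fraction at a given compressed preform thickness."""
        return N_LAYERS * AREAL_DENSITY / (RHO_FIBRE * thickness_mm * 1e-3)

    # Compressing from as-woven thickness towards a target mould gap raises Vf:
    for t in (3.2, 2.8, 2.4):  # mm, hypothetical compression states
        print(f"t = {t} mm -> Vf = {vf(t):.1%}")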

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 131
912 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints

Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno

Abstract:

Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG) such as wind and photovoltaic generation appear as cornerstones for achieving these energy targets. Despite its benefits, increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint. Among the key differences are limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Alongside the integration of CIG into the power system, battery energy storage systems (BESS) are an enabling technology for the energy transition towards low-carbon power systems. Because of the flexibility that BESS provide in power system operation, their integration mitigates the variability and uncertainty of renewable energies, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit current, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints into the allocation, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels; a toy sketch of such a constrained siting problem is given below. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration. The methodology is validated on a reduced model of the Chilean electrical system. The results show that integrating BESS with stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper lays a foundation for understanding the benefits of integrating BESS in electrical power systems and for coordinating their placement in future converter-dominated power systems.
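The paper's optimization model is not reproduced here, but the flavour of a stability-constrained siting problem can be conveyed with a toy enumeration; the buses, costs and short-circuit-ratio (SCR) increments below are hypothetical, and a real study would derive them from fault calculations and use a mathematical program rather than brute force:

    from itertools import combinations

    # Toy data: candidate bus -> (installation cost in M$, SCR increment its
    # grid-supporting BESS would add at the weakest bus). Hypothetical values.
    candidates = {
        "bus_A": (12.0, 0.9),
        "bus_B": (9.0, 0.6),
        "bus_C": (15.0, 1.4),
    }
    scr_base, scr_min = 2.1, 3.0  # weakest-bus SCR before BESS / required level

    best = None
    for k in range(1, len(candidates) + 1):
        for combo in combinations(candidates, k):
            cost = sum(candidates[b][0] for b in combo)
            scr = scr_base + sum(candidates[b][1] for b in combo)
            # Keep the cheapest allocation that satisfies the stability constraint.
            if scr >= scr_min and (best is None or cost < best[0]):
                best = (cost, combo, scr)

    cost, combo, scr = best
    print(f"install BESS at {combo}: cost = {cost:.1f} M$, resulting SCR = {scr:.2f}")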

Keywords: battery energy storage, power system stability, system strength, weak power system

Procedia PDF Downloads 58
911 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning for each person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies; in this way, treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images; a minimal sketch of this predictive workflow is given below. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques helped in resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently. It comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
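A minimal sketch of the predictive workflow named above (logistic regression and random forests for disease prediction), using synthetic data rather than any clinical dataset from the project:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for tabular patient features and a binary diagnosis.
    X, y = make_classification(n_samples=500, n_features=12, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    for model in (LogisticRegression(max_iter=1000),
                  RandomForestClassifier(random_state=0)):
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"{type(model).__name__}: test AUC = {auc:.3f}")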

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 70
910 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy

Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright

Abstract:

The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that rapidly responds to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context about the state of each cell. The amount of information needed to describe a cellular manufacturing system is investigated via two measures: structural entropy and operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the states which actually occur during the manufacturing operation; both are computed in the sketch below. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration. The cellular make-up of the configuration included a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell and a quality control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in the part configurations, and new part designs are continually being introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time. For comparative analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. From the results obtained in the empirical study, it was seen that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule. The performance of the downstream production flow, fed from the parallel upstream flow of information on the factory state, was likewise increased.
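Both measures are Shannon entropies over distributions of cell states; a minimal sketch, assuming states are discretised and probabilities estimated from observed frequencies (the state logs below are hypothetical):

    import math
    from collections import Counter

    def entropy_bits(states):
        """Shannon entropy H = -sum(p * log2 p) over observed state frequencies."""
        counts = Counter(states)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    # Hypothetical state logs for one cell over eight scheduling periods.
    scheduled = ["idle", "setup", "run", "run", "run", "idle", "setup", "run"]
    actual = ["idle", "setup", "run", "blocked", "run", "starved", "setup", "run"]

    h_structural = entropy_bits(scheduled)   # information to describe the schedule
    h_operational = entropy_bits(actual)     # information to describe what occurred
    print(f"structural = {h_structural:.3f} bits, operational = {h_operational:.3f} bits")
    print(f"adherence gap = {h_operational - h_structural:.3f} bits")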

Keywords: information entropy, communication in manufacturing, mass customisation, scheduling

Procedia PDF Downloads 241
909 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from human fatigue, which leads to unsteady walking steps, its field capacity is small: a knapsack sprayer barely covers about 0.2 hectare per hour, and this small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and it is often desired to spray a large farm within hours or a few days for an even, uniform effect and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful: human error in taking equally spaced swaths usually results in overdosing where swaths overlap and in unsprayed areas where they fail to meet. Large-farm spraying therefore requires boom equipment with a larger swath, which reduces swath-overlap error and allows spraying within the shortest possible time; a back-of-envelope comparison is given below. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, so farmers find it difficult to engage tractors for cultivation and would not consider employing a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country, and the development of boom sprayers drawn by work animals implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an Animal Drawn Knapsack Boom Sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the country's demand for appropriate technology. It is hoped that the introduction of this novel contrivance shall enhance crop protection practice and lead to greater crop and food production in Nigeria.
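A back-of-envelope field-capacity comparison using the common relation C = swath x speed x field efficiency; the boom's swath, speed and efficiency below are hypothetical, chosen only to illustrate the scale of the gain over the ~0.2 ha/h the abstract cites:

    def field_capacity_ha_per_h(swath_m, speed_km_h, efficiency):
        """Effective field capacity, ha/h: swath (m) x speed (km/h) x efficiency / 10."""
        return swath_m * speed_km_h * efficiency / 10.0

    knapsack = field_capacity_ha_per_h(swath_m=1.0, speed_km_h=3.0, efficiency=0.65)
    boom = field_capacity_ha_per_h(swath_m=2.0, speed_km_h=3.5, efficiency=0.75)
    print(f"manual knapsack ~ {knapsack:.2f} ha/h (abstract cites ~0.2 ha/h)")
    print(f"four-nozzle animal-drawn boom ~ {boom:.2f} ha/h")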

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 281
908 Micro Plasma: An Emerging Technology to Eradicate Pesticides from Food Surfaces

Authors: Muhammad Saiful Islam Khan, Yun Ji Kim

Abstract:

Organophosphorus pesticides (OPPs) have been widely used to replace more persistent organochlorine pesticides because OPPs are more soluble in water and decompose rapidly in aquatic systems. The extensive use of OPPs in modern agriculture is a major cause of surface water contamination, and regardless of the advantages gained by the application of pesticides, they pose a threat to public health and the environment. With the aim of reducing possible health threats, several physical and chemical treatment processes have been studied to eliminate biological and chemical poisons from foodstuffs. In the present study, a micro-plasma device was used to reduce pesticides on the surface of food items; the pesticide-free food items chosen were perilla leaf, tomato, broccoli and blueberry. To evaluate removal efficiency, different washing methods were compared: soaking in water, washing with bubbling water, washing with plasma-treated water and washing with chlorine water. 2 mL of 2000 ppm samples of each pesticide, namely diazinon and chlorpyrifos, were individually inoculated onto the food surfaces and air dried for 2 hours before treatment. Plasma-treated water was used in two different manners: plasma-treated water with bubbling, and aerosolized plasma-treated water. The removal efficiency of pesticides from the food surfaces was studied using HPLC; the percentage-reduction calculation is sketched below. Washing with plasma-treated water, aerosolized plasma-treated water or chlorine water showed reductions from a minimum of 72% to a maximum of 87% for a 4-minute treatment, irrespective of the food item and the pesticide, whereas soaking and bubbling gave reductions of 8% to 48%; the three effective washing systems thus show broadly similar reduction ability, significantly higher than soaking and bubbling. The effect of washing temperature was also evaluated at 22°C, 10°C and 4°C. Decreasing the temperature from 22°C to 10°C gave a higher reduction for washing with plasma-treated and aerosolized plasma-treated water, whereas the opposite trend was observed for washing with chlorine water. A further temperature reduction from 10°C to 4°C did not significantly change the pesticide reduction, except for washing with chlorine water, which showed less pesticide reduction as the temperature decreased. The color changes of the treated samples were measured immediately and after one week to evaluate whether washing with plasma-treated water or chlorine water had any effect; no significant color changes were observed for either washing system, except for broccoli washed with chlorine water.
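The reductions quoted are straightforward percentage decreases in residue concentration as measured by HPLC; a sketch, assuming peak area is proportional to concentration (the peak areas below are illustrative, not the study's raw data):

    def percent_reduction(area_before: float, area_after: float) -> float:
        """Residue reduction (%) from HPLC peak areas before/after washing."""
        return (area_before - area_after) / area_before * 100.0

    treatments = {  # hypothetical HPLC peak areas (arbitrary units)
        "soaking": (1520.0, 1140.0),
        "plasma-treated water": (1520.0, 260.0),
        "chlorine water": (1520.0, 230.0),
    }
    for name, (before, after) in treatments.items():
        print(f"{name:>22s}: {percent_reduction(before, after):5.1f}% reduction")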

Keywords: chlorpyrifos, diazinon, pesticides, micro plasma

Procedia PDF Downloads 181
907 A Report on the eLearning Programme of the Irish College of General Practitioners Which Can Address Continuing Education Needs of Primary Care Physicians

Authors: Nicholas P. Fenlon, Aisling Lavelle, David McLean, Margaret O'Riordan

Abstract:

Background: The case for continuing professional development (CPD) has been well made and was formalized in Ireland in recent years through the enactment of the Medical Practitioners Act, which requires registered medical practitioners to complete a minimum of 50 hours of CPD each year. The ICGP, which has been providing CPD opportunities to its members for many years, has responded to this need by developing a series of evidence-based, high-quality multimedia modules across a range of clinical and non-clinical areas. (More traditional education opportunities are still provided by the college as well.) Overview of Programme: The first module was released in September 2011, since when the eLearning program has grown steadily; there are currently almost 20 modules available, with a further 5 in production. Each module contains three to six 10-minute video lessons, which use a combination of graphics, images, text, voice-over and clinical clips. These are supported by supplementary videos of expert pieces-to-camera, Q&As with content experts, clinical scenarios, external links, relevant documentation and other resources. Successful completion of MCQs results in a Certificate of Completion, which can be printed or stored in the doctor's Professional Competence portfolio. The Medical Practitioners Act requires doctors to gather CPD credits across 8 domains of practice, and various eLearning modules have been developed to address each. For instance, modules with strong clinical content include Management of Hypertension, Management of COPD, and Management of Asthma. Other modules focus on health promotion, such as Promoting Smoking Cessation, Promoting Physical Activity, and Addressing Childhood Obesity. Modules where communication skills are key include those on Suicide Prevention and Management of Depression. Further modules, currently in development, cover non-clinical topics around risk management, including Confidentiality and Consent. Each module is developed by a core group, which includes, where possible, a GP with a special interest in the area and one or more content experts. The college works closely with a medical education consultant and a production company in developing and producing the modules. Modules can be accessed (with a password) through the ICGP website and are available free to all ICGP members. Summary of Evaluation: There are over 1700 registered users to date (over 55% of college membership). The program was evaluated using an online survey in 2013 (N = 144/950 – 12%); results were very positive overall but also provided material for further improvement of the program. Future Plans: While knowledge can be imparted well through eLearning, skills and attitudes are more difficult to influence in an online environment. The college is now developing a series of linked workshops, which will lead to ICGP Professional Competence Awards. The first pilot workshop, scheduled for February 2015, is cardiology-themed: participants are required to complete four modules in advance of attending – Management of Hypertension, Management of Heart Failure, Promoting Smoking Cessation, and Promoting Physical Activity. The workshop will be case-based and interactive, addressing ECG interpretation in general practice. Conclusions: The ICGP has responded to members' needs for high-quality, evidence-based education delivered in a way that suits GPs.

Keywords: CPD opportunities, evidence-based multimedia modules, clinical and non-clinical education, Medical Practitioners Act

Procedia PDF Downloads 596
906 Urban Compactness and Sustainability: Beijing Experience

Authors: Xilu Liu, Ameen Farooq

Abstract:

Beijing has several compact residential housing settings in many of its urban districts. The study in this paper reveals that urban compactness, as a predictor of density, may carry an altogether different meaning in the developing world than in the U.S. when pursuing objectives of urban sustainability. Recent urban design studies in the U.S. argue for compact, mixed-use, higher-density housing to achieve sustainable and energy-efficient living environments. While the concept of urban compactness is widely accepted as an approach in modern architectural and urban design fields, this belief may not carry well into all areas within cities of developing countries. Beijing's technology-driven economy, with its rich historical and cultural heritage and a highly speculative real-estate market, extends its urban boundaries into multiple compact urban settings of varying scales and densities. The accelerated pace of migration from the countryside in search of better opportunities has led to unsustainable and uncontrolled build-ups intended to meet growing population demand within and outside the urban center. This unwarranted compactness in certain urban zones has produced an unhealthy physical density, with serious environmental and ecological pressures challenging basic living conditions. In addition, the crowding, traffic congestion, pollution and limited housing surrounding this compactness are a threat to public health. Several residential blocks in close proximity to each other were found to be quite compacted, or ill-planned, due to a lack of proper planning in Beijing. Most of them at first sight appear compact and dense, but further analysis revealed that what appears dense is actually not dense enough to make a good case as the cornerstone of sustainability and energy efficiency. This study considered several factors, including floor area ratio (FAR), ground coverage (GSI) and open space ratio (OSR), as indicators in analyzing urban compactness as a predictor of density; the indicator definitions are sketched below. The findings suggest that these measures of the density of the residential sites under study were much smaller than expected given their compact adjacencies. Further analysis revealed that several residential developments appear to support the notion of density in their compact layout but are in fact merely compacted, owing to unregulated planning marred by a lack of proper urban design standards, policies and guidelines specific to their urban context and condition.
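The three indicators follow standard Spacematrix-style definitions; a minimal sketch with hypothetical site figures, showing how two equally "compact-looking" blocks can differ sharply in density:

    def indicators(site_area, footprint, gross_floor_area):
        """FAR (density), GSI (ground coverage) and OSR (open space per floor area)."""
        far = gross_floor_area / site_area
        gsi = footprint / site_area
        osr = (1.0 - gsi) / far
        return far, gsi, osr

    # Two blocks with similar ground coverage but very different intensity.
    sites = {  # site area m^2, building footprint m^2, gross floor area m^2
        "block_1": (10000.0, 4500.0, 18000.0),
        "block_2": (10000.0, 4200.0, 9000.0),
    }
    for name, (site, foot, gfa) in sites.items():
        far, gsi, osr = indicators(site, foot, gfa)
        print(f"{name}: FAR = {far:.2f}, GSI = {gsi:.2f}, OSR = {osr:.2f}")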

Keywords: Beijing, density, sustainability, urban compactness

Procedia PDF Downloads 416
905 Changing the Biopower Hierarchy between Women’s Bodily Knowledge and the Medical Knowledge about the Body: The Case of Female Ejaculation and #Notpee

Authors: Lior B. Navon

Abstract:

The objective of this study is to investigate how technology, such as social media, can influence the biopower hierarchy between medical knowledge about the body and women's bodily knowledge, through the case study of the hashtag 'notpee'. In January 2015, the hashtag #notpee, relating to a feminine physiological phenomenon called female ejaculation (FE) or squirting (SQ), started circulating on Twitter. This hashtag, born as a reaction to a medical study claiming that SQ is essentially an involuntary emission of urine during sexual activity, sparked an unusual public discourse about FE, a phenomenon that is usually not discussed in socio-legitimate public spheres. The backlash got the attention of women's magazines and blogs, as well as larger, more mainstream and respected outlets such as The Guardian and CNN. Both the tweets and the media coverage of them were mainly aimed at rejecting the research's findings: while not offering an alternative, and choosing to define the phenomenon by negation, women argued that the fluid extracted was not pee, based on their personal experiences. Based on a critical discourse analysis of 742 tweets with the hashtag 'notpee' between January 2015 and January 2016, and of 15 articles covering the backlash, this study suggests that the #notpee backlash challenged the power balance between medical knowledge about the feminine body and feminine bodily knowledge through two different, yet related, forms of resistance to biopower. The first is resistance to authority over knowledge production: who has the power to produce 'true' statements when it comes to the body? Is it the women who experience the phenomenon, or is it the medical institution? The second resistance to biopower has to do with what we regard as facts, or veracity. The analysis reveals that while both the scientific field and the women arguing against its findings use empirical information, they nevertheless rely on two dichotomous evidence bases: the scientific research relies on samples from the 'dead-like body', whereas these women rely on their lived, subjective senses as a source of fact-making. Nevertheless, while #notpee asks to change the power relations between subjective feminine bodily knowledge and the seemingly objective, masculine medical knowledge about the body, it by no means dismisses the latter. These women are essentially asking the medical institution to take the subjective body into consideration alongside the objective one, while acknowledging and accepting the latter's power over knowledge production.

Keywords: biopower, female ejaculation, new media, bodily knowledge

Procedia PDF Downloads 152