Search results for: cutting edge technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8870

980 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms, all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
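
The dipole modeling mentioned above rests on the standard point-dipole field equation. The sketch below, a minimal illustration in Python (not the authors' code; all parameter values are hypothetical), shows the total-field anomaly a scalar magnetometer would record while passing over a buried dipole target:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(m, r):
    """Magnetic flux density (T) of a point dipole m (A*m^2) at offset r (m)."""
    r = np.asarray(r, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

# Hypothetical target: a ferrous object 2 m below the sensor line.
m = np.array([0.0, 0.0, 5.0])            # dipole moment, A*m^2 (assumed)
b_earth = np.array([0.0, 20e-6, 45e-6])  # ambient geomagnetic field, T (assumed)

for x in np.linspace(-5, 5, 11):         # sensor positions along a survey line
    b = dipole_field(m, np.array([x, 0.0, 2.0]))
    # A total-field magnetometer measures |B_earth + B_dipole|; the anomaly is
    # the departure from the undisturbed ambient magnitude.
    anomaly_nT = (np.linalg.norm(b_earth + b) - np.linalg.norm(b_earth)) * 1e9
    print(f"x = {x:+5.1f} m   anomaly = {anomaly_nT:+8.3f} nT")
```

Fitting such a model to gridded anomaly data, inverting for target position and moment, is the basis of the dipole-based classification the abstract refers to.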

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 465
979 An Interactive User-Oriented Approach to Optimizing Public Space Lighting

Authors: Tamar Trop, Boris Portnov

Abstract:

Public Space Lighting (PSL) of outdoor urban areas promotes comfort, defines spaces and neighborhood identities, enhances perceived safety and security, and contributes to residential satisfaction and wellbeing. However, if excessive or misdirected, PSL leads to unnecessary energy waste and increased greenhouse gas emissions, poses a non-negligible threat to the nocturnal environment, and may become a potential health hazard. At present, PSL is designed according to international, regional, and national standards, which consolidate best practice. Yet, knowledge regarding the optimal light characteristics needed for creating a perception of personal comfort and safety in densely populated residential areas, and the factors associated with this perception, is still scarce. The presented study suggests a paradigm shift in designing PSL towards a user-centered approach, which incorporates pedestrians' perspectives into the process. The study is an ongoing joint research project between the Chinese and Israeli Ministries of Science and Technology. Its main objectives are to reveal inhabitants' perceptions of and preferences for PSL in different densely populated neighborhoods in China and Israel, and to develop a model that links instrumentally measured parameters of PSL (e.g., intensity, spectra and glare) with its perceived comfort and quality, while controlling for three groups of attributes: locational, temporal, and individual. To investigate measured and perceived PSL, the study employed various research methods and data collection tools, developed a location-based mobile application, and used multiple data sources, such as satellite multi-spectral night-time light imagery, census statistics, and detailed planning schemes. One of the study's preliminary findings is that a higher sense of safety in the investigated neighborhoods is not associated with higher levels of light intensity. This implies potential for energy saving in brightly illuminated residential areas. Study findings might contribute to the design of a smart and adaptive PSL strategy that enhances pedestrians' perceived safety and comfort while reducing light pollution and energy consumption.
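
A model of the kind described, linking measured PSL parameters to perceived comfort and safety while controlling for locational, temporal, and individual attributes, could be prototyped as a mixed-effects regression. The sketch below is a hypothetical illustration with synthetic data and invented variable names, not the study's actual model:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the survey data (all names and values invented).
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "intensity_lux": rng.uniform(2, 50, n),
    "glare_index": rng.uniform(0, 1, n),
    "hour": rng.integers(19, 24, n),
    "age": rng.integers(18, 80, n),
    "neighborhood": rng.choice(["A", "B", "C", "D"], n),
})
# Perceived safety driven mainly by glare plus noise, not raw intensity,
# mimicking the preliminary finding reported above.
df["perceived_safety"] = 3 - 1.5 * df["glare_index"] + rng.normal(0, 0.5, n)

# Random intercept per neighborhood captures the locational attribute group;
# temporal (hour) and individual (age) attributes enter as fixed effects.
model = smf.mixedlm(
    "perceived_safety ~ intensity_lux + glare_index + hour + age",
    data=df, groups=df["neighborhood"],
)
print(model.fit().summary())  # a near-zero intensity_lux coefficient would
                              # echo the finding that brighter is not safer
```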

Keywords: energy efficiency, light pollution, public space lighting, PSL, safety perceptions

Procedia PDF Downloads 135
978 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies are faced with the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. So far, some of the functional requirements remain unknown until late stages of the product development. A way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. There are first approaches for combined, hybrid models comprising deterministic-normative methods like the Stage-Gate process and empirical-adaptive development methods like SCRUM on a project management level. However, the question of which development scopes can preferably be realized with either empirical-adaptive or deterministic-normative approaches remains almost unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. Therefore, this paper focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors like a company's technology ability, the prototype manufacturability and the potential solution space, as well as external factors like the market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First of all, each internal and external factor is rated in terms of its importance for the overall development task. Secondly, each requirement is evaluated against every single internal and external factor according to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator. Thus, the Agile-Indicator constitutes a company-specific and application-related criterion, on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a last step, this indicator is used for a specific clustering of development scopes by application of the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
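
To make the three-step derivation concrete, the following sketch (an illustration under assumed weights and ratings, not the authors' implementation) computes an Agile-Indicator per requirement as an importance-weighted sum of factor ratings, then clusters the requirements with a small from-scratch fuzzy c-means loop:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: importance weights of the internal and external factors
# (all values assumed for illustration).
w_int = np.array([0.5, 0.3, 0.2])  # technology ability, manufacturability, solution space
w_ext = np.array([0.4, 0.3, 0.3])  # market accuracy, relevance, volatility

# Step 2: rate each requirement against every factor on a 0..1 scale of
# suitability for empirical-adaptive development (random stand-in ratings).
n_req = 20
r_int = rng.uniform(0, 1, (n_req, 3))
r_ext = rng.uniform(0, 1, (n_req, 3))

# Step 3: combine the internal and external sides into the Agile-Indicator.
agile_indicator = r_int @ w_int + r_ext @ w_ext  # one value per requirement

# Fuzzy c-means on the 1-D indicator: two clusters, roughly
# "deterministic-normative" (low) vs. "empirical-adaptive" (high) scopes.
x = agile_indicator.reshape(-1, 1)
fuzzifier = 2.0
centers = np.array([[x.min()], [x.max()]])
for _ in range(100):
    d = np.abs(x - centers.T) + 1e-9            # distances to both centers
    u = d ** (-2.0 / (fuzzifier - 1.0))         # fuzzy membership update
    u /= u.sum(axis=1, keepdims=True)
    new = (u ** fuzzifier).T @ x / (u ** fuzzifier).sum(axis=0)[:, None]
    if np.allclose(new, centers, atol=1e-6):
        break
    centers = new

hi = int(np.argmax(centers))                    # index of the high cluster
for i, (ai, memb) in enumerate(zip(agile_indicator, u)):
    scope = "empirical-adaptive" if memb[hi] > 0.5 else "deterministic-normative"
    print(f"requirement {i:02d}: indicator = {ai:.2f} -> {scope}")
```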

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 247
977 Visualization of Chinese Genealogies with Digital Technology: A Case of Genealogy of Wu Clan in the Village of Gaoqian

Authors: Huiling Feng, Jihong Liang, Xiaodong Gong, Yongjun Xu

Abstract:

Recording history is a tradition in ancient China. A record of a dynasty makes a dynastic history; a record of a locality makes a chorography; and a record of a clan makes a genealogy. The three combined together depict a complete national history of China, both macroscopically and microscopically, with genealogy serving as the foundation. Genealogy in ancient China traces back to the family trees or pedigrees of early and medieval historical times. After the Song Dynasty, civil society gradually emerged and the Emperor had to allow people from the same clan to live together and hold ancestor worship activities, whereupon the compilation of genealogies became popular in society. Since then, genealogies, regarded even today as being as important as ancestral and religious temples in traditional villages, have played a primary role in the identification of a clan and in maintaining local social order. Chinese genealogies are rich in their documentary materials. Take the Genealogy of Wu Clan in Gaoqian as an example. Gaoqian is a small village in Xianju County of Zhejiang Province. The Genealogy of Wu Clan in Gaoqian is composed of a whole set of materials, from the Foreword to Family Trees, Family Rules, Family Rituals, Family Graces and Glories, Ode to an Ancestor's Portrait, the Manual for the Ancestor Temple, documents for great men in the clan, works written by learned men in the clan, the contracts concerning landed property, and even notes on tombs. Literally speaking, the genealogy, with detailed information on every aspect recorded according to stylistic rules, is indeed the carrier of the entire culture of a clan. However, due to their scarcity in number and the difficulties in reading them, genealogies seldom come to the attention of ordinary people. This paper, focusing on the case of the Genealogy of Wu Clan in the Village of Gaoqian, intends to reproduce a digital genealogy by use of ICTs, through an in-depth interpretation of the literature and field investigation in Gaoqian Village. Based on this, the paper goes further to explore general methods of transferring physical genealogies to digital ones and ways of visualizing the clanism culture embedded in the genealogies with a combination of digital technologies such as family tree software, multimedia narratives, animation design, GIS applications and e-book creators.

Keywords: clanism culture, multimedia narratives, genealogy of Wu Clan, GIS

Procedia PDF Downloads 222
976 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological ability already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of the electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, the development of new techniques serving this purpose, as well as various devices based on them, is an obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', being driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This contradicts the conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible routes to experimental observation and practical implementation of the predicted effect are also discussed.
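
The key assumption can be written compactly. In the two-level basis, the dipole operator discussed above takes the matrix form below (notation introduced here for illustration, not taken from the paper):

```latex
\hat{d} =
\begin{pmatrix}
  d_{11} & d_{12} \\
  d_{12}^{*} & d_{22}
\end{pmatrix},
\qquad d_{11} \neq d_{22}.
```

Natural atoms have $d_{11} = d_{22} = 0$ by inversion symmetry, so only the off-diagonal (transition) elements survive. In a quantum dot the broken symmetry allows $d_{11} \neq d_{22}$, and under monochromatic driving the mean dipole acquires a component proportional to $(d_{11} - d_{22})$ oscillating at the slow Rabi frequency $\Omega \ll \omega_{\text{laser}}$, which is one heuristic way to see how continuous low-frequency (terahertz) emission can arise.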

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot

Procedia PDF Downloads 272
975 “I” on the Web: Social Penetration Theory Revised

Authors: Dr. Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of New Media, and particularly Social Media, through fixed or mobile devices has changed in a staggering way our perception of what is 'intimate' and 'safe' and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what 'exposure' means in interpersonal communication contexts, how the distinction between the 'private' and the 'public' nature of information is being negotiated online, and how the 'audiences' we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media and social networks research, as well as from the empirical findings of longitudinal comparative research, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. Some of its main conclusions include: (1) There is a clear and evidenced shift in users' perception of the degree of 'security' and 'familiarity' of the Web between the pre- and post-Web 2.0 eras; the role of Social Media in this shift was catalytic. (2) Basic Web 2.0 applications dramatically changed the nature of the Internet itself, transforming it from a place reserved for 'elite users / technical knowledge keepers' into a place of 'open sociability' for anyone. (3) Web 2.0 and Social Media brought about a significant change in the concept of the 'audience' we address in interpersonal communication: the previous 'general and unknown audience' of personal home pages was converted into an 'individual and personal' audience chosen by the user under various criteria. (4) The way we negotiate the 'private' and 'public' nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 374
974 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the data-based prediction of product quality and states can exploit the great potential for reducing necessary quality control. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data situation makes all the more difficult. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, business understanding, is critically examined for the use case at Bosch Rexroth with regard to regression versus classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
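
The regression-versus-classification question can be made concrete in a few lines. The sketch below (synthetic data; the tolerance limit and feature set are invented, and this is not the Bosch Rexroth pipeline) trains a random forest regressor on a continuous leakage volume flow and a random forest classifier on the pass/fail label derived from the same tolerance limit:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in for cross-process production features of a valve.
X = rng.normal(size=(2000, 8))
leakage = 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.4, 2000)  # ml/min (assumed)
passed = (leakage < 0.8).astype(int)   # hypothetical tolerance limit

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, leakage, passed, test_size=0.25, random_state=0)

# Option A: regress the leakage volume flow, then threshold for the decision.
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred_pass = (reg.predict(X_te) < 0.8).astype(int)
print("regression R^2:", round(r2_score(y_te, reg.predict(X_te)), 3),
      "| decision accuracy via threshold:", round(accuracy_score(p_te, pred_pass), 3))

# Option B: classify the inspection decision directly.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, p_tr)
print("classification accuracy:", round(accuracy_score(p_te, clf.predict(X_te)), 3))
```

Comparing the thresholded regression decision against the direct classification mirrors the business-understanding question the abstract poses.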

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 145
973 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
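
In the GEE Python API, a workflow of this shape is short to express. The sketch below is a minimal outline under assumed inputs: the training-point asset, class property, region, and band choices are placeholders rather than the study's actual data:

```python
import ee
ee.Initialize()

# Placeholder region and labeled training points (asset name is hypothetical).
roi = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])
points = ee.FeatureCollection("users/example/beterou_training")  # property 'class': 0..4

# Cloud-filtered Sentinel-2 median composite over the study window.
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(roi)
      .filterDate("2020-06-01", "2021-03-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
      .median())

# Spectral bands plus the terrain features (slope, elevation) that the study
# found to raise accuracy.
dem = ee.Image("USGS/SRTMGL1_003")
stack = (s2.select(["B2", "B3", "B4", "B8", "B11", "B12"])
         .addBands(dem.rename("elevation"))
         .addBands(ee.Terrain.slope(dem)))

# Sample the stack at the training points and fit a random forest.
training = stack.sampleRegions(collection=points, properties=["class"], scale=10)
rf = ee.Classifier.smileRandomForest(100).train(
    features=training, classProperty="class",
    inputProperties=stack.bandNames())

classified = stack.classify(rf).clip(roi)
# Resubstitution accuracy; a held-out split would be used in practice.
print(rf.confusionMatrix().accuracy().getInfo())
```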

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 63
972 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual's bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower's creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized and matched with banks' loan default records. Each separately captures distinct dimensions of a person's characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find both datasets can generate either acceptable or excellent prediction results, and different types of data tend to complement each other to achieve better performance. Typically, the traditional types of data that banks normally use, like income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, like financial status changes caused by a business crisis; whereas digital footprints can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower's credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment, because of their near-universal data coverage, and because they can by and large resolve the 'thin-file' issue, given that digital footprints come in much larger volume and at higher frequency.
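
The comparative-modeling step described here reduces to fitting classifiers on each footprint feature block and on their union. The sketch below (fully synthetic features and labels with invented names, not the paper's data) compares logistic regression and gradient boosting by AUC on the three feature sets:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000

# Two hypothetical footprint blocks: shopping behavior and social-media traits.
shopping = rng.normal(size=(n, 5))
social = rng.normal(size=(n, 4))
logit = 0.8 * shopping[:, 0] - 0.6 * social[:, 1] + rng.normal(0, 1, n)
default = (logit > 1.0).astype(int)          # loan default label (synthetic)

for name, X in [("shopping only", shopping),
                ("social only", social),
                ("combined", np.hstack([shopping, social]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, default, test_size=0.3,
                                              random_state=0, stratify=default)
    for model in (LogisticRegression(max_iter=1000),
                  GradientBoostingClassifier(random_state=0)):
        auc = roc_auc_score(y_te, model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        print(f"{name:14s} {type(model).__name__:26s} AUC = {auc:.3f}")
```

A combined set outperforming either block alone would illustrate the complementarity the abstract reports.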

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 165
971 Impact of Pedagogical Techniques on the Teaching of Sports Sciences

Authors: Muhammad Saleem

Abstract:

Background: The teaching of sports sciences encompasses a broad spectrum of disciplines, including biomechanics, physiology, psychology, and coaching. Effective pedagogical techniques are crucial in imparting both theoretical knowledge and practical skills necessary for students to excel in the field. The impact of these techniques on students’ learning outcomes, engagement, and professional preparedness remains a vital area of study. Objective: This study aims to evaluate the effectiveness of various pedagogical techniques used in the teaching of sports sciences. It seeks to identify which methods most significantly enhance student learning, retention, engagement, and practical application of knowledge. Methods: A mixed-methods approach was employed, including both quantitative and qualitative analyses. The study involved a comparative analysis of traditional lecture-based teaching, experiential learning, problem-based learning (PBL), and technology-enhanced learning (TEL). Data were collected through surveys, interviews, and academic performance assessments from students enrolled in sports sciences programs at multiple universities. Statistical analysis was used to evaluate academic performance, while thematic analysis was applied to qualitative data to capture student experiences and perceptions. Results: The findings indicate that experiential learning and PBL significantly improve students' understanding and retention of complex sports science concepts compared to traditional lectures. TEL was found to enhance engagement and provide students with flexible learning opportunities, but its impact on deep learning varied depending on the quality of the digital resources. Overall, a combination of experiential learning, PBL, and TEL was identified as the most effective pedagogical approach, leading to higher student satisfaction and better preparedness for real-world applications. Conclusion: The study underscores the importance of adopting diverse and student-centered pedagogical techniques in the teaching of sports sciences. While traditional lectures remain useful for foundational knowledge, integrating experiential learning, PBL, and TEL can substantially improve student outcomes. These findings suggest that educators should consider a blended approach to pedagogy to maximize the effectiveness of sports science education.

Keywords: sport sciences, pedagogical techniques, health and physical education, problem-based learning, student engagement

Procedia PDF Downloads 28
970 Evaluation of the Physico-Chemical and Microbial Properties of the Compost Leachate (CL) to Assess Its Role in the Bioremediation of Polyaromatic Hydrocarbons (PAHs)

Authors: Omaima A. Sharaf, Tarek A. Moussa, Said M. Badr El-Din, H. Moawad

Abstract:

Background: Polycyclic aromatic hydrocarbons (PAHs) pose great environmental and human health concerns because of their widespread occurrence, persistence, and carcinogenic properties. Releases of PAHs to the wider environment due to anthropogenic activities have led to higher concentrations of these contaminants than would be expected from natural processes alone. This may result in a wide range of environmental problems that can accumulate in agricultural ecosystems, threatening sustainable agricultural development. Thus, this study aimed to evaluate the physico-chemical and microbial properties of compost leachate (CL) to assess its role as a nutrient and microbial source (biostimulation/bioaugmentation) for developing a cost-effective bioremediation technology for PAH-contaminated sites. Material and Methods: PAH-degrading bacteria were isolated from CL that was collected from a composting site located in central Scotland, UK. Isolation was carried out by enrichment using phenanthrene (PHR), pyrene (PYR) and benzo(a)pyrene (BaP) as the sole sources of carbon and energy. The isolates were characterized using a variety of phenotypic and molecular properties. Six different isolates were identified based on differences in morphological and biochemical tests. The efficiency of these isolates in PAH utilization was assessed. Further analysis was performed to define the taxonomic status and phylogenetic relations between the most potent PAH-utilizing bacterial strains and other standard strains, using a molecular approach based on partial 16S rDNA gene sequence analysis. Results indicated that the 16S rDNA sequence analysis confirmed the results of the biochemical identification, as both assigned the isolates to Bacillus licheniformis, Pseudomonas aeruginosa, Alcaligenes faecalis, Serratia marcescens, Enterobacter cloacae and Providencia, which were identified as the prominent PAH-utilizers isolated from CL. Conclusion: This study indicates that the CL samples contain a diverse population of PAH-degrading bacteria and that the use of CL may have potential for the bioremediation of PAH-contaminated sites.

Keywords: polycyclic aromatic hydrocarbons, physico-chemical analyses, compost leachate, microbial and biochemical analyses, phylogenetic relations, 16S rDNA sequence analysis

Procedia PDF Downloads 266
969 What Are the Problems in the Case of Analysis of Selenium by Inductively Coupled Plasma Mass Spectrometry in Food and Food Raw Materials?

Authors: Béla Kovács, Éva Bódi, Farzaneh Garousi, Szilvia Várallyay, Dávid Andrási

Abstract:

For the analysis of elements in different food, feed and food raw material samples, generally a flame atomic absorption spectrometer (FAAS), a graphite furnace atomic absorption spectrometer (GF-AAS), an inductively coupled plasma optical emission spectrometer (ICP-OES) or an inductively coupled plasma mass spectrometer (ICP-MS) is applied. All of these analytical instruments suffer from different physical and chemical interfering effects when analysing food and food raw material samples. The smaller the concentration of an analyte and the larger the concentration of the matrix, the larger the interfering effects. Nowadays, it is very important to analyse ever smaller concentrations of elements. Of the above analytical instruments, the inductively coupled plasma mass spectrometer is generally capable of analysing the smallest concentrations of elements. The applied ICP-MS instrument also has Collision Cell Technology (CCT). Using CCT mode, certain elements have detection limits better by 1-3 orders of magnitude compared to a normal ICP-MS analytical method. The CCT mode has better detection limits mainly for the analysis of selenium (as well as arsenic, germanium, vanadium, and chromium). To elaborate an analytical method for selenium with an inductively coupled plasma mass spectrometer, the most important interfering effects (problems) were evaluated: 1) isobaric elemental, 2) isobaric molecular, and 3) physical interferences. Analysing food and food raw material samples, another (new) interfering effect emerged in ICP-MS, namely the effect of various matrices having different evaporation and nebulization effectiveness, as well as the different carbon contents of food, feed and food raw material samples. In our research work, the effects of different water-soluble compounds and of various carbon contents (as sample matrix) on changes in the intensity of selenium were examined. So finally we could find 'opportunities' to decrease the error of selenium analysis. To analyse selenium in food, feed and food raw material samples, the most appropriate inductively coupled plasma mass spectrometer is a quadrupole instrument applying a collision cell technique (CCT). The extent of the interfering effect of carbon content depends on the type of compound. The carbon content significantly affects the measured concentrations (intensities) of Se, which can be corrected using an internal standard (arsenic or tellurium).
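
The internal-standard correction mentioned at the end works by ratioing the analyte signal to that of a spiked element assumed to suffer the same matrix effect. In the usual form (symbols introduced here for illustration), the corrected selenium concentration is

```latex
C_{\mathrm{Se}}^{\mathrm{corr}} =
C_{\mathrm{Se}}^{\mathrm{meas}} \cdot
\frac{I_{\mathrm{IS}}^{\mathrm{expected}}}{I_{\mathrm{IS}}^{\mathrm{measured}}},
```

where $I_{\mathrm{IS}}$ is the intensity of the internal standard (As or Te here), and the ratio compensates for the signal suppression or enhancement introduced by the carbon-containing matrix.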

Keywords: selenium, ICP-MS, food, food raw material

Procedia PDF Downloads 508
968 Ultrasonic Micro Injection Molding: Manufacturing of Micro Plates of Biomaterials

Authors: Ariadna Manresa, Ines Ferrer

Abstract:

Introduction: The ultrasonic moulding process (USM) is a recent injection technology used to manufacture micro components. It is able to melt small amounts of material, so the waste of material is considerably reduced compared to micro injection molding. This is an important advantage when the materials are expensive, as medical biopolymers are. Micro-scaled components are involved in a variety of uses, such as biomedical applications. Replication fidelity is required, so it is important to stabilize the process and minimize the variability of the responses. The aim of this research is to investigate the influence of the main process parameters on the filling behaviour, the dimensional accuracy and the cavity pressure when a micro-plate is manufactured from biomaterials such as PLA and PCL. Methodology or Experimental Procedure: The specimens are manufactured using a Sonorus 1G Ultrasound Micro Molding Machine. The geometry used is a rectangular micro-plate of 15x5 mm with 1 mm thickness. The materials used for the investigation are PLA and PCL, chosen for their biocompatibility and degradation properties. The experimentation is divided into two phases. In Phase 1, the influence of the process parameters (vibration amplitude, sonotrode velocity, ultrasound time and compaction force) on the filling behavior is analysed. In Phase 2, once filling of the cavity is assured, the influence of both cooling time and compaction force on the cavity pressure, part temperature and dimensional accuracy is investigated. Results and Discussion: Filling behavior depends on sonotrode velocity and vibration amplitude. When the ultrasonic time is longer, more ultrasonic energy is applied and the polymer temperature increases. Depending on the cooling time, it is possible that the micro-plate is still too warm when the mold is opened. Consequently, the polymer releases its stored internal energy (ultrasonic and thermal) by expanding in the easiest direction. This fact is reflected in the dimensional accuracy, producing micro-plates thicker than the mold cavity. It has also been observed that the most important factor affecting cavity pressure is the compaction configuration during the manufacturing cycle. Conclusions: This research demonstrated the influence of the process parameters on the final manufactured micro-plate. Future work will focus on manufacturing other geometries and analysing the mechanical properties of the specimens.

Keywords: biomaterial, biopolymer, micro injection molding, ultrasound

Procedia PDF Downloads 284
967 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study

Authors: D. M. Samartsev, A. G. Copping

Abstract:

As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often personal and lacking in objective figures and measurements. This results in confusion and barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. A reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework: it takes on the aspects of a systematic review, but compressed in time, to allow an initial set of data to verify the validity of the framework. Such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches, it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serves the dual purposes of providing the framework with validation and giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process had been automated in some practical fashion at the time of writing, with the rate of progress slowly increasing over the years and the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
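
The data structure described here, a recursive task tree whose leaves carry automation milestones, can be sketched in a few lines. The example below is a hypothetical illustration: the task names, milestone scale, and equal weighting are all invented, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in the work-breakdown tree of the design process."""
    name: str
    milestone: float = 0.0            # leaf: automation progress, 0.0 (manual) .. 1.0 (fully automated)
    subtasks: list["Task"] = field(default_factory=list)

    def automation(self) -> float:
        """Roll up automation recursively; subtasks are weighted equally here."""
        if not self.subtasks:
            return self.milestone
        return sum(t.automation() for t in self.subtasks) / len(self.subtasks)

# Invented breakdown of one slice of the design process.
design = Task("architectural design", subtasks=[
    Task("brief analysis", milestone=0.2),
    Task("concept design", subtasks=[
        Task("massing studies", milestone=0.6),     # e.g. generative design tools
        Task("sketching", milestone=0.1),
    ]),
    Task("documentation", subtasks=[
        Task("drawing production", milestone=0.7),  # e.g. BIM-driven output
        Task("specification writing", milestone=0.3),
    ]),
])

print(f"estimated automation: {design.automation():.0%}")
```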

Keywords: analysis, architecture, automation, design process, technology

Procedia PDF Downloads 105
966 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving

Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco

Abstract:

Augmented reality promises to be present in future driving; its immersive technology makes it possible to show directions and maps and to identify important places, indicating with graphic elements when the car driver requires the information. On the other side, driving is considered a multitasking activity and, for some people, a complex activity where different situations commonly occur that require the immediate attention of the car driver to make decisions that contribute to avoiding accidents; therefore, the main aim of the project is the instrumentation of a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices to detect the level of attention in drivers, since it is important to know the effect they produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO and EMG Myoware are combined in the driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, obtaining driver interaction in real mode, while an MSP430 microcontroller handles the acquisition of data for storage. The sensors provide a continuous analog signal in time that needs signal conditioning; a signal amplifier is incorporated because the acquired signals have a sensitive range of 1.25 mm/mV, together with filtering, which eliminates frequency bands of the signal so that it is interpretable and free of noise, before the signal is converted from analog to digital in order to analyze the physiological signals of the drivers; these values are stored in a database. Based on this compilation, we work on the extraction of signal features and implement k-NN (k-nearest neighbor) and decision tree classification methods (supervised learning) that enable the study of the data for the identification of patterns and the determination, by classification, of different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving.
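
The classification stage described here amounts to fitting k-NN and a decision tree on windowed signal features. The sketch below (synthetic per-window features and labels, invented for illustration; not the project's data or feature set) shows the shape of that pipeline with scikit-learn:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in: per-window statistics of EEG/ECG/EMG channels
# (means, variances, dominant frequencies; all invented for illustration).
n_windows = 600
X = rng.normal(size=(n_windows, 12))
# Label: 1 = driving with the AR device active, 0 = without (synthetic).
y = (0.9 * X[:, 0] - 0.7 * X[:, 5] + rng.normal(0, 1, n_windows) > 0).astype(int)

for clf in (KNeighborsClassifier(n_neighbors=5),
            DecisionTreeClassifier(max_depth=5, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{type(clf).__name__}: mean CV accuracy = {scores.mean():.3f}")
```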

Keywords: augmented reality, driving, physiological signals, test platform

Procedia PDF Downloads 142
965 Performance of HVOF Sprayed Ni-20Cr and Cr₃C₂-NiCr Coatings on Fe-Based Superalloy in an Actual Industrial Environment of a Coal-Fired Boiler

Authors: Tejinder Singh Sidhu

Abstract:

Hot corrosion has been recognized as a severe problem in steam-powered electricity generation plants and industrial waste incinerators, as it consumes the material at an unpredictably rapid rate. Consequently, the load-carrying ability of the components reduces quickly, eventually leading to catastrophic failure. The inability to either totally prevent hot corrosion or at least detect it at an early stage has resulted in several accidents, leading to loss of life and/or destruction of infrastructure. A number of countermeasures are currently in use or under investigation to combat hot corrosion, such as using inhibitors, controlling the process parameters, designing suitable industrial alloys, and depositing protective coatings. However, the protection system to be selected for a particular application must be practical, reliable, and economically viable. Due to the continuously rising cost of materials as well as increased material requirements, coating techniques have been given much more importance in recent times. Coatings can add value to products of up to 10 times the cost of the coating. Among the different coating techniques, thermal spraying has grown into a well-accepted industrial technology for applying overlay coatings onto the surfaces of engineering components to allow them to function under extreme conditions of wear, erosion-corrosion, high-temperature oxidation, and hot corrosion. In this study, the hot corrosion performances of Ni-20Cr and Cr₃C₂-NiCr coatings developed by the High Velocity Oxy-Fuel (HVOF) process have been studied. The coatings were developed on a Fe-based superalloy, and experiments were performed in the actual industrial environment of a coal-fired boiler. The cyclic study was carried out around the platen superheater zone, where the temperature was around 1000°C. The study was conducted for 10 cycles, each consisting of 100 hours of heating followed by 1 hour of cooling at ambient temperature. Both coatings imparted better hot corrosion resistance to the Fe-based superalloy than the uncoated alloy showed. The Ni-20Cr coated superalloy performed better than the Cr₃C₂-NiCr coated one under the actual working conditions of the coal-fired boiler. It was found that the formation of chromium oxide at the boundaries of Ni-rich splats of the coating blocks the inward permeation of oxygen and other corrosive species to the substrate.

Keywords: hot corrosion, coating, HVOF, oxidation

Procedia PDF Downloads 85
964 Pregnancy and Birth Outcomes of Single versus Multiple Embryo Transfer in Gestational Surrogacy Arrangements: A Systematic Review

Authors: Jutharat Attawet, Alex Y. Wang, Cindy M. Farquhar, Elizabeth A. Sullivan

Abstract:

Background: Adverse maternal and perinatal outcomes of multiple pregnancies resulting from multiple embryo transfers (ET) have become a significant concern. This is particularly relevant for gestational carriers, since they usually do not have infertility issues. Single embryo transfer (SET) has therefore been encouraged in assisted reproductive technology (ART) practice in order to reduce multiple pregnancies. Objectives: This systematic review aims to investigate the pregnancy and birth outcomes of SET and multiple ET in surrogacy arrangements. Search methods: Electronic databases including CINAHL, Medline, Embase, Scopus and ProQuest were searched for studies from 1980 to 2017. Cross-references and national ART reports were also searched manually. Articles were accessed without restriction on language or study type. Carrier cycles involving SET and multiple ET were identified in the database search. The main outcome measures, including clinical pregnancy, live delivery and multiple deliveries per gestational carrier cycle, were compared between SET and multiple ET. Mantel-Haenszel risk ratios (RRs) with 95% confidence intervals (CIs) were calculated using RevMan 5.3 from the numbers of outcome events in the SET and multiple ET arms of each study. Outcomes: The search returned 97 articles, of which 5 met the inclusion criteria. In approximately 50% of carrier cycles a single embryo was transferred, and in 50% more than one embryo was transferred. The clinical pregnancy rate (CPR) was 39% for SET and 53% for multiple ET, which was not significantly different, with RR = 0.83 (95% CI: 0.67-1.03). The live delivery rate was 33% for SET and 57% for multiple ET, which was not significantly different, with RR = 0.78 (95% CI: 0.61-1.00). The risk of multiple delivery per carrier was significantly lower in the SET carrier cycles (RR = 0.4, 95% CI: 0.01-0.26). There were 104 sets of twins (including one set selectively reduced from triplets to twins) and 1 set of triplets in the multiple ET carrier cycles. In the SET carrier cycles, there were 2 sets of twins. Significance of the study: SET should be advocated among surrogate carriers to prevent multiple pregnancies and subsequent adverse outcomes for both carrier and baby. Surrogacy practice should be reviewed, and surrogate carriers should be fully informed of the risk of adverse maternal and birth outcomes of multiple pregnancies due to multiple embryo transfers.
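
The headline effect measures are risk ratios. The short sketch below shows, for a single study with made-up counts (not the review's pooled Mantel-Haenszel estimate, which RevMan computes across studies), how an RR and its 95% CI follow from event counts:

```python
import math

def risk_ratio(events_a, n_a, events_b, n_b):
    """RR of group A vs. group B with a 95% CI from the log-RR standard error."""
    rr = (events_a / n_a) / (events_b / n_b)
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo, hi = (math.exp(math.log(rr) + z * se) for z in (-1.96, 1.96))
    return rr, lo, hi

# Made-up counts: multiple deliveries among SET vs. multiple-ET carrier cycles.
rr, lo, hi = risk_ratio(events_a=2, n_a=100, events_b=40, n_b=100)
print(f"RR = {rr:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```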

Keywords: assisted reproduction, birth outcomes, carrier, gestational surrogacy, multiple embryo transfer, multiple pregnancy, pregnancy outcomes, single embryo transfer, surrogate mother, systematic review

Procedia PDF Downloads 404
963 Effects of Using a Recurrent Adverse Drug Reaction Prevention Program on Safe Use of Medicine among Patients Receiving Services at the Accident and Emergency Department of Songkhla Hospital Thailand

Authors: Thippharat Wongsilarat, Parichat Tuntilanon, Chonlakan Prataksitorn

Abstract:

Background: Recurrent adverse drug reactions are harmful to patients, causing illness ranging from mild to fatal, and affect not only patients but also their relatives and organizations. Objective: To compare the safe use of medicine among patients before and after using the recurrent adverse drug reaction prevention program. Methods: This was quasi-experimental research with a target population of 598 patients with a drug allergy history. Data were collected through an observation form tested for its validity by three experts (IOC = 0.87) and analyzed with descriptive statistics (percentage). The research was conducted jointly with a multidisciplinary team to analyze and determine the weak points and strong points in the recurrent adverse drug reaction prevention system during the past three years, in which 546, 329, and 498 incidences, respectively, were found. Of these, 379, 279, and 302 incidences, or 69.4, 84.80, and 60.64 percent of the patients with drug allergy history, respectively, were found to have been caused by an incomplete warning system. In addition, differences in practice in caring for patients with drug allergy history were found that did not cover all the steps of the patient care process, especially a lack of repeated checking and a lack of communication between the multidisciplinary team members. Therefore, the recurrent adverse drug reaction prevention program was developed with complete warning points in the information technology system, a repeated checking step, and communication among the related multidisciplinary team members, starting from the hospital identity card room, patient history recording officers, nurses, physicians who prescribe the drugs, and pharmacists. Also included in the system were surveillance, nursing, recording, and linking the data to referring units. There were also training sessions concerning adverse drug reactions given by pharmacists, monthly meetings to explain the process to practice personnel, creating a safety culture, random checking of practice, motivational encouragement, supervising, controlling, following up, and evaluating the practice. Results: The rate of prescribing drugs to which patients were allergic was 0.08 per 1,000 prescriptions, and the incidence rate of recurrent drug reactions was 0 per 1,000 prescriptions. Conclusion: Surveillance of recurrent adverse drug reactions covering all service-providing points can ensure the safe use of medicine for patients.

Keywords: recurrent drug, adverse reaction, safety, use of medicine

Procedia PDF Downloads 457
962 Professional Development in EFL Classroom: Motivation and Reflection

Authors: Iman Jabbar

Abstract:

Within the scope of professionalism, and in order to compete in the modern world, teachers are expected to develop their teaching skills and activities in addition to their professional knowledge. At the college level, the teacher should be able to face classroom challenges through engagement with the learning situation in order to understand the students and their needs. In our field of TESOL, the role of the English teacher is no longer restricted to teaching English texts; rather, the teacher should endeavor to enhance students' skills such as communication and critical analysis. Within the literature on professionalism, there are certain strategies and tools that an English teacher should adopt to develop his or her competence and performance. Reflective practice, which is an exploratory process, is one of these strategies. Another strategy contributing to classroom development is motivation. It is crucial in students' learning, as it affects the quality of learning English in the classroom, in addition to determining success or failure as well as language achievement. This is a qualitative study grounded in the interpretive perspectives of teachers and students regarding the process of professional development. This study aims at (a) understanding how teachers at the college level conceptualize reflective practice and motivation inside the EFL classroom, and (b) exploring the methods and strategies that they implement to practice reflection and motivation. The study is based on two questions: 1. How do EFL teachers perceive and view reflection and motivation in relation to their teaching and professional development? 2. How can reflective practice and motivation be developed into practical strategies and actions in EFL teachers' professional context? The study is organized into two parts, theoretical and practical. The theoretical part reviews the literature on the concepts of reflective practice and motivation in relation to professional development by providing definitions, theoretical models, and strategies. The practical part draws on the theoretical one; however, it is the core of the study, since it covers the research design, methodology, and methods of data collection, sampling, and data analysis. It ends with an overall discussion of findings and the researcher's reflections on the investigated topic. In terms of significance, the study is intended to contribute to the field of TESOL at the academic level through the selection of the topic and its investigation from theoretical and practical perspectives. Professional development is the path that leads to enhancing the quality of teaching English as a foreign or second language in a way that suits the modern trends of globalization and advanced technology.

Keywords: professional development, motivation, reflection, learning

Procedia PDF Downloads 452
961 STEAM and Project-Based Learning: Equipping Young Women with 21st Century Skills

Authors: Sonia Saddiqui, Maya Marcus

Abstract:

UTS STEAMpunk Girls is an educational program for young women (aged 12-16) that aims to empower them to be more informed and active members of the 21st century workforce. With the number of STEM graduates on the decline, especially among young women, an additional aim of the program is to trial a STEAM (Science, Technology, Engineering, Arts/Humanities/Social Sciences, Mathematics), inter-disciplinary approach to improving STEM engagement. In line with UNESCO's recent focus on promoting 'transversal competencies' in future graduates, the program utilised co-design, project-based learning, entrepreneurial processes, and inter-disciplinary learning. The program consists of two phases. Taking a participatory design approach, the first phase (co-design workshops) provided valuable insight into student perspectives on engaging young women in STEM and inter-disciplinary thinking. The workshops positioned 26 young women from three schools as subject matter experts (SMEs), providing a platform for them to share their opinions, experiences and findings around the STEAM disciplines. The second (pilot) phase put the co-design phase findings into practice, with 64 students from four schools working in groups to articulate problems with real-world implications and utilising design thinking to solve them. The pilot phase utilised project-based learning to engage young women in entrepreneurial and STEAM frameworks and processes. Scalable program design and educational resources were trialed to determine appropriate mechanisms for engaging young women in STEM and in STEAM thinking. Across both phases, data were collected via longitudinal surveys to obtain pre-program baseline attitudinal information and compare it against post-program responses. Preliminary findings revealed students' improved understanding of the STEM disciplines, industries and professions, improved awareness of STEAM as a concept, and improved understanding of inter-disciplinary and design thinking. Program outcomes will be of interest to high-school educators in both STEM and the Arts, Humanities and Social Sciences fields, and will hopefully inform future programmatic approaches to introducing inter-disciplinary STEAM learning into STEM curricula.

Keywords: co-design, STEM, STEAM, project-based learning, inter-disciplinary

Procedia PDF Downloads 199
960 Preparation and CO2 Permeation Properties of Carbonate-Ceramic Dual-Phase Membranes

Authors: H. Ishii, S. Araki, H. Yamamoto

Abstract:

In recent years, carbon dioxide (CO2) separation technology has been required both to reduce emissions of global warming gases and to use fossil fuels more efficiently. Since CO2 accounts for the largest part of greenhouse gas emissions, it is considered to have the greatest influence on global warming. Therefore, we need to establish CO2 separation technologies with high efficiency at low cost. In this study, we focused on membrane separation in comparison with conventional separation techniques such as distillation or cryogenic separation. We prepared carbonate-ceramic dual-phase membranes to separate CO2 at high temperature. As porous ceramic substrates, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+σ, La0.6Sr0.4Ti0.3Fe0.7O3 and Ca0.8Sr0.2Ti0.7Fe0.3O3-α (PLNCG, LSTF and CSTF) were examined. PLNCG, LSTF and CSTF have the perovskite structure, which is highly stable and becomes ion-conducting when doped with another metal ion; these materials therefore combine high stability with high oxygen ion diffusivity. PLNCG, LSTF and CSTF powders were prepared by a solid-phase process using the appropriate carbonates or oxides. To prepare the porous substrates, these powders were mixed with carbon black (20 wt%) and a few drops of polyvinyl alcohol (5 wt%) aqueous solution. The powder mixture was packed into a stainless steel mold (13 mm) and uniaxially pressed into disk shape under a pressure of 20 MPa for 1 minute. The PLNCG, LSTF and CSTF disks were calcined in air for 6 h at 1473, 1573 and 1473 K, respectively. The carbonate mixture (Li2CO3/Na2CO3/K2CO3: 42.5/32.5/25 in mole percent ratio) was placed inside a crucible and heated to 793 K, and the porous substrates were infiltrated with the molten carbonate mixture at this temperature. Crystalline structures of the fresh membranes and of the membranes after infiltration with the molten carbonate mixture were determined by X-ray diffraction (XRD). We confirmed that the crystal structures of PLNCG and CSTF changed slightly after infiltration with the molten carbonate mixture. CO2 permeation experiments with the PLNCG-carbonate, LSTF-carbonate and CSTF-carbonate membranes were carried out at 773-1173 K. A gas mixture of CO2 (20 mol%) and He was introduced at a flow rate of 50 ml/min to one side of the membrane, and the permeated CO2 was swept by N2 (50 ml/min). We thus confirmed the effect of ceramic material and temperature on CO2 permeation at high temperature.

Keywords: membrane, perovskite structure, dual-phase, carbonate

Procedia PDF Downloads 367
959 Architectural Identity in Manifestation of Tall-buildings' Design

Authors: Huda Arshadlamphon

Abstract:

Advancing frontiers of technology and industry are moving rapidly, influenced by economic and political phenomena. One vital phenomenon, which has consolidated the world into a single village, is globalization. In response, architecture and the built environment have faced numerous changes, adjustments, and developments. Tall buildings, as a product of globalization, represent prestigious icons, symbols, and landmarks for economically advanced countries. Despite this, the trend has encountered several design challenges in incorporating the architectural identity, traditions, and characteristics that enhance the built environment's sociocultural values. These values and traditions sustain a self-standing identity, leading to visual and spatial creativity, independence, and individuality; in other words, they maintain the inherited identity and avoid replication in all means and aspects. This paper, firstly, defines the phenomenon of globalization, architectural identity, and the concerns of sociocultural values in relation to the traditional characteristics of the built environment. Secondly, it discusses design strategies and methodologies for accommodating architectural identity and characteristics in tall buildings through three case studies located in Jeddah city, Saudi Arabia: the Queen's Building, the National Commercial Bank Building (NCB), and the Islamic Development Bank Building. The case studies highlight the buildings' sites and surroundings, concepts and inspirations, design elements, architectural forms and compositions, characteristics, and the issues, barriers, and constraints facing the design decisions, the representation of facades, and the selection of materials and colors. Furthermore, the research briefly elucidates the dominant factors that shape the architectural identity of Jeddah city. In conclusion, the study sets out a guideline of four design standards for preserving and developing architectural identity in tall buildings in Jeddah city: the scale of the urban and natural environment, the scale of architectural design elements, the integration of visual images, and the creation of spatial scenes and scenarios. The proposed guideline will encourage the development of an architectural identity aligned with the demands of the zeitgeist, support the contemporary architectural movement toward tall buildings, and strengthen the representation of the sociocultural values and traditions of the built environment.

Keywords: architectural identity, built-environment, globalization, sociocultural values and traditions, tall-buildings

Procedia PDF Downloads 164
958 An Investigation into the Influence of Compression on 3D Woven Preform Thickness and Architecture

Authors: Calvin Ralph, Edward Archer, Alistair McIlhagger

Abstract:

3D woven textile composites continue to emerge as an advanced material for structural applications and composite manufacture due to their bespoke nature, through-thickness reinforcement, and near-net-shape capabilities. When 3D woven preforms are produced, they are in their optimal physical state. Because 3D weaving is a dry preforming technology, it relies on compression of the preform to achieve the desired composite thickness, fibre volume fraction (Vf), and consolidation. This compression of the preform during manufacture results in changes to its thickness and architecture, which can often lead to under-performance or unintended changes in the 3D woven composite. Unlike traditional 2D fabrics, the bespoke nature and variability of 3D woven architectures make it difficult to know exactly how each 3D preform will behave during processing. Therefore, the focus of this study is to investigate the effect of compression on differing 3D woven architectures in terms of structure, crimp or fibre waviness, and thickness, as well as to analyse the accuracy of available software in predicting how 3D woven preforms behave under compression. To achieve this, 3D preforms were modelled and their compression simulated in WiseTex with varying architectures of binder style, pick density, thickness, and tow size. These architectures were then woven, and samples were dry compression tested to determine the compressibility of the preforms under various pressures. Additional preform samples were manufactured using Resin Transfer Moulding (RTM) with varying compressive force. Composite samples were cross-sectioned, polished, and analysed using microscopy to investigate changes in architecture and crimp. Data from the dry fabric compression and composite samples were then compared alongside the WiseTex models to determine the accuracy of the prediction and to identify architecture parameters that affect preform compressibility and stability. Results indicate that binder style/pick density, tow size, and thickness have a significant effect on the compressibility of 3D woven preforms, with lower pick density allowing greater compression and distortion of the architecture. It was further highlighted that binder style combined with pressure had a significant effect on changes to preform architecture: orthogonal binders experienced the highest level of deformation, but the highest overall stability, under compression, while layer-to-layer binders showed a reduction in binder fibre crimp. In general, the simulations compared reasonably with the experimental results; however, deviations are evident due to assumptions present within the models.
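
To make the thickness-compression relationship concrete, the following minimal sketch (illustrative assumptions, not the authors' data) shows how fibre volume fraction rises as a preform of fixed areal density is compressed to a lower thickness.

```python
# Minimal sketch (hypothetical figures, not from the paper): fibre volume
# fraction of a 3D woven preform as mould compression reduces its thickness.
# Vf = areal density / (fibre density * thickness).

def fibre_volume_fraction(areal_density_g_m2: float,
                          fibre_density_g_cm3: float,
                          thickness_mm: float) -> float:
    """Return Vf as a fraction (g/m^2, g/cm^3 and mm units assumed)."""
    return areal_density_g_m2 / (fibre_density_g_cm3 * 1000.0 * thickness_mm)

# Assumed preform: 4000 g/m^2 of E-glass (2.58 g/cm^3), compressed in steps.
for t in (3.5, 3.0, 2.5):   # preform thickness under increasing pressure, mm
    print(f"t = {t} mm -> Vf = {fibre_volume_fraction(4000, 2.58, t):.2f}")
```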

Keywords: 3D woven composites, compression, preforms, textile composites

Procedia PDF Downloads 136
957 Optimal Allocation of Battery Energy Storage Considering Stiffness Constraints

Authors: Felipe Riveros, Ricardo Alvarez, Claudia Rahmann, Rodrigo Moreno

Abstract:

Around the world, many countries have committed to the decarbonization of their electricity systems. Under this global drive, converter-interfaced generators (CIG), such as wind and photovoltaic generation, appear as cornerstones for achieving these energy targets. Despite its benefits, increasing use of CIG brings several technical challenges in power systems, especially from a stability viewpoint; among the key differences are limited short-circuit current capacity, the inertia-less characteristic of CIG, and response times within the electromagnetic timescale. Alongside the integration of CIG into the power system, one enabling technology for the energy transition towards low-carbon power systems is the battery energy storage system (BESS). Because of the flexibility that BESS provides in power system operation, its integration allows the variability and uncertainty of renewable energies to be mitigated, thus optimizing the use of existing assets and reducing operational costs. BESS can also support power system stability by injecting reactive power during faults, providing short-circuit current, and delivering fast frequency response. However, most methodologies for sizing and allocating BESS in power systems are based on economic aspects and do not exploit the benefits that BESS can offer to system stability. In this context, this paper presents a methodology for determining the optimal allocation of BESS in weak power systems with high levels of CIG. Unlike traditional economic approaches, this methodology incorporates stability constraints into the allocation of BESS, aiming to mitigate instability issues arising from weak grid conditions with low short-circuit levels. The proposed methodology offers valuable insights for power system engineers and planners seeking to maintain grid stability while harnessing the benefits of renewable energy integration, and it is validated on a reduced model of the Chilean electrical system. The results show that integrating BESS under stability criteria into a power system with high levels of CIG contributes to decarbonizing and strengthening the network in a cost-effective way while sustaining system stability. This paper potentially lays the foundation for understanding the benefits of integrating BESS in electrical power systems and for coordinating their placement in future converter-dominated power systems.
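
A toy version of such a stability-constrained allocation can be written as a linear program. The sketch below is purely illustrative and is not the paper's formulation: the cost vector, short-circuit data, and the assumed linear sensitivity of short-circuit capacity to BESS size are all hypothetical.

```python
# Minimal sketch (illustrative only): allocate BESS capacity at candidate
# sites at minimum cost while enforcing a short-circuit-ratio (SCR) floor at
# each CIG bus. The linear model scc_extra = K @ x is an assumption.
import numpy as np
from scipy.optimize import linprog

cost = np.array([1.0, 1.2, 0.9])        # relative cost per MW of BESS, 3 sites
scc0 = np.array([300.0, 250.0])         # existing short-circuit capacity, MVA
p_cig = np.array([120.0, 110.0])        # CIG installed at each monitored bus, MW
scr_min = 3.0                           # required minimum SCR

# K[i, j]: assumed MVA of fault-current support bus i gains per MW of BESS at j.
K = np.array([[0.8, 0.3, 0.1],
              [0.2, 0.7, 0.5]])

# Require scc0 + K @ x >= scr_min * p_cig, i.e. -K @ x <= scc0 - scr_min * p_cig.
res = linprog(c=cost,
              A_ub=-K,
              b_ub=scc0 - scr_min * p_cig,
              bounds=[(0, 100)] * 3)     # up to 100 MW of BESS per site

print(res.x)   # cheapest MW per site that meets the SCR floor at every bus
```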

Keywords: battery energy storage, power system stability, system strength, weak power system

Procedia PDF Downloads 61
956 Suggestions to the Legislation about Medical Ethics and Ethics Review in the Age of Medical Artificial Intelligence

Authors: Xiaoyu Sun

Abstract:

In recent years, the rapid development of artificial intelligence (AI) has greatly advanced medicine, pharmaceuticals, and other related fields, and the medical research and development of AI by scientific and commercial organizations is on a fast track. Ethics review is one of the critical procedures of registration for getting products approved and launched; however, the current SOPs for ethics review are not sufficient to guide the healthy and rapid development of AI in healthcare in China. The Ethical Review Measures for Biomedical Research Involving Human Beings were enacted by the National Health Commission of the People's Republic of China (NHC) on December 1st, 2016, but from a legislative design perspective they were neither updated in a timely manner nor in line with international trends in AI development. It was therefore a welcome step that the NHC published a consultation paper on an updated version on March 16th, 2021. Based on the most recent laws and regulations in the United States and the EU, and on in-depth interviews with 11 subject matter experts in China (including lawmakers, regulators, key members of ethics review committees, heads of Regulatory Affairs in the SaMD industry, and data scientists), several suggestions are proposed on top of the updated version. Although the new version indicates that ethics review committees are to be created at the national, provincial, and individual-institution levels, the review authority of each level is not clarified. The suggestion is that the precise scope of review authority for each level be defined on the basis of a risk analysis and management model: complicated leading-edge technologies such as gene editing should be reviewed by the national ethics review committee, while reviewing and approving lower-risk clinical studies, such as one of an innovative cream to treat acne, would be the job of an individual institution's ethics review committee. Furthermore, to standardize the research and development of AI in healthcare, clearer guidance should be given on data security at the data, algorithm, and application layers of the ethics review process. In addition, transparency and responsibility, two of the six principles in the Rome Call for AI Ethics, could be further strengthened in the updated version. Managing and developing AI well so that it benefits human beings is a goal shared among all countries; by learning from other countries with more experience, China could become one of the most advanced countries in AI in healthcare.

Keywords: biomedical research involving human beings, data security, ethics committees, ethical review, medical artificial intelligence

Procedia PDF Downloads 168
955 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding and delivery of healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. Using advanced analytical approaches, the field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories. Predictive analysis using historical patient data is a major area of interest: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning for every person. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies, so treatments can be more effective and have fewer negative consequences. Beyond helping patients, data mining improves the efficiency of hospitals, for example by helping them determine how many beds or doctors they will require for the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks were used for predicting diseases and analyzing medical images; patients were grouped with algorithms such as k-means; connections between treatments and patient responses were identified by association rule mining; and time series techniques supported resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
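
A minimal sketch of the kind of pipeline the abstract describes, using synthetic stand-in data rather than the project's records, might look as follows: a random-forest disease predictor evaluated by AUC, plus k-means grouping of patients into cohorts.

```python
# Minimal sketch (synthetic data, not the project's EHRs): disease prediction
# with a random forest and patient cohorting with k-means.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.metrics import roc_auc_score

# Stand-in for EHR features (age, labs, vitals, ...) and a disease label.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

# Group patients into cohorts, e.g. to plan staffing or targeted follow-up.
cohorts = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("patients per cohort:", np.bincount(cohorts))
```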

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 78
954 Study on the Post-Traumatic Stress Disorder and Its Psycho-Social-Genetic Risk Factors among Tibetan Adolescents in Heavily-Hit Area Three Years after Yushu Earthquake in Qinghai Province, China

Authors: Xiaolian Jiang, Dongling Liu, Kun Liu

Abstract:

Aims: To examine the prevalence of post-traumatic stress disorder (PTSD) symptoms among Tibetan adolescents in a heavily-hit disaster area three years after the Yushu earthquake, and to explore the interactions of its psycho-social-genetic risk factors. Methods: This was a three-stage study. Firstly, demographic variables, the PTSD Checklist-Civilian Version (PCL-C), the Internality, Powerful Others, and Chance Scale (IPC), the Coping Style Scale (CSS), and the Social Support Appraisal (SSA) were used to explore the psychosocial factors associated with PTSD symptoms among adolescent survivors. The PCL-C was used to screen for PTSD symptoms among 4072 Tibetan adolescents, and the Structured Clinical Interview for DSM-IV Disorders (SCID) was used by psychiatrists to make precise diagnoses. Secondly, a case-control trial was used to explore the relationship between PTSD and gene polymorphisms: 287 adolescents diagnosed with PTSD were recruited into the study group, and 280 adolescents without PTSD into the control group. Polymerase chain reaction-restriction fragment length polymorphism technology (PCR-RFLP) was used to test gene polymorphisms. Thirdly, SPSS 22.0 was used to explore the interactions of the psycho-social-genetic risk factors of PTSD on the basis of the above results. Results and conclusions: 1. The prevalence of PTSD was 9.70%. 2. The predictive psychosocial factors of PTSD included earthquake exposure, support from others, imagination, abreaction, tolerance, powerful others, and family support. 3. Synergistic interactions were found between the A1 allele of DRD2 TaqIA and an external locus of control, a negative coping style, and severe earthquake exposure, and an antagonistic interaction was found between the A1 allele and poor social support. Synergistic interactions were also found between the A1/A1 genotype and an external locus of control and negative coping style; between the 12 allele of 5-HTTVNTR and an external locus of control, negative coping style, and severe earthquake exposure; and between the 12/12 genotype and those same three factors.
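
For readers unfamiliar with stage two, the basic case-control genotype comparison can be illustrated with a chi-square test of a contingency table; the counts below are hypothetical and are not the study's data.

```python
# Minimal sketch (hypothetical counts): comparing DRD2 TaqIA genotype
# frequencies between adolescents with PTSD (cases) and without (controls).
from scipy.stats import chi2_contingency

#          A1/A1  A1/A2  A2/A2
table = [[  70,   140,    77],   # PTSD cases   (n = 287)
         [  45,   130,   105]]   # controls     (n = 280)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```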

Keywords: adolescents, earthquake, PTSD, risk factors

Procedia PDF Downloads 153
953 Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy

Authors: Nyashadzashe Chiraga, Anthony Walker, Glen Bright

Abstract:

The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that can rapidly respond to the dynamic and volatile changes driven by the market, yet there is a gap in technology that would allow for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context about the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated by two measures: structural entropy, the expected amount of information needed to describe the scheduled states of a manufacturing system, and operational entropy, the amount of information describing the states of the manufacturing system that actually occur during manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration comprising a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell, and a quality control cell. The factory shop provides manufactured parts to a number of clients; there are substantial variations in part configurations, and new part designs are continually being introduced to the system. Based on the normal expected production schedule, schedule adherence was calculated from the structural entropy and operational entropy while varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time. For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule; the operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of a factory communication system increases the degree of adherence of a job to the expected schedule, and the performance of the downstream production flow, fed by the parallel upstream flow of information on the factory state, was increased.
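
The two entropy measures reduce to Shannon's formula H = -Σ p·log2(p) applied to a cell's state distribution. The sketch below, with hypothetical probabilities rather than values from the simulation, contrasts a scheduled (structural) distribution with an observed (operational) one.

```python
# Minimal sketch (hypothetical probabilities, not the paper's model):
# Shannon entropy of a cell's state distribution.
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete state distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

states = ["idle", "setup", "processing", "blocked"]
scheduled = [0.10, 0.10, 0.75, 0.05]   # structural: what the schedule expects
observed  = [0.20, 0.15, 0.45, 0.20]   # operational: what actually occurred

print(f"structural entropy:  {entropy(scheduled):.2f} bits")
print(f"operational entropy: {entropy(observed):.2f} bits")
# A larger operational entropy means more information must be communicated
# to keep the shop floor in control.
```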

Keywords: information entropy, communication in manufacturing, mass customisation, scheduling

Procedia PDF Downloads 247
952 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from human fatigue, which leads to unsteady walking steps, its field capacity is small: operators barely cover about 0.2 hectare per hour, and the small swath means that a sizeable farm takes several days to cover. Weather changes are erratic, and it is often desirable to spray a large farm within hours or a few days for an even, uniform effect and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid the re-emergence of weeds before crop emergence. Deploying many knapsack operators on large farms has not been successful: human error in keeping equally spaced swaths usually results in overdosing where swaths overlap and in unsprayed areas where they fail to meet. Spraying large farms requires boom equipment with a larger swath, which reduces swath-overlap error and allows spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, so farmers find it difficult to engage tractors for cultivation and would not consider employing a tractor boom sprayer. Animal traction is predominant in Nigerian farming, especially in the northern part of the country, so the development of boom sprayers drawn by work animals maximizes the utilization of animals in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an animal-drawn knapsack boom sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the appropriate-technology demands of the country. It is hoped that the introduction of this novel contrivance will enhance crop protection practice and lead to greater crop and food production in Nigeria.
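
The capacity argument can be made concrete with the standard effective-field-capacity formula; the swath, speed, and efficiency figures below are illustrative assumptions, chosen only to reproduce the approximate 0.2 ha/h quoted for a knapsack operator.

```python
# Minimal sketch (hypothetical figures): effective field capacity in ha/h,
# C = swath_m * speed_km_h * efficiency / 10, comparing a single knapsack
# operator with a four-nozzle boom of the kind described above.
def field_capacity_ha_h(swath_m, speed_km_h, efficiency):
    return swath_m * speed_km_h * efficiency / 10.0

# Knapsack: ~1 m swath at walking pace -> roughly the 0.2 ha/h quoted above.
print(field_capacity_ha_h(swath_m=1.0, speed_km_h=3.0, efficiency=0.65))
# Animal-drawn boom (assumed 4 m swath at ox walking speed):
print(field_capacity_ha_h(swath_m=4.0, speed_km_h=3.0, efficiency=0.65))
```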

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 283
951 Micro Plasma: An Emerging Technology to Eradicate Pesticides from Food Surfaces

Authors: Muhammad Saiful Islam Khan, Yun Ji Kim

Abstract:

Organophosphorus pesticides (OPPs) have been widely used to replace more persistent organochlorine pesticides because OPPs are more soluble in water and decompose rapidly in aquatic systems. The extensive use of OPPs in modern agriculture is the major cause of surface water contamination, and regardless of the advantages gained by the application of pesticides in modern agriculture, they are a threat to public health and the environment. With the aim of reducing possible health threats, several physical and chemical treatment processes have been studied to eliminate biological and chemical poisons from foodstuffs. In the present study, a micro-plasma device was used to reduce pesticides on the surface of foodstuffs. The pesticide-free food items chosen for this study were perilla leaf, tomato, broccoli, and blueberry. To evaluate the removal efficiency of pesticides, different washing methods were compared: soaking in water, washing with bubbling water, washing with plasma-treated water, and washing with chlorine water. 2 mL of 2000 ppm samples of two pesticides, diazinon and chlorpyrifos, were individually inoculated onto the food surfaces and air-dried for 2 hours before plasma treatment. Plasma-treated water was used in two different manners: plasma-treated water with bubbling, and aerosolized plasma-treated water. The removal efficiency of pesticides from the food surfaces was studied using HPLC. Washing with plasma-treated water, aerosolized plasma-treated water, or chlorine water for 4 minutes showed a minimum 72% to maximum 87% reduction, irrespective of the food item and the pesticide, whereas for soaking and bubbling the reduction was 8% to 48%. Washing with plasma-treated water, aerosolized plasma-treated water, and chlorine water thus showed broadly similar reduction ability, significantly higher than the soaking and bubbling washing systems. The temperature effect of the washing systems was also evaluated at 22°C, 10°C, and 4°C. Decreasing the temperature from 22°C to 10°C produced a higher reduction for washing with plasma-treated and aerosolized plasma-treated water, whereas the opposite trend was observed for washing with chlorine water. A further temperature reduction from 10°C to 4°C did not produce any significant additional reduction of pesticides, except for washing with chlorine water; chlorine water treatment showed less pesticide reduction as the temperature decreased. The color changes of the treated samples were measured immediately and after one week to evaluate whether washing with plasma-treated water or chlorine water had any effect. No significant color changes were observed for either washing system, except for broccoli washed with chlorine water.
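
The reported percentages follow from the usual removal-efficiency calculation on HPLC peak areas; the sketch below uses hypothetical areas, not the study's measurements.

```python
# Minimal sketch (hypothetical HPLC peak areas): percent removal of a pesticide
# from a washed sample relative to the inoculated, unwashed control,
# reduction % = (A_control - A_washed) / A_control * 100.
def percent_reduction(area_control: float, area_washed: float) -> float:
    return (area_control - area_washed) / area_control * 100.0

# e.g. chlorpyrifos on tomato after a 4 min plasma-treated-water wash:
print(f"{percent_reduction(area_control=1.52e6, area_washed=0.24e6):.1f} %")
```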

Keywords: chlorpyrifos, diazinon, pesticides, micro plasma

Procedia PDF Downloads 189