Search results for: strain specific markers
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9732


672 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management is concerned in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in a pool fire, a jet fire, or even an explosion when the release meets one of the many ignition sources present in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to act before a potential loss of containment occurs. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Prior research and studies show that accurately detecting loss of containment in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points along the pipeline to be accurate. Installing multiple adjacent PVT sensors along a pipeline is expensive and hence generally not viable from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results in predicting pipeline leakage. Mathematical modeling is used to generate simulation data, which in turn is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to reproduce experimental data with very high accuracy.
While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
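As a hedged illustration of the modeling-plus-learning concept described above (not the authors' actual pipeline), a classifier can be trained on simulated sensor features and then score new readings. The model choice, the feature names, and the Gaussian data below are all assumptions made for the sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Simulated (CFD-style) sensor features per pipeline segment:
# pressure drop, flow imbalance, temperature deviation.
X_normal = rng.normal([0.0, 0.0, 0.0], 1.0, size=(n, 3))
X_leak = rng.normal([2.0, 1.5, 0.5], 1.0, size=(n, 3))  # a leak shifts the signals
X = np.vstack([X_normal, X_leak])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = leak present

# Train on part of the simulated data, evaluate on the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

In a real system, the training rows would come from validated CFD simulations rather than Gaussian noise, and leak localization would additionally require per-segment labels.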

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 191
671 For Whom Is Legal Aid: A Critical Analysis of the State-Funded Legal Aid in Criminal Cases in Tajikistan

Authors: Umeda Junaydova

Abstract:

Legal aid is a key element of access to justice. According to the UN Principles and Guidelines on Access to Legal Aid in Criminal Justice Systems, member states bear the obligation to put in place accessible, effective, sustainable, and credible legal aid systems. In meeting this obligation, developing countries such as Tajikistan have faced challenges in financing such systems, and many developed nations have launched rule-of-law programs to support these states and ensure access to justice for all. Following independence from the Soviet Union, Tajikistan committed to introducing the rule of law and providing access to justice. The newly established country was weak, and the sudden outbreak of civil war aggravated the situation even more. The country needed external support and opened its doors to foreign donors to assist it on its way to development. In 2015, Tajikistan, with the financial support of development partners, established a state-funded legal aid system that provides legal assistance to vulnerable and marginalized populations, including in criminal cases. Initially, almost the entire system was financed from donor funds; since then, the government's contribution has gradually increased, and it currently covers 80% of the total budget. These government actions toward ensuring access to criminal legal aid for disadvantaged groups look promising; the reality, however, is quite different. Currently, not all disadvantaged people are covered by these services, and their cases are often considered without an appropriate defense, which leads to violations of fundamental human rights. This research presents a comprehensive exploration of the interplay between donor assistance and the effectiveness of legal aid services in Tajikistan, with a specific focus on criminal cases involving vulnerable groups, such as women and children.
In the context of Tajikistan, this study addresses a pressing concern: despite substantial financial support from international donors, state-funded legal aid services often fall short of meeting the needs of poor and vulnerable populations. The study delves into the underlying complexities of this issue and examines the structural, operational, and systemic challenges faced by legal aid providers, shedding light on the factors contributing to the ineffectiveness of legal aid services. Furthermore, it seeks to identify the root causes of these issues, revealing the barriers that hinder the delivery of adequate legal aid. The research adopts a socio-legal methodology, combining multiple methods as appropriate. The findings hold significant implications for both policymakers and practitioners, offering insights into the enhancement of legal aid services and access to justice for disadvantaged and marginalized populations in Tajikistan. By addressing these pressing questions, this study aims to fill a gap in the legal literature and contribute to the development of a more equitable and efficient legal aid system that better serves the needs of the most vulnerable members of society.

Keywords: access to justice, legal aid, rule of law, right to counsel

Procedia PDF Downloads 50
670 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process

Authors: Qi Wang

Abstract:

With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. 
The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.

Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle

Procedia PDF Downloads 5
669 Temperature Dependence of the Optoelectronic Properties of InAs(Sb)-Based LED Heterostructures

Authors: Antonina Semakova, Karim Mynbaev, Nikolai Bazhenov, Anton Chernyaev, Sergei Kizhaev, Nikolai Stoyanov

Abstract:

At present, heterostructures are used in the fabrication of almost all types of optoelectronic devices. Our research focuses on the optoelectronic properties of InAs(Sb) solid solutions, which are widely used in the fabrication of light-emitting diodes (LEDs) operating in the mid-wavelength infrared range (MWIR). This spectral range (2-6 μm) is relevant for laser diode spectroscopy of gases and molecules, systems for the detection of explosive substances, medical applications, and environmental monitoring. The fabrication of MWIR LEDs that operate efficiently at room temperature is mainly hindered by the predominance of non-radiative Auger recombination of charge carriers over radiative recombination, which complicates the practical application of such LEDs. However, non-radiative recombination can be partly suppressed in quantum-well structures, so studies of such structures are quite topical. In this work, the electroluminescence (EL) of LED heterostructures based on InAs(Sb) epitaxial films, with the molar fraction of InSb ranging from 0 to 0.09, and on multiple-quantum-well (MQW) structures was studied in the temperature range 4.2-300 K. The heterostructures were grown by metal-organic chemical vapour deposition on InAs substrates. On top of the active layer, a wide-bandgap InAsSb(Ga,P) barrier was formed. At low temperatures (4.2-100 K), stimulated emission was observed; as the temperature increased, the emission became spontaneous. The transition from stimulated to spontaneous emission occurred at different temperatures for structures with different InSb contents in the active region. The temperature-dependent carrier lifetimes, limited by radiative recombination and by the most probable Auger processes (for the materials under consideration, CHHS and CHCC), were calculated within the framework of the Kane model.
The effect of the various recombination processes on the carrier lifetime was studied, and the dominant role of the Auger processes was established. For the MQW structures, quantization energies for electrons and for light and heavy holes were calculated. A characteristic feature of the experimental EL spectra of these structures was the presence of peaks with energies different from those of the calculated optical transitions between the first quantization levels for electrons and heavy holes. The obtained results show a strong effect of the specific electronic structure of InAsSb on the energy and intensity of optical transitions in nanostructures based on this material. For the structure with MQWs in the active layer, a very weak temperature dependence of the EL peak was observed at high temperatures (>150 K), which makes it attractive for fabricating temperature-resistant gas sensors operating in the mid-infrared range.
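The recombination channels named above combine as parallel rates, 1/τ_eff = 1/τ_rad + 1/τ_CHCC + 1/τ_CHHS. A minimal sketch of this bookkeeping, with purely illustrative lifetime values that are not figures from this work:

```python
def effective_lifetime(tau_rad, tau_chcc, tau_chhs):
    """Parallel combination of recombination channels:
    the individual rates (inverse lifetimes) add."""
    return 1.0 / (1.0 / tau_rad + 1.0 / tau_chcc + 1.0 / tau_chhs)

# Hypothetical channel lifetimes in nanoseconds (illustration only).
tau = effective_lifetime(100.0, 20.0, 50.0)  # -> 12.5 ns
```

The effective lifetime is always shorter than the fastest single channel, which is why a dominant Auger process caps the radiative efficiency.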

Keywords: electroluminescence, InAsSb, light emitting diode, quantum wells

Procedia PDF Downloads 212
668 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes

Authors: Alicia Ettlin

Abstract:

Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have internalised a neoliberal way of thinking whereby the human body has become an entity that can, and needs to, be precisely managed through free yet rational decision-making. The neoliberal citizen has consequently become an entrepreneur of the self: free, independent, rational, productive and responsible for themselves, their health and wellbeing, and their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, been increasingly criticised for treating the social actor as 'disembodied': a detached subject whose powerful mind governs a passive body. The discourse around embodiment, on the other hand, seeks to connect rational decision-making to the dominant neoliberal discourse, creating an embodied understanding that the body, just like other areas of people's lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body and its connections with the social environment that reaches beyond debates around mind-body binary thinking. Following this argument, body management should not be thought of either as solely guided by embodied discourses or as merely falling into a mind-body dualism, but rather, simultaneously and inseparably, as both at once. A descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the ages of 18 and 24 showed that most participants are interested in measuring and managing their bodies to create self-knowledge and self-improvement. The participants connected self-improvement to weight loss, muscle gain, or simply staying fit and healthy.
Self-knowledge here refers to body measurements, including weight, BMI or body fat percentage. Self-management and self-knowledge, which rely on one another for rational, well-considered decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected this neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work or achievement of specific body management goals (e.g. eating chocolate for reaching the daily step-count goal). A few participants, however, showed resistance against these neoliberal values, and in particular against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their bodies in normative ways. Even those who indicated resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of both health and appearance ideals. The discussion of the collected data thereby shows that embodiment and the mind/body dualism constitute two connected, rather than separate or opposing, concepts.

Keywords: dualism, embodiment, mind, neoliberalism

Procedia PDF Downloads 163
667 An Algebraic Geometric Imaging Approach for Automatic Dairy Cow Body Condition Scoring System

Authors: Thi Thi Zin, Pyke Tin, Ikuo Kobayashi, Yoichiro Horii

Abstract:

Today, dairy farm experts and farmers have well recognized the importance of the dairy cow Body Condition Score (BCS), since these scores can be used to optimize milk production, manage the feeding system, serve as an indicator of health abnormalities, and even help manage healthy calving times and processes. Traditionally, BCS measurements are made by animal experts or trained technicians based on visual observations focusing on the pin bones, thurl and hook area, tail head shape, hook angles, and short and long ribs. Since the traditional technique is manual and subjective, it can yield inconsistent scores and is not cost effective. This paper therefore proposes an algebraic geometric imaging approach for an automatic dairy cow BCS system. The proposed system consists of three functional modules. In the first module, significant landmarks or anatomical points are automatically extracted from the cow image region using image processing techniques. Specifically, there are 23 anatomical points in the regions of the ribs, hook bones, pin bones, thurl, and tail head. These points are extracted using block-region-based vertical and horizontal histogram methods. According to animal experts, body condition scores depend mainly on the shape structure of these regions. The second module therefore investigates algebraic and geometric properties of the extracted anatomical points. Specifically, second-order polynomial regression is applied to a subset of anatomical points to produce regression coefficients, which are used as part of the feature vector in the scoring process. In addition, the angles at the thurl, pin, tail head, and hook bone areas are computed to extend the feature vector. Finally, in the third module, the extracted feature vectors are trained using a Markov classification process to assign a BCS to individual cows.
The assigned BCS values are then revised using a multiple regression method to produce the final score for each dairy cow. To confirm the validity of the proposed method, a monitoring video camera was set up at the rotary milking parlor to take top-view images of the cows. The proposed method extracts the key anatomical points and the corresponding feature vector for each individual cow; the multiple regression calculator and the Markov chain classification process then produce the estimated body condition score for each cow. Experimental results on 100 dairy cows from a self-collected dataset and a public benchmark dataset are very promising, with an accuracy of 98%.
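The second module's feature construction can be sketched as follows. The landmark coordinates and the particular angle below are hypothetical, used only to show how polynomial regression coefficients and landmark angles might enter a feature vector:

```python
import numpy as np

# Hypothetical (x, y) anatomical landmarks along, e.g., a hook-to-pin contour.
pts = np.array([[0, 2.0], [1, 1.2], [2, 0.9], [3, 1.1], [4, 1.8]])

# Second-order polynomial regression over the landmark subset;
# the three coefficients become part of the BCS feature vector.
coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)

def angle_at(p_prev, p_vertex, p_next):
    """Angle (degrees) at a landmark, e.g. at the thurl or hook bone."""
    v1 = p_prev - p_vertex
    v2 = p_next - p_vertex
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Combine regression coefficients and a landmark angle into one feature vector.
feature_vector = np.concatenate([coeffs, [angle_at(pts[1], pts[2], pts[3])]])
```

A sharper (smaller) angle at such a landmark would indicate a bonier contour, which is exactly the visual cue human scorers use.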

Keywords: algebraic geometric imaging approach, body condition score, Markov classification, polynomial regression

Procedia PDF Downloads 157
666 In Silico Modeling of Drugs' Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants safe milk from the beginning of their lives is an important issue. Drugs used by mothers can affect the composition of milk in ways that are not only unsuitable but even toxic for infants, and a mother's consumption of permeable drugs during that sensitive period could lead to serious side effects in the infant. Because of the ethical restrictions on drug testing in humans, especially in women during lactation, computational approaches based on structural parameters can be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value, following Malone's classification: (1) drugs with M/P > 1, considered high risk, and (2) drugs with M/P < 1, considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Four specific models, based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens, were then established for the prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed to obtain a model predicting the penetration class of the compounds. Only structural descriptors with p-value < 0.1 remained in the final model.
Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse along with fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during lactation. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk fat. The pH of milk is acidic; as a result, basic compounds tend to concentrate in milk relative to plasma, while acidic compounds may show lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. Milk/plasma ratio assessment of drugs normally requires multiple steps of animal testing, which raises its own ethical issues; QSAR modeling can help scientists reduce the amount of animal testing, and our models are suited to that purpose.
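A hedged sketch of the binary logistic regression idea: the study used SPSS 21 on measured descriptors, so the synthetic data, the labeling rule echoing the reported thresholds, and the scikit-learn substitution below are all assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
# Hypothetical descriptor values: H-bond acceptors, PSA, TSA.
hba = rng.integers(0, 12, n)
psa = rng.uniform(0, 180, n)
tsa = rng.uniform(50, 400, n)
X = np.column_stack([hba, psa, tsa])
# Illustrative label rule echoing the abstract: low HBA and low PSA
# -> more permeable, i.e. high-risk class (M/P > 1).
y = ((hba < 5) & (psa < 90)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
acc = model.score(X, y)  # training accuracy of the fitted classifier
```

With real M/P data, one would of course evaluate on held-out compounds and inspect the descriptor coefficients and their p-values, as the authors did.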

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 71
665 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic

Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak

Abstract:

Successful achievement of strategic goals requires an effective performance management system, i.e. determining appropriate indicators that measure the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators that contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement; non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies that are not publicly traded. Therefore, financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is that they are calculated from accounting data, while accounting rules may differ considerably across environments. For successful enterprise performance management, it is vital to avoid factors that may reduce (or even destroy) enterprise value. Known factors reducing enterprise value include a lack of capital, the lack of a strategic management system, and poor production quality. To gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic.
The qualitative data were obtained from a questionnaire survey of the enterprises' senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors that negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research: an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and through detailed sub-analyses of additional criteria, including company-specific characteristics such as size and ownership structure. The evaluation also included a comparison of the managers' opinions with the performance of their enterprises, measured by the return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard for customers' requirements.
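The ratio comparison described above can be sketched as follows. The group split and the values are invented for illustration, and the Mann-Whitney U test stands in for the unspecified non-parametric tests:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def roe(net_income, equity):
    """Return on equity."""
    return net_income / equity

def roa(net_income, assets):
    """Return on assets."""
    return net_income / assets

# Hypothetical ROE samples for two groups of enterprises, split by a
# survey answer (e.g. managers who did vs. did not cite employee incompetence).
rng = np.random.default_rng(2)
group_a = rng.normal(0.12, 0.04, 30)
group_b = rng.normal(0.08, 0.04, 30)

# Non-parametric two-sided comparison of the two ROE distributions.
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
```

A non-parametric test is the natural choice here because financial ratios across firms are typically skewed and contain outliers, violating normality assumptions.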

Keywords: business value, financial ratios, performance measurement, value drivers

Procedia PDF Downloads 222
664 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals

Authors: W. Staiano, S. Marcora

Abstract:

Introduction: It has been proposed that the acute negative effect of mental fatigue (MF) could become a training stimulus for the brain (brain endurance training, BET), allowing it to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance, by reducing the rating of perceived exertion (RPE) and increasing resilience to fatigue, more than SPT alone. Methods: In a randomized controlled trial, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% of peak power output (PPO) and, after 90 min of rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using a 30-min Stroop colour task performed before the cycling tests. During the training, the BET group performed a series of cognitive tasks over a total of 30 sessions (5 sessions per week), with duration increasing from 30 to 60 min per session; the placebo group engaged in breathing-relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. The physiological and perceptual parameters heart rate, lactate (LA) and RPE were recorded during the cycling performances, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed significant interactions for performance at 80% PPO (p = .038) and 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests, and LA did not change significantly at rest in either group.
However, at the completion of the 65% TTE, LA was significantly higher (p = 0.043) in the placebo condition than in BET. RPE measured at iso-time was significantly lower in BET (80% PPO, p = 0.041; 65% PPO, p = 0.021) than in placebo. Cognitive results on the Stroop task showed that reaction time decreased at post-test in both groups; however, the decrease was significantly greater (p = 0.01) in BET than in placebo, despite no differences in accuracy. During the training sessions, participants in BET consistently reported, through the NASA TLX questionnaires, significantly higher (p < 0.01) mental demand than placebo; no significant differences were found for physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT is more effective than SPT alone in increasing cognitive and cycling performance in well-trained endurance participants. The cognitive overload produced during the 6 weeks of BET can induce a reduction in the perception of effort at a given power output and thus improve cycling performance. Moreover, the study provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine.

Keywords: cognitive training, perception of effort, endurance performance, neuro-performance

Procedia PDF Downloads 119
663 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory

Authors: Xiaochen Mu

Abstract:

Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, granting data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, it establishes specific three-level data rights. The paper analyzes the cases Google v Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data within the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.

Keywords: data protection, property rights, intellectual property, big data

Procedia PDF Downloads 39
662 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂

Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine

Abstract:

Rubber waste disposal is an environmental problem. In particular, much research has centered on the management of discarded tires. Despite all the different ways of handling used tires, the most common is to deposit them in a landfill, creating stockpiles of tires. These stockpiles can pose a fire hazard and provide habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that breaks down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which poly-, di-, and mono-sulfidic bonds, formed during vulcanization, are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) was proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing the pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin screw extruder under diverse operating conditions. Supercritical CO₂ was added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters.
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases. The values reached were in good correlation (R = 0.96) with the soluble fraction. In order to analyze whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. Results showed that all tests fall on the curve that corresponds to sulfur bond scission, which indicates that devulcanization occurred successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected: due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate the devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to perform further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
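Crosslink density from equilibrium swelling in toluene is commonly evaluated with the Flory-Rehner relation; the abstract does not spell out the calculation, so the following is offered only as a sketch of the standard form (symbol definitions and the toluene molar volume are textbook values, not taken from this work):

```latex
% Flory-Rehner relation for crosslink density from equilibrium swelling
\nu_e = -\,\frac{\ln(1 - V_r) + V_r + \chi V_r^{2}}
              {V_s \left( V_r^{1/3} - \tfrac{V_r}{2} \right)}
% V_r  : volume fraction of rubber in the swollen gel
% V_s  : molar volume of the solvent (toluene, ~106.3 cm^3/mol)
% \chi : polymer-solvent interaction parameter
```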

Keywords: devulcanization, recycling, rubber, waste

Procedia PDF Downloads 385
661 Waste Burial to the Pressure Deficit Areas in the Eastern Siberia

Authors: L. Abukova, O. Abramova, A. Goreva, Y. Yakovlev

Abstract:

Important executive decisions on stimulating oil and gas production in Eastern Siberia have recently been taken. Eastern Siberia holds unique and large fields of oil, gas, and gas condensate. The Talakan, Koyumbinskoye, Yurubcheno-Tahomskoye, Kovykta, and Chayadinskoye fields are to be developed first. This will result in an abrupt increase in the environmental load on the nature of Eastern Siberia. In Eastern Siberia, the introduction of ecological imperatives in hydrocarbon production is still realistic. Underground water movement is one of the most important factors in managing the condition of ecosystems. Oil and gas production is associated with the forced displacement of huge water masses and the mixing of waters of different composition and origin, which determines the extent of the anthropogenic impact on water drive systems and their protective reaction. An extensive hydrogeological system of the depression type is identified in the pre-salt deposits here. The pressure decline here is steady down to the basement. The decrease of the hydrodynamic potential towards the basement with such a gradient resulted in the reformation of the fields during the historical (geological) development of the Nepsko-Botuobinskaya anteclise. The depression hydrodynamic systems are characterized by extremely high isolation and can only exist under such closed conditions. The steady nature of water movement, due to a strictly negative gradient of reservoir pressure, makes it quite possible to inject environmentally harmful liquid substances instead of water. Disposal of the most hazardous wastes is most expedient in the deposits of the crystalline basement, in structures distant from the oil and gas fields. The storage period for environmentally harmful liquid substances may be calculated on geological time scales, ensuring that they are completely prevented from being released into the environment even during strong earthquakes.
Disposal of wastes from the chemical and nuclear industries is a matter of special consideration. The existing methods of waste storage and disposal are very expensive. The methods currently applied for storing nuclear wastes at depths of several meters, even in the most durable containers, constitute a potential danger. The enormous size of the depression system of the Nepsko-Botuobinskaya anteclise makes it possible to easily identify suitable objects at depths below 1500 m, where nuclear wastes could be stored indefinitely without any environmental impact. Thus, the water drive system of the Nepsko-Botuobinskaya anteclise is an ideal object for large-volume injection of environmentally harmful liquid substances, even if there are large oil and gas accumulations in the subsurface. The specific geological and hydrodynamic conditions of the system allow the production of hydrocarbons from the subsurface simultaneously with the disposal of industrial wastes from the oil and gas, mining, chemical, and nuclear industries without any environmental impact.

Keywords: Eastern Siberia, formation pressure, underground water, waste burial

Procedia PDF Downloads 259
660 Evaluating Impact of Teacher Professional Development Program on Students’ Learning

Authors: S. C. Lin, W. W. Cheng, M. S. Wu

Abstract:

This study investigated the connection between a teacher professional development program and students’ learning, taking the Readers’ Theater Teaching Program (RTTP) as an example to inquire how participants applied the new knowledge and skills learned from the RTTP to their teaching practice and how this influenced students’ learning. The goals of the RTTP included: 1) to enhance teachers’ RT content knowledge; 2) to implement RT instruction in teachers’ classrooms in response to their professional development; 3) to improve students’ reading fluency in the professional development teachers’ classrooms. This study was a two-year project. The researchers applied mixed methods, combining qualitative inquiry with a one-group pretest-posttest experimental design. In the first year, the study focused on designing and implementing the RTTP and evaluating participants’ satisfaction with it, what they learned, and how they applied it to design their English reading curriculum. In the second year, the study adopted a quasi-experimental design and evaluated how participants’ RT instruction influenced their students’ learning, including English knowledge, skills, and attitudes. The participants in this study comprised two junior high school English teachers and their students. Data were collected from a number of different sources, including teaching observation, semi-structured interviews, teaching diaries, teachers’ professional development portfolios, pre/post RT content knowledge tests, a teacher survey, and students’ reading fluency tests. Both qualitative and quantitative data analyses were used. Qualitative data analysis included three stages: organizing data, coding data, and analyzing and interpreting data. Quantitative data analysis included descriptive analysis.
The results indicated that the average percentage correct on the pre-test of RT content knowledge was 40.75%, with the two teachers ranging in prior knowledge from 35% to 46% in specific RT content. Post-test RT content scores ranged from 70% to 82% correct, with an average score of 76.50%. That gives the teachers an average gain of 35.75% in overall content knowledge as measured by these pre/post exams. Teachers’ pre-test scores were lowest in script writing and highest in performing. Script writing was also the content area that showed the highest gains in content knowledge. Moreover, participants held a positive attitude toward the RTTP. They reported that the professional learning community approach applied in the RTTP was beneficial to their professional development, and they applied the new skills and knowledge learned from the RTTP in their practice. The evidence from this study indicated that RT English instruction significantly influenced students’ reading fluency and classroom climate: all of the experimental group students made substantial progress in reading fluency after RT instruction. The study also identified several obstacles and made suggestions accordingly.

Keywords: teacher professional development, program evaluation, readers’ theater, English reading instruction, English reading fluency

Procedia PDF Downloads 398
659 Automatic Identification and Classification of Contaminated Biodegradable Plastics using Machine Learning Algorithms and Hyperspectral Imaging Technology

Authors: Nutcha Taneepanichskul, Helen C. Hailes, Mark Miodownik

Abstract:

Plastic waste has emerged as a critical global environmental challenge, primarily driven by the prevalent use of conventional plastics derived from petrochemical refining and manufacturing in modern packaging. While these plastics serve vital functions, their persistence in the environment after disposal poses significant threats to ecosystems. Addressing this issue necessitates several approaches, one of which is the development of biodegradable plastics designed to degrade under controlled conditions, such as industrial composting facilities. It is important to note that compostable plastics are engineered for degradation within specific environments and are not suited to uncontrolled settings, including natural landscapes and aquatic ecosystems. The full benefits of compostable packaging are realized only when it is subjected to industrial composting, preventing environmental contamination and waste stream pollution. Therefore, effective sorting technologies are essential to raise composting rates for these materials and diminish the risk of contaminating recycling streams. In this study, we leverage hyperspectral imaging (HSI) coupled with advanced machine learning algorithms to accurately identify various types of plastics, encompassing conventional variants such as polyethylene terephthalate (PET), polypropylene (PP), low-density polyethylene (LDPE), and high-density polyethylene (HDPE), and biodegradable alternatives such as polybutylene adipate terephthalate (PBAT), polylactic acid (PLA), and polyhydroxyalkanoates (PHA). The dataset is partitioned into three subsets: a training dataset comprising uncontaminated conventional and biodegradable plastics, a validation dataset encompassing contaminated plastics of both types, and a testing dataset featuring real-world packaging items in both pristine and contaminated states.
Five distinct machine learning algorithms, namely partial least squares discriminant analysis (PLS-DA), support vector machine (SVM), convolutional neural network (CNN), logistic regression, and decision tree, were developed and evaluated for their classification performance. Remarkably, the logistic regression and CNN models exhibited the most promising outcomes, achieving a perfect accuracy rate of 100% on the training and validation datasets, while the testing dataset yielded an accuracy exceeding 80%. The successful implementation of this sorting technology within recycling and composting facilities holds the potential to significantly raise recycling and composting rates. As a result, the envisioned circular economy for plastics can be established, offering a viable solution to mitigate plastic pollution.
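As a rough illustration of the classification step, the toy sketch below trains a logistic regression on synthetic four-band "spectra". The band values, class labels, and hyperparameters are invented for illustration only; a real pipeline would train on full hyperspectral cubes, e.g. with scikit-learn.

```python
import math
import random

# Toy "spectra": 4 hypothetical reflectance bands per sample.
# Class 0 ~ a conventional plastic, class 1 ~ a biodegradable one.
# All numbers are invented for illustration.
random.seed(0)

def make_sample(label):
    base = [0.2, 0.6, 0.4, 0.8] if label == 0 else [0.7, 0.3, 0.9, 0.2]
    return [b + random.gauss(0, 0.05) for b in base], label

train = [make_sample(i % 2) for i in range(40)]

# Minimal logistic regression trained by stochastic gradient descent.
w = [0.0] * 4
b = 0.0
step = 0.5
for _ in range(500):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability of class 1
        err = p - y                      # gradient of the log-loss w.r.t. z
        w = [wi - step * err * xi for wi, xi in zip(w, x)]
        b -= step * err

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Classify two unseen noisy samples, one per class.
test0, _ = make_sample(0)
test1, _ = make_sample(1)
print(predict(test0), predict(test1))  # 0 1
```

The same loop structure carries over to PLS-DA or CNN models; only the decision function and training rule change.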

Keywords: biodegradable plastics, sorting technology, hyperspectral imaging technology, machine learning algorithms

Procedia PDF Downloads 79
658 Polymer Dispersed Liquid Crystals Based on Poly Vinyl Alcohol Boric Acid Matrix

Authors: Daniela Ailincai, Bogdan C. Simionescu, Luminita Marin

Abstract:

Polymer dispersed liquid crystals (PDLC) represent an interesting class of materials which combine the film-forming ability and mechanical strength of polymers with the opto-electronic properties of liquid crystals. The proper choice of the two components - the liquid crystal and the polymeric matrix - leads to materials suitable for a large area of applications, from electronics to biomedical devices. The objective of our work was to obtain PDLC films with potential applications in the biomedical field, using poly vinyl alcohol boric acid (PVAB) as the polymeric matrix for the first time. Exhibiting all the valuable properties of poly vinyl alcohol (such as biocompatibility, biodegradability, water solubility, good chemical stability and film-forming ability), PVAB brings the advantage of containing the electron-deficient boron atom, which should promote liquid crystal anchoring and a narrow polydispersity of the liquid crystal droplets. Two different PDLC systems have been obtained using two liquid crystals: a commercial nematic one, 4-cyano-4’-pentylbiphenyl (5CB), and a new smectic liquid crystal synthesized by us, butyl-p-[p’-n-octyloxy benzoyloxy] benzoate (BBO). The PDLC composites have been obtained by the encapsulation method, working with four different ratios between the polymeric matrix and the liquid crystal, from 60:40 to 90:10. In all cases, the composites were able to form free-standing, flexible films. Polarized light microscopy, scanning electron microscopy, differential scanning calorimetry, Raman spectroscopy and contact angle measurements have been performed in order to characterize the new composites. The new smectic liquid crystal has been characterized using 1H-NMR and single crystal X-ray diffraction, and its thermotropic behavior has been established using differential scanning calorimetry and polarized light microscopy.
The polarized light microscopy evidenced the formation of round birefringent droplets, with homeotropic anchoring in the first case and planar anchoring in the second, and a narrow dimensional polydispersity, especially for the PDLC containing the largest amount of liquid crystal, a fact also evidenced by SEM. The values obtained for the water-to-air contact angle showed that the composites have a proper hydrophilic-hydrophobic balance, making them potential candidates for bioapplications. Moreover, our studies demonstrated that the water-to-air contact angle varies as a function of the PVAB matrix crystallinity degree, which can be controlled as a function of time. This allowed us to conclude that the use of PVAB as a matrix for obtaining PDLCs offers the possibility to modulate their properties for specific applications.

Keywords: 4-cyano-4’-pentylbiphenyl, butyl-p-[p’-n-octyloxy benzoyloxy] benzoate, contact angle, polymer dispersed liquid crystals, poly vinyl alcohol boric acid

Procedia PDF Downloads 450
657 Exploring Drivers and Barriers to Environmental Supply Chain Management in the Pharmaceutical Industry of Ghana

Authors: Gifty Kumadey, Albert Tchey Agbenyegah

Abstract:

(i) Overview and research goal(s): This study aims to address research gaps in the Ghanaian pharmaceutical industry by examining the impact of environmental supply chain management (ESCM) practices on environmental and operational performance. Previous studies have provided inconclusive evidence on the relationship between ESCM practices and environmental and operational performance. The research aims to provide a clearer understanding of this relationship in the context of the Ghanaian pharmaceutical industry. Limited research has been conducted on ESCM practices in developing countries, particularly in Africa; the study aims to bridge this gap by examining the drivers and barriers specific to the pharmaceutical industry in Ghana. The research analyzes the impact of ESCM practices on the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian pharmaceutical industry, focusing on SDGs 3, 12, 13, and 17. It also explores the potential for partnerships and collaborations to advance ESCM practices in the pharmaceutical industry. The research hypotheses suggest that pressure from stakeholders positively influences the adoption of ESCM practices in the Ghanaian pharmaceutical industry. By addressing these goals, the study aims to contribute to sustainable development initiatives and offer practical recommendations to enhance ESCM practices in the industry. (ii) Research methods and data: This study uses a quantitative research design to examine the drivers and barriers to environmental supply chain management in the pharmaceutical industry in Accra. The sample size is approximately 150 employees, comprising senior and middle-level managers from the pharmaceutical industry of Ghana. A purposive sampling technique is used to select participants with relevant knowledge and experience in environmental supply chain management. Data will be collected using a structured questionnaire with Likert-scale responses.
Descriptive statistics will be used to analyze the data and provide insights into current practices and their impact on environmental and operational performance. (iii) Preliminary results and conclusions: The main contributions are identifying drivers and barriers to ESCM in Ghana's pharmaceutical industry, evaluating current ESCM practices, examining their impact on performance, providing practical insights, and contributing to knowledge on ESCM in the Ghanaian context. The research contributes to SDGs 3, 9, and 12 by promoting sustainable practices and responsible consumption in the industry. The study found that government rules and regulations are the most critical drivers for ESCM adoption, with senior managers playing a significant role, whereas employee and competitor pressures have a lesser impact. The industry has made progress in implementing certain ESCM practices, but there is room for improvement in areas such as green distribution and reverse logistics. The study emphasizes the importance of government support, management engagement, and comprehensive implementation of ESCM practices in the industry. Future research should focus on overcoming barriers and challenges to effective ESCM implementation.

Keywords: environmental supply chain, sustainable development goals, Ghana pharmaceutical industry, government regulations

Procedia PDF Downloads 92
656 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment

Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov

Abstract:

Filtration with nanoporous ultrafiltration membranes is an attractive option for removing ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are commonly used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymers were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not been used for membrane modification up to now. It consists of spraying small drops of a polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential and contact angle before and after deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating with either PEI or PSS. Moreover, the contact angle decreases monotonically with the amount of sprayed solution, and the hydrophilicity enhancement proved to be greater with PSS (from 62 to 35°) than with PEI (from 62 to 53°).
Values of the zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference across a channel made by clamping two membranes. The ζ-values demonstrate that deposits of PSS (negative at pH 5.5) increase the negative membrane charge, whereas deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). Cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, the rejection of a salt containing a divalent cation can be increased from 1% to 20%, and even to 35%, by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by the PEI deposition. Similarly, the increase in negative charge induced by the PSS deposition raises NaCl rejection from 5% to 45%, due to electrostatic repulsion of the Cl⁻ ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the optimal polymer concentration in the sprayed solution remains to be determined to optimize performance.
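The rejection percentages quoted above follow the standard definition of observed rejection, R = 1 − Cp/Cf, where Cp and Cf are the permeate and feed concentrations. A minimal sketch, with invented concentration values chosen only to reproduce the reported 5% and 45% figures:

```python
def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf, expressed in percent."""
    return (1.0 - c_permeate / c_feed) * 100.0

# Hypothetical NaCl concentrations (mmol/L), invented for illustration:
bare = observed_rejection(9.5, 10.0)    # bare membrane  -> ~5 %
coated = observed_rejection(5.5, 10.0)  # PSS-coated     -> ~45 %
print(round(bare), round(coated))       # 5 45
```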

Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity

Procedia PDF Downloads 187
655 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks of digitally embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research toward more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks related to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions have then been evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures - critical systems relying on high-rate, high-quality data are growing in number; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data - sensors break, and when they do, they don’t always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection - erroneously generated sensor data can be pumped into a system by malicious actors with the intent to create disruptive noise in critical systems. (4) Data gravity - the weight of the data collected will affect data mobility. (5) Cost inhibitors - running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service - one of the simplest attacks, where an attacker just overloads the system with bogus requests so that valid requests disappear in the noise.
Malware - malware can be anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware - a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing - by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most promising answer to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be mitigated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via blockchain keys.
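Passive risk (2) above, hardware that keeps emitting garbage instead of going silent, is typically countered by checking each reading against its recent history before it enters the system. A minimal median-based sketch, with the window size and deviation threshold invented for illustration:

```python
from statistics import median

def filter_reading(history, reading, max_dev=5.0):
    """Accept a reading only if it stays within max_dev of the median
    of recent history; otherwise reject it as sensor noise."""
    if len(history) >= 3 and abs(reading - median(history)) > max_dev:
        return None            # deviant reading: treat as garbage
    history.append(reading)
    if len(history) > 10:      # keep a sliding window of recent values
        history.pop(0)
    return reading

window = []
stream = [20.1, 20.3, 19.9, 20.2, 99.9, 20.0]  # 99.9 is a faulty spike
accepted = [r for r in stream if filter_reading(window, r) is not None]
print(accepted)  # the 99.9 spike is dropped
```

In the blockchain scheme described above, the same kind of plausibility check would run as part of the peer consensus rather than on a single node.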

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 107
654 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars

Authors: Tobias Schramm, Günther Prokop

Abstract:

In Germany, the number of road fatalities has been falling since 2010 at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components. Various studies show that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only a very few countries; in Germany, there is as yet no requirement for such an examination. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars can provide further arguments for introducing mandatory shock absorber testing as part of the periodic technical inspection. The specific effect chains of untripped rollover accidents are also still the subject of research. However, current research results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic-curve-based five-mass full-vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of twin-tube shock absorbers with oil and gas loss for various excitations. The full-vehicle model was validated with steering-wheel-angle sine-sweep driving maneuvers.
The model was then used to simulate steering-wheel-angle sine and fishhook maneuvers, which probe the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out over a realistic parameter space in order to demonstrate the effect of various vehicle characteristics on the impact of degraded shock absorbers. As a result, it was shown that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This amplification has a negative effect on the wheel load distribution during the driving maneuvers investigated. In particular, the height of the vehicle's center of gravity and the stabilizer stiffness have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars.
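The roll-angle amplification near the roll natural frequency can be illustrated with a single-degree-of-freedom roll oscillator, far simpler than the five-mass model used in the study; all parameter values below are invented for illustration:

```python
import math

def peak_roll_angle(damping, steps=20000, dt=0.001):
    """Integrate a single-DOF roll model I*phi'' + c*phi' + k*phi = M(t)
    with semi-implicit Euler, excited sinusoidally at the roll natural
    frequency, and return the peak roll angle in rad."""
    I, k = 600.0, 60000.0           # roll inertia (kg m^2), roll stiffness (N m/rad)
    wn = math.sqrt(k / I)           # roll natural frequency (rad/s)
    phi, dphi, peak = 0.0, 0.0, 0.0
    for n in range(steps):
        torque = 2000.0 * math.sin(wn * n * dt)   # resonant roll excitation
        ddphi = (torque - damping * dphi - k * phi) / I
        dphi += ddphi * dt
        phi += dphi * dt
        peak = max(peak, abs(phi))
    return peak

nominal = peak_roll_angle(damping=4000.0)   # healthy damper (N m s/rad)
degraded = peak_roll_angle(damping=1500.0)  # oil/gas loss -> reduced damping
print(degraded > nominal)  # True: degraded damping amplifies the roll angle
```

At resonance the steady-state roll amplitude scales as 1/(c·ωn), so any loss of damping c directly raises the roll angle, in line with the trend reported above.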

Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber

Procedia PDF Downloads 9
653 Causes and Consequences of Intuitive Animal Communication: A Case Study at Panthera Africa

Authors: Cathrine Scharning Cornwall-Nyquist, David Rafael Vaz Fernandes

Abstract:

Since its origins, mankind has been dreaming of communicating directly with other animals. Past civilizations interacted on different levels with other species and recognized them in their rituals and daily activities. However, recent scientific developments have limited the ability of humans to consider deeper levels of interaction beyond observation and/or physical behavior. In recent years, animal caretakers and facilities such as sanctuaries or rescue centers have been introducing new techniques based on intuition. Most of those initiatives are related to specific cases, such as the incapacity to understand an animal’s behavior. Respected organizations also include intuitive animal communication (IAC) sessions to follow up on past interventions with their animals. Despite the lack of credibility of this discipline, some animal caring structures have opted to integrate IAC into their daily routines and approaches to animal welfare. At this stage, animal communication will be generally defined as the ability of humans to communicate with animals on an intuitive level. The trend in the field remains to be explored. The lack of theory and previous research urges the scientific community to improve the description of the phenomenon and its consequences. Considering the current scenario, qualitative approaches may become a suitable pathway to explore this topic. The purpose of this case study is to explore the beliefs behind and the consequences of an approach based on intuitive animal communication techniques for Panthera Africa (PA), an ethical sanctuary located in South Africa. Due to their personal experience, the Sanctuary’s founders have developed a philosophy based on IAC while respecting the world's highest standards for big cat welfare. Their dual approach is reflected in their rescues, daily activities, and healing animals’ trauma. The case study's main research questions will be: (i) Why do they choose to apply IAC in their work? 
(ii) What consequences does IAC bring to their activities? (iii) What effects do IAC techniques have on their interactions with the outside world? Data will be collected on-site via: (i) complete participation (field notes); (ii) semi-structured interviews (audio transcriptions); (iii) document analysis (internal procedures and policies); and (iv) audio-visual material (communication with third parties). The main researcher shall become an active member of the Sanctuary during a 30-day period and have full access to the site. Access to documents and audio-visual materials will be granted on a request basis. Interviews are expected to be held with PA founders and staff members, and with IAC practitioners related to the facility. The information gathered shall enable the researcher to provide an extended description of the phenomenon and explore its internal and external consequences for Panthera Africa.

Keywords: animal welfare, intuitive animal communication, Panthera Africa, rescue

Procedia PDF Downloads 92
652 Providing Support On-Time: Need to Establish De-Radicalization Hotlines

Authors: Ashir Ahmed

Abstract:

Peacekeeping is a collective responsibility of governments, law enforcement agencies, communities, families, and individuals. Moreover, the complex nature of peacekeeping activities requires a holistic and collaborative approach in which various community sectors work together to form collective strategies that are likely to be more effective than strategies designed and delivered in isolation. Similarly, it is important to learn from past programs in order to evaluate the initiatives that have worked well and the areas that need further improvement. A review of recent peacekeeping initiatives suggests that tremendous efforts and resources have been put in place to deal with the emerging threat of terrorism, radicalization, and violent extremism through a number of de-radicalization programs. Despite various attempts at designing and delivering successful programs for de-radicalization, the threat of people being radicalized is growing more than ever before. This research reviews the prominent de-radicalization programs to draw an understanding of their strengths and weaknesses. The weaknesses identified in the existing programs include the following. Inaccessibility: limited resources, the geographical location of potential participants (for offline programs), and the inaccessibility or inability to use various technologies (for online programs) make it difficult for people to participate in de-radicalization programs. Timeliness: people might need to wait for a program on a set date/time to get the required information and to get their questions answered; this is particularly true for offline programs. Lack of trust: privacy issues and a lack of trust between participants and program organizers are another hurdle to the success of de-radicalization programs. The fear that participants' information might be shared with organizations (such as law enforcement agencies) without their consent leads many not to participate in these programs.
Generalizability: the majority of these programs are very generic in nature and do not cater to the specific needs of individuals. Participants in these programs may feel that the contents are irrelevant to their individual situations and hence feel disconnected from the purpose of the programs. To address the above-mentioned weaknesses, this research developed a framework that recommends some improvements to de-radicalization programs. One of the recommendations is to offer a 24/7, secure, private, online hotline (also referred to as a helpline) for people who have any question, concern, or situation to discuss with someone who is qualified (a counsellor) to deal with people who are vulnerable to radicalization. To make these hotline services viable and sustainable, existing organizations offering support for depression, anxiety, or suicidal ideation could additionally host them. These helplines should be available via phone, the internet, social media, and in person. Since these services would be embedded within existing and well-known services, they would likely get more visibility and promotion. The anonymous and secure conversation between a person and a counsellor would ensure that a person can discuss the issues without fear of information being shared with any third party without his/her consent. The next stage of this project would include the operationalization of the framework by collaborating with other organizations to host de-radicalization hotlines, and would assess the effectiveness of such initiatives.

Keywords: de-radicalization, framework, hotlines, peacekeeping

Procedia PDF Downloads 214
651 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions when identified and managed early. Multiple definitions for stratification of patients' risk for early acute coagulopathy have been proposed, with considerable variations in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition for acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for conventional coagulation assays for the identification of patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected, and the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injuries. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition.
The overall prediction of acute coagulopathy of trauma score was 118.7±58.5 and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than in non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but the differences were not statistically significant. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than in non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality, in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be more suited to predicting mortality than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests will give highly specific results.
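As a side note on method, the cut-off derivation described in the abstract (receiver operating characteristic analysis on conventional coagulation assays) can be sketched as follows. This is an illustrative reconstruction only, not the authors' actual analysis: the synthetic INR values, the labels, and the choice of the Youden index as the cut-off criterion are all assumptions.

```python
import numpy as np

def youden_cutoff(values, outcome):
    """Pick the assay cut-off maximizing Youden's J = sensitivity + specificity - 1.

    values  : 1-D array of assay results (e.g. INR) per patient
    outcome : 1-D array of 0/1 labels (1 = coagulopathic by the reference standard)
    """
    values = np.asarray(values, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values >= cut                      # test positive at or above the cut-off
        tp = np.sum(pred & (outcome == 1))
        fp = np.sum(pred & (outcome == 0))
        fn = np.sum(~pred & (outcome == 1))
        tn = np.sum(~pred & (outcome == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Tiny made-up example: INR values and reference-standard labels
inr = [1.0, 1.05, 1.1, 1.19, 1.3, 1.5, 1.02, 1.25]
label = [0, 0, 0, 1, 1, 1, 0, 1]
cut, j = youden_cutoff(inr, label)
```

On real data one would additionally report the area under the curve and confidence intervals; the sketch only shows how a single threshold such as INR ≥ 1.19 can fall out of the ROC analysis.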

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
650 European Commission Radioactivity Environmental Monitoring Database REMdb: A Law (Art. 36 Euratom Treaty) Transformed into Environmental Science Opportunities

Authors: M. Marín-Ferrer, M. A. Hernández, T. Tollefsen, S. Vanzo, E. Nweke, P. V. Tognoli, M. De Cort

Abstract:

Under the terms of Article 36 of the Euratom Treaty, European Union Member States (MSs) shall periodically communicate to the European Commission (EC) information on environmental radioactivity levels. Compilations of the information received have been published by the EC as a series of reports beginning in the early 1960s. The environmental radioactivity results received from the MSs have been introduced into the Radioactivity Environmental Monitoring database (REMdb) of the Institute for Transuranium Elements of the EC Joint Research Centre (JRC), located in Ispra (Italy), as part of its Directorate General for Energy (DG ENER) support programme. The REMdb offers the scientific community dealing with environmental radioactivity topics countless research opportunities to exploit the nearly 200 million records received from MSs, containing information on radioactivity levels in milk, water, air, and mixed diet. The REM action was created shortly after the Chernobyl crisis to support the EC in its responsibilities of providing qualified information to the European Parliament and the MSs on the levels of radioactive contamination of the various compartments of the environment (air, water, soil). Hence, the main line of REM’s activities concerns the improvement of procedures for the collection of environmental radioactivity concentrations under routine and emergency conditions, as well as making this information available to the general public. In this way, REM ensures the availability of tools for inter-communication and for the access of users from the Member States and the other European countries to this information. Specific attention is given to further integrating the new MSs with the existing information exchange systems and to assisting Candidate Countries in fulfilling these obligations in view of their membership of the EU.
Article 36 of the Euratom Treaty requires the competent authorities of each MS to regularly provide the environmental radioactivity monitoring data resulting from their Article 35 obligations to the EC, in order to keep the EC informed on the levels of radioactivity in the environment (air, water, milk, and mixed diet) which could affect the population. The REMdb has mainly two objectives: to keep a historical record of radiological accidents for further scientific study, and to collect the environmental radioactivity data gathered through the national environmental monitoring programs of the MSs in order to prepare the comprehensive annual monitoring reports (MR). The JRC continues its activities of collecting, assembling, analyzing, and providing this information to the public and the MSs, even during emergency situations. In addition, there is growing concern among the general public about radioactivity levels in the terrestrial and marine environment, as well as about the potential risk of future nuclear accidents. In this context, clear and transparent communication with the public is needed. EURDEP (European Radiological Data Exchange Platform) is both a standard format for radiological data and a network for the exchange of automatic monitoring data. The latest release of the format is version 2.0, which has been in use since the beginning of 2002.

Keywords: environmental radioactivity, Euratom, monitoring report, REMdb

Procedia PDF Downloads 443
649 Internal Concept of Integrated Health by Agrarian Society in Malagasy Highlands for the Last Century

Authors: O. R. Razanakoto, L. Temple

Abstract:

Living in a least developed country, Malagasy society has a weak capacity to internalize progress, including health concerns. From the arrival in the fifteenth century of the Arabic-derived script called Sorabe, which was mainly reserved for the aristocracy, to the colonial era beginning at the end of the nineteenth century, which popularized the current Western script, manuscripts dealing with scientific, or at least academic, issues have been established only slowly. As a result, the Malagasy communities’ way of life is not yet documented well enough to allow a precise understanding of the major concerns, reasons, and purposes of the existence of the farmers that compose them. A question thus arises from the literature: how does a Malagasy community dominated by an agrarian society conceive the conservation of its wellbeing? This study aims to emphasize the scope and the limits of the « One Health » concept, or the Health Integrated Approach (HIA), which is evolving at the global scale, with regard to the specific context of local Malagasy smallholder farms. The study is expected to identify how this society has represented the risks and mechanisms linking human health, animal health, plant health, and ecosystem health within the last 100 years. To do so, a framework for conducting systematic reviews in agricultural research was deployed to access the available literature. This task was coupled with the reading of articles that are not indexed by online scientific search engines but that mention part of the history of agriculture and of farmers in Madagascar. This literature review has informed the interactions between human illnesses and those affecting animals and plants (bred or wild), together with any unexpected event (ecological or economic) that has modified the equilibrium of the ecosystem or disturbed the livelihoods of agrarian communities.
Besides, drivers that may either accentuate or attenuate the devastating effects of these illnesses and changes were revealed. The study established that the reasons for human health worries are not only physiological. Among the factors that regulate global health, the food system and contemporary medicine have helped improve life expectancy from 55 to 63 years in Madagascar during the last 50 years. However, threats to global health still occur. New human or animal illnesses and new livestock or plant pathologies and pests may appear, whereas ancient illnesses that are supposed to have disappeared may return. This study highlighted how important the risks associated with the impact of unmanaged externalities that weaken community life are. Many risks, and also solutions, come from abroad and have long-term effects even though they occur as one-off events. Thus, a constructivist strategy, built on the recording of local facts, is suggested for the global « One Health » concept. This approach should facilitate the exploration of methodological pathways and the identification of relevant indicators for research related to HIA.

Keywords: agrarian system, health integrated approach, history, Madagascar, resilience, risk

Procedia PDF Downloads 109
648 Explaining Motivation in Language Learning: A Framework for Evaluation and Research

Authors: Kim Bower

Abstract:

Evaluating and researching motivation in language learning is a complex and multi-faceted activity. Various models for investigating learner motivation have been proposed in the literature, but no single one supplies a comprehensive and coherent framework for investigating a range of motivational characteristics. Here, such a methodological framework, which includes exemplification of sources of evidence and potential methods of investigation, is proposed. The proposed process model for the investigation of motivation within language learning settings is based on a complex dynamic systems perspective that takes account of cognition and affect. It focuses on three overarching aspects of motivation: the learning environment, learner engagement, and learner identities. Within these categories, subsets are defined: the learning environment incorporates teacher-, course-, and group-specific aspects of motivation; learner engagement addresses the principal characteristics of learners' perceived value of activities, their attitudes towards language learning, their perceptions of their learning, and their engagement in learning tasks; and within learner identities, the principal characteristics of self-concept and mastery of the language are explored. Exemplifications of potential sources of evidence in the model reflect the multiple influences within and between learner and environmental factors and the possible changes in both that may emerge over time. The model was initially developed as a framework for investigating different models of Content and Language Integrated Learning (CLIL) in contrasting contexts in secondary schools in England. The study, from which examples are drawn to exemplify the model, aimed to address the following three research questions: (1) in what ways does CLIL impact learner motivation? (2) what are the main elements of CLIL that enhance motivation? and (3) to what extent might these be transferable to other contexts?
This new model has been tried and tested in three locations in England and reported as case studies. Following an initial visit to each institution to discuss the qualitative research, instruments were developed according to the proposed model. A questionnaire was drawn up and completed by one group prior to a three-day data collection visit to each institution, during which interviews were held with academic leaders, the head of the department, the CLIL teacher(s), and two learner focus groups of six to eight learners. Interviews were recorded and transcribed verbatim. Two to four naturalistic observations of lessons were undertaken in each setting, as appropriate to the context, to provide colour and thereby a richer picture. Findings were subjected to an interpretive analysis according to the themes derived from the process model and are reported elsewhere. The model proved to be an effective and coherent framework for planning the research, instrument design, data collection, and the interpretive analysis of data in these three contrasting settings, in which different models of language learning were in place. It is hoped that the proposed model, reported here together with exemplification and commentary, will enable teachers and researchers in a wide range of language learning contexts to investigate learner motivation in a systematic and in-depth manner.

Keywords: investigate, language-learning, learner motivation model, dynamic systems perspective

Procedia PDF Downloads 268
647 Assessment of Occupational Health and Safety Conditions of Health Care Workers in Barangay Health Centers in a Selected City in Metro Manila

Authors: Deinzel R. Uezono, Vivien Fe F. Fadrilan-Camacho, Bianca Margarita L. Medina, Antonio Domingo R. Reario, Trisha M. Salcedo, Luke Wesley P. Borromeo

Abstract:

The environment of health care workers is considered one of the most hazardous settings due to the nature of their work. In developing countries, especially the Philippines, this continues to be overlooked in terms of programs and services on occupational health and safety (OHS). One possible reason for this is the existing information gap on OHS, which limits data comparability and impairs effective monitoring and assessment of interventions. To address this gap, there is a need to determine the current conditions of Filipino health care workers in their workplace. This descriptive cross-sectional study assessed the occupational health and safety conditions of health care workers in barangay health centers in a selected city in Metro Manila, Philippines by: (1) determining the hazards present in the workplace; (2) determining the most common self-reported medical problems; and (3) describing the elements of an OHS system based on the six building blocks of a health system. Assessment was done through a walkthrough survey, a self-administered questionnaire, and key informant interviews. Data analysis was done using Epi Info 7 and NVivo 11. Results revealed different health hazards present in the workplace, particularly biological hazards (exposure to sick patients and infectious specimens), physical hazards (inadequate space and/or lighting), chemical hazards (toxic reagents and flammable chemicals), and ergonomic hazards (activities requiring repetitive motion and awkward posture). Additionally, safety hazards (improper capping of syringes and lack of fire safety provisions) were also observed. Meanwhile, the most commonly self-reported chronic diseases among health care workers (N=336) were hypertension (20.24%, n=68) and diabetes (12.50%, n=42). The most commonly self-reported symptoms were colds (66.07%, n=222), coughs (63.10%, n=212), headache (55.65%, n=187), and muscle pain (50.60%, n=170), while other diseases were influenza (16.96%, n=57) and UTI (15.48%, n=52).
In terms of the elements of the OHS system, a general policy on occupational health and safety was found to be lacking and, in effect, there was an absence of a health and safety committee overseeing the implementation and monitoring of the policy. The absence of a separate budget specific to OHS programs and services was also found to be a limitation. As a result, no OHS personnel or trainings/seminars were identified, and no established information system for OHS was in place. In conclusion, health and safety hazards were observed to be present across the barangay health centers visited in a selected city in Metro Manila. The medical conditions identified as most commonly self-reported were hypertension and diabetes for chronic diseases; colds, coughs, headache, and muscle pain for medical symptoms; and influenza and UTI for other diseases. As for the elements of the occupational health and safety system, there was a lack of the general components of the six building blocks of the health system.
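The prevalence figures quoted above can be cross-checked against the stated denominator of N = 336 respondents; the short script below is an illustrative consistency check, not part of the original analysis.

```python
# Reported (count, percent) pairs from the abstract, with N = 336 respondents
N = 336
reported = {
    "hypertension": (68, 20.24),
    "diabetes": (42, 12.50),
    "colds": (222, 66.07),
    "coughs": (212, 63.10),
    "headache": (187, 55.65),
    "muscle pain": (170, 50.60),
    "influenza": (57, 16.96),
    "UTI": (52, 15.48),
}
# Each reported percentage should equal 100 * n / N rounded to two decimals
consistent = {k: round(100 * n / N, 2) == pct for k, (n, pct) in reported.items()}
```

All reported percentages are internally consistent with the stated counts and denominator.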

Keywords: health hazards, occupational health and safety, occupational health and safety system, safety hazards

Procedia PDF Downloads 186
646 A Valid Professional Development Framework for Supporting Science Teachers in Relation to Inquiry-Based Curriculum Units

Authors: Fru Vitalis Akuma, Jenna Koenen

Abstract:

The science education community is increasingly calling for learning experiences that mirror the work of scientists. Although inquiry-based science education is aligned with these calls, the implementation of this strategy is a complex and daunting task for many teachers. Thus, policymakers and researchers have noted the need for continued teacher Professional Development (PD) in the enactment of inquiry-based science education, coupled with effective ways of reaching the goals of teacher PD. This is a complex problem for which educational design research is suitable. The purpose at this stage of our design research is to develop a generic PD framework that is valid as the blueprint of a PD program for supporting science teachers in relation to inquiry-based curriculum units. The seven components of the framework are the goal, learning theory, strategy, phases, support, motivation, and an instructional model. Based on a systematic review of the literature on effective (science) teacher PD, coupled with developer screening, we have generated a design principle for each component of the PD framework. For example, as per the associated design principle, the goal of the framework is to provide science teachers with experiences in authentic inquiry, coupled with enhancing their competencies linked to the adoption, customization, and design, and then the classroom implementation and revision, of inquiry-based curriculum units. The seven design principles have allowed us to synthesize the PD framework, which, coupled with the design principles, forms the preliminary outcome of the current research. We are in the process of evaluating the content and construct validity of the framework, based on nine one-on-one interviews with experts in inquiry-based classroom and teacher learning. To this end, we have developed an interview protocol with the input of eight such experts in South Africa and Germany.
Using the protocol, the expert appraisal of the PD framework will involve three experts each from Germany, South Africa, and Cameroon. These countries, where we originate and/or work, provide a variety of inquiry-based science education contexts, making them suitable for the evaluation of the generic PD framework. Based on the evaluation, we will revise the framework and its seven design principles to arrive at the final outcomes of the current research. While the final content- and construct-valid version of the framework will serve as an example of the ways through which effective inquiry-based science teacher PD may be achieved, the final design principles will be useful to researchers when transforming the framework for use in any specific educational context. For example, in our further research, we will transform the framework into one that is practical and effective in supporting inquiry-based practical work in resource-constrained physical sciences classrooms in South Africa. Researchers in other educational contexts may similarly consider the final framework and design principles in their work. Thus, our final outcomes will inform practice and research around supporting teachers to increase the incorporation of learning experiences that mirror the work of scientists worldwide.

Keywords: design principles, educational design research, evaluation, inquiry-based science education, professional development framework

Procedia PDF Downloads 149
645 The Readaptation of Subscale 3 of the NLit-IT (Nutrition Literacy Assessment Instrument for Italian Subjects)

Authors: Virginia Vettori, Chiara Lorini, Vieri Lastrucci, Giulia Di Pisa, Alessia De Blasi, Sara Giuggioli, Guglielmo Bonaccorsi

Abstract:

The design of the Nutrition Literacy Assessment Instrument (NLit) responds to the need for a tool to adequately assess the construct of nutrition literacy (NL), which is closely connected to diet quality and nutritional health status. The NLit was originally developed and validated in the US context, and it was recently validated for Italian people too (NLit-IT), involving a sample of N = 74 adults. The results of the cross-cultural adaptation of the tool confirmed its validity, since it was established that the level of NL contributed to predicting the level of adherence to the Mediterranean Diet (convergent validity). Additionally, the results obtained proved that the internal consistency and reliability of the NLit-IT were good (Cronbach’s alpha (ρT) = 0.78; 95% CI, 0.69–0.84; Intraclass Correlation Coefficient (ICC) = 0.68, 95% CI, 0.46–0.85). However, Subscale 3 of the NLit-IT, “Household Food Measurement”, showed lower values of ρT and ICC (ρT = 0.27; 95% CI, 0.1–0.55; ICC = 0.19, 95% CI, 0.01–0.63) than the entire instrument. Subscale 3 includes nine items, each consisting of a written question and a corresponding picture of a meal. In particular, items 2, 3, and 8 of Subscale 3 had the lowest proportion of correct answers. The purpose of the present study was to identify the factors that influenced the internal consistency and reliability of Subscale 3 of the NLit-IT using a focus group methodology. A panel of seven experts was formed, involving professionals in the fields of public health nutrition, dietetics, and health promotion, all of whom were trained on the concepts of nutrition literacy and food appearance. A member of the group guided the discussion, which was oriented toward identifying the reasons for the low levels of reliability and internal consistency. The members of the group discussed the level of comprehension of the items and how they could be readapted.
From the discussion, it emerged that the written questions were clear and easy to understand, but it was observed that the representations of the meals needed to be improved. Firstly, it was decided to introduce a fork or a spoon as a size reference to better convey the dimensions of the food portion (items 1, 4, and 8). Additionally, the flat plate of items 3 and 5 should be substituted with a soup plate because, in the Italian national context, it is common to eat pasta or rice from this kind of plate. Secondly, specific measures should be considered for some kinds of foods, such as a brick of yogurt instead of a cup of yogurt (items 1 and 4). Lastly, it was decided to retake the photos of the meals using professional photographic techniques. In conclusion, we noted that the graphical representation of the items strongly influenced the level of participants’ comprehension of the questions; moreover, the research group agreed that the level of knowledge about nutrition and food portion sizes is low in the general population.
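For readers unfamiliar with the reliability statistic quoted in the abstract, Cronbach's alpha (ρT) is computed from the item variance structure of a subscale. The sketch below illustrates the standard formula on entirely made-up item scores; it uses none of the NLit-IT data.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                              # number of items
    item_var = x.var(axis=0, ddof=1).sum()      # sum of per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)       # variance of the summed scale score
    return k / (k - 1) * (1 - item_var / total_var)

# Made-up scores of 5 respondents on 4 items (1 = correct, 0 = wrong)
scores = [[1, 1, 1, 1],
          [1, 1, 0, 1],
          [0, 1, 0, 0],
          [1, 0, 1, 1],
          [0, 0, 0, 0]]
alpha = cronbach_alpha(scores)
```

Low alpha, as seen for Subscale 3, indicates that the items do not covary strongly, which is consistent with the focus group's finding that item-specific presentation issues drove the errors.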

Keywords: nutritional literacy, cross-cultural adaptation, misinformation, food design

Procedia PDF Downloads 170
644 The Asymptotic Hole Shape in Long Pulse Laser Drilling: The Influence of Multiple Reflections

Authors: Torsten Hermanns, You Wang, Stefan Janssen, Markus Niessen, Christoph Schoeler, Ulrich Thombansen, Wolfgang Schulz

Abstract:

In long pulse laser drilling of metals, it can be demonstrated that the ablation shape approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from ultra-short pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in long pulse drilling of metals is identified, and a model for the description of the asymptotic hole shape is numerically implemented, tested, and clearly confirmed by comparison with experimental data. The model assumes a robust process, in the sense that the characteristics of the melt flow inside the arising melt film do not change qualitatively when the laser or processing parameters are changed. Only robust processes are technically controllable and thus of industrial interest. The condition for a robust process is identified by a threshold for the mass flow density of the assist gas at the hole entrance, which has to be exceeded. Within a robust process regime, the melt flow characteristics can be captured by only one model parameter, namely the intensity threshold. In analogy to USP ablation (where it has long been known that the resulting hole shape results from a threshold for the absorbed laser fluence), it is demonstrated that in the case of robust long pulse ablation the asymptotic shape forms such that, along the whole contour, the absorbed heat flux density is equal to the intensity threshold. The intensity threshold depends on the specific material and radiation properties and has to be calibrated by one reference experiment. The model is implemented in a numerical simulation called AsymptoticDrill, which requires so few resources that it can run on common desktop PCs, laptops, or even smart devices.
Resulting hole shapes can be calculated within seconds, which represents a clear advantage over other simulations presented in the literature in the context of everyday industrial usage. Against this background, the software is additionally equipped with a user-friendly GUI which allows intuitive usage. Individual parameters can be adjusted using sliders while the simulation result appears immediately in an adjacent window. Platform-independent development allows flexible usage: the operator can use the tool on a tablet to adjust the process in a very convenient manner, while the developer can run the tool in the office in order to design new processes. Furthermore, to the best knowledge of the authors, AsymptoticDrill is the first simulation which allows the import of measured real beam distributions and thus calculates the asymptotic hole shape on the basis of the real state of the specific manufacturing system. In this paper, the emphasis is placed on the investigation of the effect of multiple reflections on the asymptotic hole shape, which gain in importance when drilling holes with large aspect ratios.
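The threshold condition described in the abstract (absorbed heat flux density equal to the intensity threshold along the whole contour) can be illustrated with a minimal single-reflection sketch: for a Gaussian beam propagating along the hole axis, the condition I(r)·sin θ = I_th fixes the local wall inclination θ against the beam axis, and integrating the resulting wall slope inward from the rim yields an asymptotic depth profile. All symbols, numbers, and the single-reflection geometry are illustrative assumptions; this is not the AsymptoticDrill model, which additionally accounts for multiple reflections.

```python
import numpy as np

# Illustrative parameters (assumptions, not calibrated values)
I0 = 5.0        # peak intensity, in units of the intensity threshold
w = 1.0         # Gaussian beam radius (arbitrary length unit)
I_th = 1.0      # intensity threshold, the model's single calibration parameter

def intensity(r):
    """Gaussian beam profile I(r) = I0 * exp(-2 r^2 / w^2)."""
    return I0 * np.exp(-2.0 * r**2 / w**2)

# Hole rim: the radius where the incident intensity drops to the threshold
r_max = w * np.sqrt(0.5 * np.log(I0 / I_th))

# Asymptotic condition I(r) * sin(theta) = I_th gives the local wall slope
# |dz/dr| = cot(theta) = sqrt((I(r)/I_th)^2 - 1); integrate inward from the rim.
r = np.linspace(0.0, r_max, 2001)
slope = np.sqrt(np.maximum((intensity(r) / I_th) ** 2 - 1.0, 0.0))
cumulative = np.concatenate(
    ([0.0], np.cumsum((slope[:-1] + slope[1:]) / 2 * np.diff(r)))  # trapezoidal rule
)
depth = cumulative[-1] - cumulative   # depth measured from the rim, deepest at r = 0
```

The resulting profile is deepest on the axis and reaches zero depth exactly at the rim, matching the qualitative behavior of an asymptotic hole shape under a single-reflection assumption.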

Keywords: asymptotic hole shape, intensity threshold, long pulse laser drilling, robust process

Procedia PDF Downloads 213
643 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma and brain death; locating damaged areas of the brain after head injury, stroke and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or rather a group of high-prevalence diseases, that is still poorly explained by science and whose diagnosis remains predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings at least 24 hours long and acquired by a minimum of 24 electrodes, in which the neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that the EEG screens usually display 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex and exhaustive task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. 
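The screen-count arithmetic above can be checked directly (10 s per screen, 24 h minimum recording):

```python
SCREEN_SECONDS = 10                          # one EEG screen shows 10 s of signal
screens_per_hour = 3600 // SCREEN_SECONDS    # 360 screens per hour of EEG
screens_minimum = 24 * screens_per_hour      # 8,640 screens per 24 h recording
```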
One of the differences among these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced into the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented with each of them. The performance using the raw signal varied between 43 and 84% efficiency. The results of the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of the Wavelet Transform features varied between 57 and 81%, while the morphological descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
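The five input-stimulus types can be illustrated with simple feature extractors. The following is a minimal NumPy-only sketch, not the authors' pipeline: the windowed-FFT stand-in for the STFT and the plain Haar decomposition for the wavelet features are simplifications, and the sampling rate, segment length, window sizes and descriptor choices are all hypothetical.

```python
import numpy as np

def morphological_descriptors(x):
    """Simple shape parameters of the segment (peak-to-peak, mean, std, mean slope)."""
    return np.array([x.max() - x.min(), x.mean(), x.std(),
                     np.abs(np.diff(x)).mean()])

def fft_spectrum(x):
    """Magnitude spectrum of the whole segment."""
    return np.abs(np.fft.rfft(x))

def stft_spectrogram(x, win=64, hop=32):
    """Magnitude spectrogram from overlapping Hann-windowed frames."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def haar_wavelet_features(x, levels=3):
    """Energy of Haar detail coefficients per decomposition level."""
    feats, a = [], x.astype(float)
    for _ in range(levels):
        if len(a) % 2:                           # pad to even length
            a = np.append(a, a[-1])
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation coefficients
        feats.append(np.sum(d**2))
    return np.array(feats)

# synthetic 1 s "EEG" segment at 256 Hz: 10 Hz background plus a brief spike
t = np.arange(256) / 256.0
x = np.sin(2 * np.pi * 10 * t)
x[120:125] += 3.0                                # ~20 ms transient

raw = x                                          # stimulus 1: raw signal
desc = morphological_descriptors(x)              # stimulus 2: morphology
spec = fft_spectrum(x)                           # stimulus 3: FFT spectrum
sgram = stft_spectrogram(x)                      # stimulus 4: STFT spectrogram
wav = haar_wavelet_features(x)                   # stimulus 5: wavelet features
```

Each extractor yields a different feature-vector size and time-frequency trade-off, which is exactly why the choice of input stimulus affects the network's classification efficiency.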

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 528