Search results for: uncertain expected value
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3172


172 A Model for Teaching Arabic Grammar in Light of the Common European Framework of Reference for Languages

Authors: Erfan Abdeldaim Mohamed Ahmed Abdalla

Abstract:

The complexity of Arabic grammar poses challenges for learners, particularly in relation to its arrangement, classification, abundance, and bifurcation. This challenge is a result of the contextual factors that gave rise to the grammatical rules in question, as well as the pedagogical approach employed at the time, which was tailored to the needs of learners during that particular historical period; modern-day students consequently encounter the same obstacle. This requires a thorough examination of the arrangement and categorization of Arabic grammatical rules based on particular criteria, as well as an assessment of their objectives. Additionally, it is necessary to identify the prevalent and renowned grammatical rules, as well as those that are infrequently encountered, obscure, or disregarded. This paper presents a compilation of grammatical rules that require arrangement and categorization in accordance with the standards outlined in the Common European Framework of Reference for Languages (CEFR). In addition to facilitating comprehension of the curriculum, accommodating learners' requirements, and establishing the fundamental competencies for achieving proficiency in Arabic, it is imperative to identify the rules that language learners require, in alignment with explicitly delineated benchmarks such as the CEFR criteria. The aim of this study is to reduce the quantity of grammatical rules that are typically presented to non-native Arabic speakers in Arabic textbooks. This reduction is expected to enhance learners' motivation to continue their Arabic language acquisition and to approach the proficiency of native speakers. The primary obstacle faced by learners is the intricate nature of Arabic grammar. The proliferation and complexity of rules evident in Arabic language textbooks designed for non-native speakers is noteworthy.
The inadequate organisation and delivery of the material create the impression that the grammar is being imparted to a student intent on memorising "Alfiyyat-Ibn-Malik." Consequently, the sequence of grammatical rule instruction has been altered, with rules originally intended for later instruction presented first and those intended for earlier instruction presented subsequently. Students often focus on learning grammatical rules that are not necessarily required while neglecting the rules that are commonly used in everyday speech and writing. Non-Arab students are taught chapters of Arabic grammar that are infrequently encountered in Arabic literature and may be a topic of debate among grammarians. These findings are derived from the statistical analyses and investigations conducted by the researcher, which will be disclosed in the course of the research. To teach grammatical rules to non-Arabic speakers, it is imperative to discern the most prevalent grammatical structures in grammar manuals and linguistic literature (the study sample). The present proposal suggests the allocation of grammatical structures across linguistic levels, taking into account the guidelines of the CEFR, as well as the grammatical structures that non-Arabic-speaking learners need in order to produce modern, cohesive, and comprehensible language.

Keywords: grammar, Arabic, functional, framework, problems, standards, statistical, popularity, analysis

Procedia PDF Downloads 59
171 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators

Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín

Abstract:

Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created naturally within the environment as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing and combustion processes, in an effort to mitigate widespread contamination events. Environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid Mass Spectrometry workflows. Air, soil, and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted by Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the performance, in terms of sensitivity, required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions.
MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple Quadrupole Mass Spectrometry was chosen because of its potential for high sensitivity and selectivity. The mass spectrometer used was a Sciex QTRAP 3200 working in negative multiple reaction monitoring (MRM) mode. Typical mass detection limits were determined to be near the 1-pg level. The APPI-MS2 approach applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved using a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows ionization of polar and non-polar compounds with high efficiency and repeatability.
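The reported [M-Cl+O]- ion assignment can be sanity-checked arithmetically. The sketch below is an illustration using standard monoisotopic masses, not the authors' calibration data; it computes the expected m/z for 2,3,7,8-TCDD (C₁₂H₄Cl₄O₂):

```python
# Monoisotopic masses (u); values from standard references, not the paper.
MASS = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221, "Cl": 34.96885271}
M_ELECTRON = 0.00054857990946

def monoisotopic_mass(formula):
    # formula is a dict of element counts, e.g. TCDD below
    return sum(MASS[el] * n for el, n in formula.items())

tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}   # 2,3,7,8-TCDD
m = monoisotopic_mass(tcdd)
# [M - Cl + O]-: lose one Cl, gain one O, gain one electron (negative ion)
mz = m - MASS["Cl"] + MASS["O"] + M_ELECTRON
print(round(mz, 4))  # ≈ 300.9232
```

Any instrument would of course be calibrated against standards; this only confirms that the [M-Cl+O]- assignment lands in a plausible m/z window.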

Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator

Procedia PDF Downloads 182
170 Transport of Inertial Finite-Size Floating Plastic Pollution by Ocean Surface Waves

Authors: Ross Calvert, Colin Whittaker, Alison Raby, Alistair G. L. Borthwick, Ton S. van den Bremer

Abstract:

Large concentrations of plastic have polluted the seas in the last half century, with harmful effects on marine wildlife and potentially on human health. Plastic pollution will have lasting effects because plastic is expected to take hundreds or thousands of years to decay in the ocean. The question arises how waves transport plastic in the ocean. The predominant wave-induced motion follows elliptical orbits. However, these orbits do not close, resulting in a net drift, defined as Stokes drift. If a particle is infinitesimally small and of the same density as water, it behaves exactly as the water does, i.e., as a purely Lagrangian tracer. However, as the particle grows in size or changes density, it behaves differently: the particle has its own inertia, the fluid exerts drag on the particle because there is relative velocity, and it rises or sinks depending on its density and whether it is on the free surface. Previously, plastic pollution has generally been treated as purely Lagrangian. However, the steepness of ocean waves is small, normally about α = k₀a = 0.1 (where k₀ is the wavenumber and a is the wave amplitude); this means that the mean drift flows are of the order of ten times smaller than the oscillatory velocities (Stokes drift is proportional to steepness squared, whilst the oscillatory velocities are proportional to the steepness). Thus, the particle equation of motion must include the forces of the full motion, oscillatory and mean flow, as well as a dynamic buoyancy term to account for the free surface, in order to determine whether inertia is important. Tracking the motion of a floating inertial particle under wave action requires the fluid velocities, which form the forcing, and the full equations of motion of a particle to be solved. The starting point is the equation of motion of a sphere in unsteady flow with viscous drag.
Terms can then be added to the equation of motion to better model floating plastic: a dynamic buoyancy term to model a particle floating on the free surface, quadratic drag for larger particles, and a slope-sliding term. Using perturbation methods to order the equation of motion into sequentially solvable parts allows a parametric equation for the transport of inertial finite-size floating particles to be derived. This parametric equation can then be validated using numerical simulations of the equation of motion and flume experiments. This paper presents a parametric equation for the transport of inertial finite-size floating particles by ocean waves. The equation shows an increase in Stokes drift for larger, less dense particles. The equation has been validated using numerical solutions of the equation of motion and laboratory flume experiments. The difference between the particle transport equation and a purely Lagrangian tracer is illustrated using world maps of the induced transport. This parametric transport equation would allow ocean-scale numerical models to include the inertial effects of floating plastic when predicting or tracing the transport of pollutants.
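The scaling argument above, that the mean drift is an order of magnitude smaller than the oscillatory velocity at steepness α = 0.1, can be illustrated with deep-water linear wave theory. This is a minimal sketch; the 100 m wavelength is an assumed illustrative value, not from the paper:

```python
import math

def deep_water_wave(amplitude, wavenumber, g=9.81):
    omega = math.sqrt(g * wavenumber)   # deep-water dispersion relation
    u_osc = omega * amplitude           # orbital (oscillatory) velocity scale, c*(ka)
    u_stokes = omega * wavenumber * amplitude**2  # surface Stokes drift, c*(ka)^2
    return u_osc, u_stokes

# steepness alpha = k*a = 0.1, as quoted in the abstract
k = 2 * math.pi / 100.0   # wavenumber for a 100 m wavelength (illustrative)
a = 0.1 / k               # amplitude giving alpha = 0.1
u_osc, u_s = deep_water_wave(a, k)
print(u_s / u_osc)  # ≈ 0.1: drift is ~10x smaller than the orbital velocity
```

The ratio equals the steepness itself, which is exactly the order-of-magnitude separation the perturbation expansion in the paper exploits.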

Keywords: perturbation methods, plastic pollution transport, Stokes drift, wave flume experiments, wave-induced mean flow

Procedia PDF Downloads 95
169 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles

Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo

Abstract:

Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To address this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. The classification is improved by using Singular Value Decomposition, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of the target signal, whereas F1 simply uses the unweighted angle.
In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter, and other unwanted information while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so, to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments with profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance improves when weighting is applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
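The subspace-angle idea can be sketched as follows. This is a minimal illustration of the unweighted (F1-style) metric only; the function names and the rank choice are assumptions of this sketch, since the abstract does not give the exact definitions of F1 and F2:

```python
import numpy as np

def signal_subspace(profiles, rank):
    # profiles: (n_range_bins, n_profiles) matrix of HRRPs for one aircraft.
    # The leading left singular vectors span the signal subspace (most energy).
    U, s, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :rank]

def subspace_angle(U_train, x_test):
    # Angle between a test profile and a target's signal subspace.
    x = x_test / np.linalg.norm(x_test)
    proj = U_train @ (U_train.T @ x)            # projection onto the subspace
    cosang = np.clip(np.linalg.norm(proj), 0.0, 1.0)
    return np.arccos(cosang)

def identify(test_profile, training_sets, rank=5):
    # Return the target whose signal subspace makes the smallest angle
    # with the test profile (unweighted, F1-like criterion).
    angles = {name: subspace_angle(signal_subspace(P, rank), test_profile)
              for name, P in training_sets.items()}
    return min(angles, key=angles.get)
```

In the weighted (F2-style) variant, each basis vector's contribution to the angle would be scaled, e.g. by its singular value, so that the dominant directions carry more weight.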

Keywords: HRRP, NCTI, simulated/synthetic database, SVD

Procedia PDF Downloads 328
168 Drivetrain Comparison and Selection Approach for Armored Wheeled Hybrid Vehicles

Authors: Çağrı Bekir Baysal, Göktuğ Burak Çalık

Abstract:

Armored vehicles may have different traction layouts as a result of terrain capabilities and mobility needs. Layouts fall into two main categories: wheeled and tracked. Tracked vehicles have superior off-road capabilities, but what they gain in terrain performance they lose on the mobility front. Wheeled vehicles, on the other hand, do not have terrain capabilities as good as those of tracked vehicles, but they offer superior mobility in terms of top speed, range, and agility. Conventional armored vehicles employ a diesel internal combustion engine (ICE) as the main power source. In these vehicles, the ICE is mechanically connected to the powertrain, so the engine speed is determined by the speed and torque requested by the driver. ICE efficiency changes drastically with the torque and speed required, and conventional vehicles consequently suffer in terms of fuel consumption. Hybrid electric vehicles employ at least one electric motor in order to improve fuel efficiency. There are different types of hybrid vehicles, but the main types are series hybrid, parallel hybrid, and series-parallel hybrid. These vehicles introduce an electric motor for traction and can also have a second electric machine acting as a generator for range-extending purposes. Having an electric motor as the traction power source brings the flexibility of either using the ICE as an alternative traction source while it is in its efficient range, or completely separating the ICE from traction and operating it solely for efficiency. Hybrid configurations have additional advantages for armored vehicles beyond fuel efficiency: a reduced heat signature, silent operation, and prolonged stationary missions are possible with the help of the high-power battery pack present in the vehicle for the hybrid drivetrain. For these reasons, hybrid armored vehicles are becoming a target area for the military and for vehicle suppliers.
In order to have a better idea and starting point when beginning a hybrid armored vehicle design, the hybrid drivetrain configuration has to be selected after performing a trade-off study. This study has to include vehicle mobility simulations and integration-level, vehicle-level, and performance-level criteria. In this study, the hybrid traction configurations possible for an 8x8 vehicle are compared using the criteria set mentioned above. To compare hybrid traction configurations, the criteria of ease of application, cost, weight advantage, reliability, maintainability, redundancy, and performance have been used. Performance criteria points have been defined with the help of vehicle simulations and tests. The results of these simulations and tests also help determine the required tractive power for an armored vehicle, including conditions such as trench and obstacle crossing and gradient climbing. With the method explained in this study, each configuration is assigned a point for each criterion. This way, the correct configuration can be selected objectively for every application. Key aspects of armored vehicles, mine protection and ballistic protection, are also considered for the hybrid configurations. Results are expected to vary for different types of vehicles, but it is observed that longitudinal differential locking capability improves mobility and that a high motor count increases complexity in general.
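The point-per-criterion scheme described can be sketched as a simple weighted-sum table. The weights and 1-10 scores below are hypothetical placeholders for illustration, not the authors' trade-off data:

```python
# Hypothetical criteria weights (summing to 1.0) and 1-10 scores per
# configuration; illustrative numbers only.
WEIGHTS = {"ease_of_application": 0.10, "cost": 0.20, "weight": 0.15,
           "reliability": 0.15, "maintainability": 0.10,
           "redundancy": 0.10, "performance": 0.20}

CONFIGS = {
    "series":          {"ease_of_application": 8, "cost": 6, "weight": 7,
                        "reliability": 7, "maintainability": 8,
                        "redundancy": 5, "performance": 7},
    "parallel":        {"ease_of_application": 6, "cost": 7, "weight": 6,
                        "reliability": 8, "maintainability": 6,
                        "redundancy": 6, "performance": 8},
    "series_parallel": {"ease_of_application": 5, "cost": 5, "weight": 5,
                        "reliability": 8, "maintainability": 5,
                        "redundancy": 7, "performance": 9},
}

def total_score(scores, weights=WEIGHTS):
    # Weighted sum of criterion points for one configuration.
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(CONFIGS, key=lambda k: total_score(CONFIGS[k]), reverse=True)
print(ranking[0])  # the configuration with the highest weighted score
```

With real data, the weights themselves would come from the mobility simulations and the integration-, vehicle-, and performance-level criteria the study defines.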

Keywords: armored vehicles, electric drivetrain, electric mobility, hybrid vehicles

Procedia PDF Downloads 58
167 Concept of Tourist Village on Kampung Karaton of Karaton Kasunanan Surakarta, Central Java, Indonesia

Authors: Naniek Widayati Priyomarsono

Abstract:

Introduction: In the early period of the Karaton's formation, the era in which the Javanese kingdom town held power over regions outside the castle town (called Mancanegara), the Karaton settlement functioned as a "space between" and a "space of defense"; it was also one of the components of the governmental structure and of Karaton power at that time (the internal servants, abdi dalem and sentana dalem). Upon the independence of Indonesia in 1945, the "kingdom city" converted its political status into part of a democratic town managed by statutes based on its classification. This conversion, together with physical development and events, has altered the local cultural hierarchy. The dynamics of socio-economic activities in Kampung Karaton, which is surrounded by the buildings of the Karaton Kasunanan complex, have disturbed the urban system of the region. The image of the cultural region is also fading, given the weak visual access to the existing cultural artefacts. This development fails to appreciate the established image of the region, which provides the identity of the Karaton Kasunanan in particular and of the city of Surakarta in general. Method: The strategy used is grounded theory research (research providing a strong base for a theory). The research focuses on the actors, active and passive, who are relevantly involved in the change process of the Karaton settlement. The accumulated data form the "investigation focus", oriented on the internal and external actors affecting that change. Investigation results are coupled with field observation data, documentation, and literature study to obtain accurate findings. Findings: The Karaton village has potential products that can serve as attractions, human resource support, strong motivation from the community still living in the settlement, supporting facilities and amenities, tourism event-supporting facilities, cultural art institutions, and available land for development.
Data analysis: To achieve the expected result, restoration is needed in the direction of socio-cultural, economic, and political development. The necessary steps are: socialization of the program of Karaton village as a tourism village, economic development of the local community, a regeneration pattern, filtering and selection of tourism development, development of an integrated planning system, development with a persuasive approach, regulation, market mechanisms, development of the socio-cultural event sector, and political development for the regional activity sector. Summary: If the restoration is carried out with the community involved as the subject of the settlement (active participation in the field), and is managed and packaged attractively and naturally alongside the development of tourism-supporting facilities, the village of Karaton Kasunanan Surakarta will be ready to receive domestic and foreign tourists.

Keywords: karaton village, finding, restoration, economy, Indonesia

Procedia PDF Downloads 411
166 Key Findings on Rapid Syntax Screening Test for Children

Authors: Shyamani Hettiarachchi, Thilini Lokubalasuriya, Shakeela Saleem, Dinusha Nonis, Isuru Dharmaratne, Lakshika Udugama

Abstract:

Introduction: Late identification of language difficulties in children can result in long-term negative consequences for communication, literacy, and self-esteem. This highlights the need for early identification of, and intervention in, speech, language, and communication difficulties. Speech and language therapy is a relatively new profession in Sri Lanka, and at present there are no formal standardized screening tools to assess language skills in Sinhala-speaking children. The development and validation of a short, accurate screening tool to enable the identification of children with syntactic difficulties in Sinhala is a current need. Aims: 1) To develop test items for a Sinhala Syntactic Structures (S3 Short Form) test for children aged between 3;0 and 5;0 years; 2) To validate the test of Sinhala Syntactic Structures (S3 Short Form) on children aged between 3;0 and 5;0 years. Methods: The Sinhala Syntactic Structures (S3 Short Form) was devised based on the Renfrew Action Picture Test. As Sinhala contains post-positions, in contrast to English, the principles of the Renfrew Action Picture Test were followed to gain an information score and a grammar score, but the test devised reflected the linguistic specificity and complexity of Sinhala, and the pictures were in keeping with the culture of the country. This included the dative case marker ‘to give something to her’ (/ejɑ:ʈə/ meaning ‘to her’), the instrumental case marker ‘to get something from’ (/ejɑ:gən/ meaning ‘from him’ or /gɑhən/ meaning ‘from the tree’), the possessive noun (/ɑmmɑge:/ meaning ‘mother’s’ or /gɑhe:/ meaning ‘of the tree’ or /male:/ meaning ‘of the flower’), and plural markers (/bɑllɑ:/ bɑllo:/ meaning ‘dog/dogs’, /mɑlə/mɑl/ meaning ‘flower/flowers’, /gɑsə/gɑs/ meaning ‘tree/trees’ and /wɑlɑ:kulə/wɑlɑ:kulu/ meaning ‘cloud/clouds’). The picture targets included socio-culturally appropriate scenes of the Sri Lankan New Year celebration, an elephant procession, and the Buddhist ‘Wesak’ ceremony.
The test was piloted with a group of 60 participants and the necessary changes were made. In phase 1, the test was administered to 100 Sinhala-speaking children aged between 3;0 and 5;0 years in one district. In phase 2, reported here, the test was administered to another 100 Sinhala-speaking children aged between 3;0 and 5;0 years in three districts. In phase 2, the selection of the test items was assessed via measures of content validity, test-retest reliability, and inter-rater reliability. The age of acquisition of each syntactic structure was determined using content and grammar scores, which were statistically analysed using t-tests and one-way ANOVAs. Results: High percentage agreement was found on content validity, on test-retest reliability (Pearson correlation measures), and on inter-rater reliability. As predicted, there was a statistically significant influence of age on the production of syntactic structures at p<0.05. Conclusions: As the test items generated the information and syntactic structures expected, the test could be used as a quick syntactic screening tool with preschool children.

Keywords: Sinhala, screening, syntax, language

Procedia PDF Downloads 319
165 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization

Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo

Abstract:

Population growth and industrial development imply an increase in energy demand and in the problems caused by emissions of greenhouse gases, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above those that define the critical point of water (a temperature of 374°C and a pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of gas products and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for obtaining streams rich in H₂ from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, different reactors, and energy recovery systems.
For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features, such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve the efficiency of the technology through the study of model calculations and their comparison with experimental data, the establishment of kinetics for the chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of the products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adapted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants has still to be optimized to include energy recovery systems, in order to reduce the equipment and operating costs derived from the high temperature and pressure conditions necessary to bring water to the supercritical state, and to find solutions that mitigate corrosion and clogging of reactor components.
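The defining condition quoted above (temperature above 374°C and pressure above 221 bar) amounts to a simple check; the 600°C/250 bar operating point below is an assumed illustrative value, not from the paper:

```python
# Critical point of water, as quoted in the abstract.
T_CRIT_C = 374.0    # critical temperature, °C
P_CRIT_BAR = 221.0  # critical pressure, bar

def is_supercritical(temp_c, pressure_bar):
    # True only when both temperature and pressure exceed the critical point.
    return temp_c > T_CRIT_C and pressure_bar > P_CRIT_BAR

print(is_supercritical(600.0, 250.0))  # True: an assumed SCWG operating point
print(is_supercritical(350.0, 250.0))  # False: subcritical temperature
```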

Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy

Procedia PDF Downloads 119
164 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. The innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS), and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, a period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values through the impulse response, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations and the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector.
On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis was based on the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response function derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT, and POS transactions was positive and significant in the periods of analysis. The paper also confirmed, with the entropy and range of values methods, that in the long run both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through cheques, National Electronic Fund Transfer, and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage and patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
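The entropy weighting mentioned can be sketched as follows. This is the generic textbook formulation of the entropy method, assuming a strictly positive decision matrix; the example matrix is illustrative, not the authors' banking data:

```python
import numpy as np

def entropy_weights(X):
    # X: (alternatives x criteria) decision matrix with positive entries
    # (an assumption of this sketch; zeros would need special handling).
    P = X / X.sum(axis=0)                 # share of each alternative per criterion
    k = 1.0 / np.log(X.shape[0])
    e = -k * (P * np.log(P)).sum(axis=0)  # entropy of each criterion, in [0, 1]
    d = 1.0 - e                           # degree of diversification
    return d / d.sum()                    # normalized criterion weights

# A criterion on which all alternatives score identically carries no
# information, so it receives (near-)zero weight:
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 4.0]])
print(entropy_weights(X))  # ≈ [0., 1.]: all weight on the varying criterion
```

In the study's setting, the alternatives would be the innovations and the criteria the ROA/ROE performance measures, with the weights indicating which criterion better discriminates between innovations.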

Keywords: bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 90
163 Informalization and Feminization of Labour Force in the Context of Globalization of Production: Case Study of Women Migrant Workers in Kinfra Apparel Park of India

Authors: Manasi Mahanty

Abstract:

In the current phase of globalization, the mobility of capital facilitates the outsourcing and subcontracting of production processes to developing economies in search of a cheap and flexible labour force. In this process, globalized production networks operate at multiple locations within the nation. Under the new quota regime of the globalization period, Indian manufacturing exporters came under the influence of corporate buyers and large retailers from the importing countries. As part of this process, the garment manufacturing sector is expected to create huge employment opportunities and to expand the country's export market. Following these expectations, the apparel and garment industries mostly target female migrant workers for hire, with the purpose of establishing more flexible industrial relations through the casual nature of the employment contract. This leads to increasing women's participation in the labour market as well as a rise in precarious forms of female paid employment. In this context, the main objective of the paper is to understand the wider dynamics of the globalization of production and its link with informalization, the feminization of the labour force, and the country's internal migration process. For this purpose, the study examines the changing labour relations in the KINFRA Apparel Park at Kerala's Special Economic Zone, which operates under the 'Apparel Parks for Export' (APE) scheme of the Government of India. The present study was based on both quantitative and qualitative analysis. First, secondary data were collected from the source location (the SEAM centre) and the destination (KINFRA Park); the official figures and data were discussed and analyzed in order to identify the various dimensions of labour relations under the globalization of production. Second, a primary survey was conducted to make a comparative analysis of local and migrant female workers, covering 100 workers in total.
Local workers comprised 53% of the sample, whereas workers from outside the state made up 47%. Personal interviews with management staff and workers were also conducted to collect information regarding the organisational structure, the nature and mode of recruitment, the work environment, etc. The study shows an enormous presence of rural women migrant workers in KINFRA Apparel Park. A Public Private Partnership (PPP) arranged migration system was found in Skills for Employment in Apparel Manufacturing (SEAM), through which young women and girls are being sent to work in garment factories of Kerala's KINFRA International Apparel Park under the guise of apprenticeship-based recruitment. The study concludes that such arrangements seek to avoid standard employment relationships and strengthen the informalization, casualization and contractualization of work. In this process, the recruitment of women migrant workers is considered the best option for the employers of private industries, since such workers can be hired and fired more easily.

Keywords: female migration, globalization, informalization, KINFRA apparel park

Procedia PDF Downloads 311
162 The Influence of a Radio Intervention on Farmers’ Practices in Climate Change Mitigation and Adaptation in Kilifi, Kenya

Authors: Fiona Mwaniki

Abstract:

Climate change is considered a serious threat to sustainable development globally and one of the greatest ecological, economic and social challenges of our time. The global demand for food is projected to increase by 60% by 2050. Smallholder farmers, who are vulnerable to the adverse effects of climate change, are expected to contribute to this projected demand. Effective climate change education and communication is therefore required for smallholder and subsistence farmers in order to build communities that are more climate change aware, prepared and resilient. In Kenya, radio is the most important and dominant mass communication tool for agricultural extension. This study investigated the potential role of radio in influencing farmers' understanding and use of climate change information. The broad aims of this study were three-fold: firstly, to identify Kenyan farmers' perceptions of and responses to the impacts of climate change; secondly, to develop radio programs that communicate climate change information to Kenyan farmers; and thirdly, to evaluate the impact of information disseminated through radio on farmers' understanding of and responses to climate change mitigation and adaptation. This study was conducted within the farming community of Kilifi County, located along the Kenyan coast. Education and communication about climate change was undertaken using radio to make information understandable to different social and cultural groups. A mixed methods pre- and post-intervention design that provided the opportunity for triangulating results from both quantitative and qualitative data was used. Quantitative and qualitative data were collected simultaneously: quantitative data through semi-structured surveys with 421 farmers, and qualitative data from 11 focus group interviews, six interviews with key informants and nine climate change experts.
The climate change knowledge gaps identified in the initial quantitative and qualitative data were used in developing the radio programs. Final quantitative and qualitative data collection and analysis enabled an assessment of the impact of climate change messages aired through radio on the farming community in Kilifi County. Results of this study indicate that 32% of the farmers listened to the radio programs and 26% implemented technologies aired on the programs that would help them adapt to climate change. The most adopted technologies were planting drought-tolerant crops, including indigenous crop varieties, planting trees, water harvesting and the use of manure. The proportion of farmers who indicated they knew "a fair amount" about climate change increased significantly (Z = -5.1977, p < 0.001) from 33% (at the pre-intervention phase of this study) to 64% (post-intervention). However, 68% of the farmers felt they needed "a lot more" information on agricultural interventions (43%), access to financial resources (21%) and the effects of climate change (15%). The challenges farmers faced when adopting the interventions included lack of access to financial resources (18%), the high cost of adaptation measures (17%), and poor access to water (10%). This study concludes that radio effectively complements other agricultural extension methods and has the potential to engage farmers on climate change issues and motivate them to take action.
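The significance of a pre/post shift in proportions like the one reported above can be illustrated with a standard pooled two-proportion z-test. This is only a sketch: the abstract's reported Z = -5.1977 presumably reflects the authors' actual samples and test, which are not fully specified, so the sample sizes here (421 respondents at each wave) are an assumption and the resulting statistic is illustrative only.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative values from the abstract: 33% pre vs 64% post,
# assuming n = 421 farmers surveyed at each wave
z = two_proportion_z(0.33, 421, 0.64, 421)
print(round(z, 2))
```

A |z| this large corresponds to p < 0.001, consistent with the significance level the abstract reports.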

Keywords: climate change, climate change intervention, farmers, radio

Procedia PDF Downloads 315
161 Moral Decision-Making in the Criminal Justice System: The Influence of Gruesome Descriptions

Authors: Michel Patiño-Sáenz, Martín Haissiner, Jorge Martínez-Cotrina, Daniel Pastor, Hernando Santamaría-García, Maria-Alejandra Tangarife, Agustin Ibáñez, Sandra Baez

Abstract:

It has been shown that gruesome descriptions of harm can increase the punishment given to a transgressor. This biasing effect is mediated by negative emotions, which are elicited upon the presentation of gruesome descriptions. However, there is a lack of studies investigating the influence of such descriptions on moral decision-making in people involved in the criminal justice system. Such populations are of special interest since they have experience dealing with gruesome evidence, but also formal education in how to assess evidence and gauge the appropriate punishment according to the law. Likewise, they are expected to be objective and rational when performing their duty, because their decisions can profoundly impact people's lives. Considering these antecedents, the objective of this study was to explore the influence of gruesome written descriptions on moral decision-making in this group of people. To that end, we recruited attorneys, judges and public prosecutors (criminal justice group, CJ, n=30) whose field of specialty is criminal law. In addition, we included a control group of people who did not have a formal education in law (n=30) but who were matched in age and years of education with the CJ group. All participants completed an online, Spanish-adapted version of a moral decision-making task which was previously reported in the literature and also standardized and validated in the Latin American context. A series of text-based stories describing two characters, one inflicting harm on the other, were presented to participants. The transgressor's intentionality (accidental vs. intentional harm) and the language (gruesome vs. plain) used to describe harm were manipulated employing a within-subjects and a between-subjects design, respectively. After reading each story, participants were asked to rate (a) the harmful action's moral adequacy, (b) the amount of punishment the transgressor deserved and (c) how damaging the behavior was.
Results showed main effects of group, intentionality and type of language on all dependent measures. In both groups, intentional harmful actions were rated as significantly less morally adequate, were punished more severely and were deemed more damaging. Moreover, control subjects deemed every type of action more damaging, and punished it more severely, than the CJ group. In addition, there was an interaction between intentionality and group: people in the control group rated harmful actions as less morally adequate than the CJ group, but only when the action was accidental. There was also an interaction between intentionality and language on punishment ratings: controls punished more when harm was described using gruesome language. However, that was not the case for people in the CJ group, who assigned the same amount of punishment in both conditions. In conclusion, participants with job experience in the criminal justice system or criminal law differ in the way they make moral decisions. In particular, they seem to be less sensitive to the biasing effect of gruesome evidence, which is probably explained by their formal education or their experience in dealing with such evidence. Nonetheless, more studies are needed to determine the impact this phenomenon has on the fulfillment of their duty.

Keywords: criminal justice system, emotions, gruesome descriptions, intentionality, moral decision-making

Procedia PDF Downloads 159
160 Crisis Management and Corporate Political Activism: A Qualitative Analysis of Online Reactions toward Tesla

Authors: Roxana D. Maiorescu-Murphy

Abstract:

In the US, corporations have recently embraced political stances in an attempt to respond to the external pressure exerted by activist groups. To date, research in this area remains in its infancy, and few studies have been conducted on the way stakeholder groups respond to corporate political advocacy in general, and in the immediate aftermath of such a corporate announcement in particular. The current study aims to fill this research void. In addition, the study contributes to an emerging trajectory in the field of crisis management by focusing on the delineation between crises (unexpected events related to products and services) and scandals (crises that spur moral outrage). The present study looked at online reactions in the aftermath of Elon Musk's endorsement of the Republican party on Twitter. Two data sets were collected from Twitter following two political endorsements made by Elon Musk on May 18, 2022, and June 15, 2022, respectively. The total sample of analysis stemming from the two data sets consisted of N=1,374 user comments written in response to Musk's initial tweets. Given the paucity of studies in the preceding research areas, the analysis employed a case study methodology, used in circumstances in which the phenomena to be studied have not been researched before. Following the case study methodology, which answers the questions of how and why a phenomenon occurs, this study addressed the research questions of how online users perceived Tesla and why they did so. The data were analyzed in NVivo using grounded theory methodology, which implied multiple exposures to the text and an inductive-deductive approach. Through multiple exposures to the data, the researcher ascertained the common themes and subthemes in the online discussion. Each theme and subtheme was later defined and labeled. Additional exposures to the text ensured that these were exhaustive.
The results revealed that the CEO's political endorsements triggered moral outrage, leading Tesla to face a scandal as opposed to a crisis. The moral outrage revolved around the stakeholders' predominant rejection of a perceived intrusion by an influential figure into a domain reserved for voters. As expected, Musk's political endorsements led to polarizing opinions, and those who opposed his views engaged in online activism aimed at boycotting the Tesla brand. These findings reveal that the moral outrage that characterizes a scandal requires communication practices that differ from those that practitioners currently borrow from the field of crisis management. Specifically, because scandals flourish in online settings, practitioners should regularly monitor stakeholder perceptions and address them in real time. While promptness is essential when managing crises, it becomes crucial to respond immediately while a scandal is unfolding online. Finally, attempts should be made to distance a brand, its products, and its CEO from the latter's political views.

Keywords: crisis management, communication management, Tesla, corporate political activism, Elon Musk

Procedia PDF Downloads 62
159 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people's quality of life. To realize these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses. As a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the central objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but there has been a lack of studies considering the endogeneity issues that might arise in a subway ridership prediction model. This study focused on discovering both the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to more accurate subway ridership estimation by accounting for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with the temporal scope set over twenty-four hours in one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to those for the subway-only network topology. A non-recursive approach, Three-Stage Least Squares (3SLS), was then applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. The network topology measures were found to have significant effects. In particular, for the centrality measures, the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner, with predicted bus ridership and predicted subway ridership both statistically significant in OLS regression models. The three-stage least squares model therefore appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
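The closeness and betweenness centrality measures used above can be illustrated on a toy integrated graph with `networkx`. The station and stop labels below are invented for illustration only; the actual study graph (243 stations and 10,120 stops) would be built from the network and Smart Card data.

```python
import networkx as nx

# Hypothetical miniature integrated network: subway stations S1-S3,
# bus stops B1-B2, with bus-to-subway transfer links
G = nx.Graph()
G.add_edges_from([
    ("S1", "S2"), ("S2", "S3"),   # subway line
    ("B1", "S2"), ("B2", "S3"),   # transfer links
    ("B1", "B2"),                 # bus route
])

closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)

# S2 lies on the most shortest paths in this toy graph,
# so it has the highest betweenness
print(max(betweenness, key=betweenness.get))  # prints "S2"
```

In the study's framework, station-level measures like these would then enter the 3SLS ridership model as explanatory variables.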

Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership

Procedia PDF Downloads 152
158 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer

Authors: Ruairi J. McMillan

Abstract:

Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white matter tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate aphasiological knowledge from the past 20 years with aphasiological diagnostics, and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US and UK case law, and the neuroimaging/neurological diagnostics relative to the functional capacity of aphasic patients, are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of different aphasias. Basic Methodology: Literature searches of relevant scientific databases (e.g., Ovid MEDLINE) were carried out using search terms such as "aphasia case study (year)" and "stroke-induced aphasia case study". A series of 7 diagnostic reporting criteria were formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. In order to focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criterion scores were associated with an unclear/adjusted diagnosis was also tested, as was the probability of a given criterion deviating from an expected estimate.
Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores in the connection of neuroanatomical functions to aphasiological assessment (10%), and in the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies which included clinical mention of white matter tracts within the report itself were distributed among higher scoring cases, as were case studies which (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated less often within the patient report than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Therefore, ultimately the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of medico-ethical investigation.

Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics

Procedia PDF Downloads 35
157 Systematic Review of Dietary Fiber Characteristics Relevant to Appetite and Energy Intake Outcomes in Clinical Intervention Trials of Healthy Humans

Authors: K. S. Poutanen, P. Dussort, A. Erkner, S. Fiszman, K. Karnik, M. Kristensen, C. F. M. Marsaux, S. Miquel-Kergoat, S. Pentikäinen, P. Putz, R. E. Steinert, J. Slavin, D. J. Mela

Abstract:

Dietary fiber (DF) intake has been associated with lower body weight or less weight gain. These effects are generally attributed to putative effects of DF on appetite. Many intervention studies have tested the effect of DFs on appetite-related measures, with inconsistent results. However, DF encompasses a wide category of compounds with diverse chemical and physical characteristics, and correspondingly diverse effects in human digestion. Thus, inconsistent results between DF consumption and appetite are not surprising. The specific contributions of different compounds with varying physico-chemical properties to appetite control, and the mediating mechanisms, are not well characterized. This systematic review aimed to assess the influence of specific DF characteristics, including viscosity, gel-forming capacity, fermentability, and molecular weight, on appetite-related outcomes in healthy humans. The Medline and FSTA databases were searched for controlled human intervention trials testing the effects of well-characterized DFs on subjective satiety/appetite or energy intake outcomes. Studies were included only if they reported: 1) fiber name and origin, and 2) data on the viscosity, gelling properties, fermentability, or molecular weight of the DF materials tested. The search generated 3001 unique records, 322 of which were selected for further consideration after title and abstract screening. Of these, 149 were excluded due to insufficient fiber characterization and 124 for other reasons (not an original article, not a randomized controlled trial, or no appetite-related outcome), leaving 49 papers meeting all the inclusion criteria, most of which reported results from acute testing (<1 day). The eligible 49 papers described 90 comparisons of DFs in foods, beverages or supplements. The DF-containing material of interest was efficacious for at least one appetite-related outcome in 51/90 comparisons.
Gel-forming DF sources were the most consistently efficacious, but there were no clear associations between viscosity, MW or fermentability and appetite-related outcomes. A considerable number of papers had to be excluded from the review due to shortcomings in fiber characterization. To build understanding about the impact of DF on satiety/appetite specifically, there should be clear hypotheses about the mechanisms behind the proposed beneficial effect of a DF material on appetite, and sufficient data about the DF properties relevant to the hypothesized mechanisms to justify clinical testing. The hypothesized mechanisms should also guide the decision about the relevant duration of exposure in studies, i.e., whether the effects are expected to occur during an acute time frame (related to stomach emptying, digestion rate, etc.) or to develop from sustained exposure (gut-fermentation-mediated mechanisms). More consistent measurement methods and reporting of fiber specifications and characterization are needed to establish reliable structure-function relationships for DF and health outcomes.

Keywords: appetite, dietary fiber, physico-chemical properties, satiety

Procedia PDF Downloads 202
156 Developing Telehealth-Focused Advanced Practice Nurse Educational Partnerships

Authors: Shelley Y. Hawkins

Abstract:

Introduction/Background: As technology has grown exponentially in healthcare, nurse educators must prepare Advanced Practice Registered Nurse (APRN) graduates with the knowledge and skills in information systems/technology to support and improve patient care and health care systems. APRNs are expected to lead in caring for populations who lack accessibility and availability of care through the use of technology, specifically telehealth. The capacity to use technology effectively and efficiently in patient care delivery is clearly delineated in the American Association of Colleges of Nursing (AACN) Doctor of Nursing Practice (DNP) and Master of Science in Nursing (MSN) Essentials. However, APRNs have minimal or no exposure to formalized telehealth education and lack the technical skills needed to incorporate telehealth into their patient care. APRNs must successfully master technologies including telehealth/telemedicine, electronic health records, health information technology, and clinical decision support systems to advance health. Furthermore, APRNs must be prepared to lead the coordination and collaboration with other healthcare providers in their use and application. Aim/Goal/Purpose: The purpose of this presentation is to establish and operationalize telehealth-focused educational partnerships between one university school of nursing and two health care systems in order to enhance the preparation of APRN NP students for practice, teaching, and/or scholarly endeavors. Methods: The proposed project was initially presented by the project director to selected multidisciplinary stakeholders, including leadership, home telehealth personnel, primary care providers, and decision support systems within two major health care systems, to garner their support for acceptance and implementation.
Concurrently, backing was obtained from key university-affiliated colleagues, including the Director of the Simulation and Innovative Learning Lab and the Coordinator of the Health Care Informatics Program. Technology experts skilled in the design and production of web applications and electronic modules were secured from two locally based technology companies. Results: Two telehealth-focused APRN program academic/practice partnerships have been established. Students have opportunities to engage in clinically based telehealth experiences focused on: (1) providing patient care while incorporating various technologies, with a specific emphasis on telehealth; (2) conducting research and/or evidence-based practice projects in order to further develop the scientific foundation for incorporating telehealth into patient care; and (3) participating in the production of patient-level educational materials related to specific topical areas. Conclusions: Evidence-based APRN student telehealth clinical experiences will assist in preparing graduates who can effectively incorporate telehealth into their clinical practice. Greater access for diverse populations will be available as a result of the telehealth service model, as well as better care and better outcomes at lower costs. Furthermore, APRNs will provide the necessary leadership and coordination through interprofessional practice by transforming health care with new innovative care models using information systems and technology.

Keywords: academic/practice partnerships, advanced practice nursing, nursing education, telehealth

Procedia PDF Downloads 216
155 South-Mediterranean Oaks Forests Management in Changing Climate Case of the National Park of Tlemcen-Algeria

Authors: K. Bencherif, M. Bellifa

Abstract:

The expected climatic changes in North Africa are an increase in both the intensity and frequency of summer droughts and a reduction in water availability during the growing season. The existing coppices and forest formations in the national park of Tlemcen are dominated by holm oak, zen oak and cork oak. These open, fragmented structures do not seem strong enough to promise durable protection against climate change. Given the observed climatic tendency, the objective is to analyze the climatic context and its evolution, taking into account the probable behaviour of the oak species over the next 20-30 years on the one hand, and the landscape context, in relation to the most adequate silvicultural models to choose and especially in relation to human activities, on the other. The study methodology is based on climatic synthesis and on floristic and spatial analysis. Meteorological data from the period 1989-2009 are used to characterize the current climate. Another approach, based on dendrochronological analysis of a 120-year-old Aleppo pine stem sampled in the park, is used to analyze the climate's evolution over one century. Results on climate evolution over the next 50 years, obtained through predictive climatic models, are exploited to predict the climate tendency in the park. Spatially, in each forest unit of the park, stratified sampling was carried out to reduce the degree of heterogeneity and to easily delineate the different stands using GPS. Results from a previous study are used to analyze the anthropogenic factor. According to the forecasts for the period 2025-2100, the number of warm days with a temperature over 25°C would increase from 30 to 70. The monthly mean temperatures of the maxima (M) and the minima (m) would rise from 30.5°C to 33°C and from 2.3°C to 4.8°C, respectively. With an average drop of 25%, precipitation would be reduced to 411.37 mm.
These new data highlight the importance of fire risk and of the water stress which would affect the vegetation and the regeneration process. Spatial analysis highlights the forest and agricultural dimensions of the park compared to urban habitat and bare soils. Maps show both the state of fragmentation and the regression of the forest surface (50% of the total surface). At the level of the park, fires have already affected all types of cover, creating low structures of various densities. On the silvicultural level, zen oak forms pure stands in some places, and this invasion must be considered a natural tendency in which zen oak becomes the structuring species. Climate-related changes alone cannot account for the real impact that South-Mediterranean forests are undergoing because of the human pressures they endure. Nevertheless, the hardwood oak stands in the national park of Tlemcen will have to face unexpected climate changes such as a changing rainfall regime associated with a lengthening of the period of water stress, heavy rainfall and/or sudden cold snaps. Faced with these new conditions, management based on a mixed uneven-aged high-forest method promoting the more dynamic species could be an appropriate measure.

Keywords: global warming, mediterranean forest, oak shrub-lands, Tlemcen

Procedia PDF Downloads 367
154 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations

Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra

Abstract:

The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak) or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they can misrepresent the absolute magnitude of force generated by the muscle and thereby affect the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS); the bioelectric methods used were normalization to the mean and to the peak of the signal during the walking task (EMGMean and EMGPeak). The effect of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability was compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson's correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect of age in MG for EMGPeak, but no other main effects were observed. Age-by-phase interactions in EMG amplitude between YOUNG and OLD led to different statistical interpretations across methods: EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, a representation of inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability, which may be desirable; however, it also suggests that, even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
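The three normalizations compared above, and the 16-bin phase averaging, can be sketched in a few lines of NumPy. The signal, the number of samples per cycle, and the reference-torque EMG value below are hypothetical stand-ins for the study's recorded data, not its actual processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical rectified EMG envelope for one gait cycle (1600 samples)
emg = np.abs(rng.normal(size=1600))

# Bioelectric normalizations: divide by the task's own mean or peak
emg_mean_norm = emg / emg.mean()   # EMGMean
emg_peak_norm = emg / emg.max()    # EMGPeak

# Biomechanical-style normalization to EMG recorded at a reference
# target torque (hypothetical reference value)
target_torque_emg = 1.5
emg_ts_norm = emg / target_torque_emg  # EMGTS

# Phase-average into 16 bins across the gait cycle
# (bins 1-10 ~ stance, bins 11-16 ~ swing)
bins = emg_mean_norm.reshape(16, -1).mean(axis=1)
print(bins.shape)  # prints "(16,)"
```

Note that the two bioelectric methods rescale each subject by their own signal, while the EMGTS-style division by an external reference preserves between-subject amplitude differences, which is the variability contrast the study examines.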

Keywords: electromyography, EMG normalization, functional EMG, older adults

Procedia PDF Downloads 67
153 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

Authors: Pedro Llanos, Diego García

Abstract:

This paper aims to expand our understanding of the effects of hypoxia training on the body in order to better model its dynamics and leverage some of its implications for human health. Hypoxia training is a recommended practice for military and civilian pilots, allowing them to recognize their early hypoxia signs and symptoms; here it was applied to Scientist Astronaut Candidates (SACs), who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during, and after HH exposure, and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering and Mathematics (STEM) backgrounds underwent a hypobaric training session at an altitude of up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest-strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data were used to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases.
All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180, and all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group suggests a rapid and fully reversible physiologic response to HH exposure, as expected and as obtained in previous studies. Our data showed consistent agreement between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, with SP and task focus emerging as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
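The polynomial desaturation model and the clinically significant desaturation criterion (SpO2 < 85% for more than 5 seconds) can be sketched as follows; this is a hedged reconstruction, not the authors' code, and the sampling rate and variable names are assumptions:

```python
import numpy as np

def fit_spo2_curves(t_exposure, spo2_exposure, t_recovery, spo2_recovery):
    """Fit the SpO2 time course with a 6th-order polynomial during HH
    exposure and a 5th-order polynomial during recovery, as in the abstract."""
    p_exposure = np.polynomial.Polynomial.fit(t_exposure, spo2_exposure, 6)
    p_recovery = np.polynomial.Polynomial.fit(t_recovery, spo2_recovery, 5)
    return p_exposure, p_recovery

def significant_desaturations(spo2, fs, threshold=85.0, min_duration=5.0):
    """Return durations (s) of episodes where SpO2 stays below `threshold`
    for longer than `min_duration` seconds; `fs` is samples per second."""
    episodes, run = [], 0
    for below in spo2 < threshold:
        if below:
            run += 1
        else:
            if run / fs > min_duration:
                episodes.append(run / fs)
            run = 0
    if run / fs > min_duration:  # episode still open at end of recording
        episodes.append(run / fs)
    return episodes
```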

Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin

Procedia PDF Downloads 96
152 The Stable Isotopic Composition of Pedogenic Carbonate in the Minusinsk Basin, South Siberia

Authors: Jessica Vasil'chuk, Elena Ivanova, Pavel Krechetov, Vladimir Litvinsky, Nadine Budantseva, Julia Chizhova, Yurij Vasil'chuk

Abstract:

Carbonate minerals’ isotopic composition is widely used as a proxy for environmental parameters of the past. Pedogenic carbonate coatings on the lower surfaces of coarse rock fragments are studied in order to infer the climatic conditions and predominant vegetation under which they were formed. The purpose of the research is to characterize the isotopic composition of carbonate pedofeatures in soils of the Minusinsk Hollow and estimate its correlation with the isotopic composition of soil pore water, precipitation, vegetation, and parent material. The samples of pedogenic carbonates, vegetation, carbonate parent material, soil water, and precipitation water were analyzed using a Delta-V mass spectrometer with gas-bench and elemental-analyser options. The soils we studied are mainly Kastanozems that are poorly moisturized; soil pore water was therefore extracted with ethanol. The oxygen and carbon isotopic composition of pedogenic carbonates was analyzed at three key sites: the Kazanovka Khakass state national reserve, Hankul salt lake, and the region of the Sayanogorsk aluminum smelter. The photosynthetic pathway of vegetation in the region is mainly C3. δ18O values of carbonate coatings in soils of Kazanovka vary in a range from −7.49 to −10.5‰ (vs V-PDB), and the lowest value, −13.9‰, corresponds to coatings found between two buried soil horizons whose 14C dates are 4.6 and 5.2 kyr BP, which may indicate cooler conditions in the late Holocene than at present. In Sayanogorsk the carbonates’ δ18O range is from −8.3 to −11.1‰, and near Hankul Lake from −9.0 to −10.2‰; all ranges are quite similar and may indicate uniform formation conditions for the coatings. δ13C values of carbonate coatings in Kazanovka vary from −2.5 to −6.7‰; the highest values correspond to the soils of the former floodplains of the Askiz and Syglygkug rivers. For Sayanogorsk the range is from −4.9 to −6.8‰, and for Hankul from −2.3 to −5.7‰, where the highest value is for the modern salt crust.
δ13C values of coatings decrease strongly from the inner (older) to the outer (younger) layers, which may reflect differences connected with the diffusion of organic material. Carbonate parent material in the region has δ18O values from −11.1 to −12.0‰ and δ13C values from −4.9 to −5.7‰. Soil pore water δ18O values, which determine the oxygen isotope composition of the carbonates, vary over a wide range of −2.0 to −13.5‰ (vs V-SMOW) at the studied sites due to transpiration and mixing. Precipitation waters show δ18O values from −19.0‰ in January (snow) to −6.6‰ in May, reflecting the temperature difference. The main conclusions are as follows: pedogenic carbonate δ13C values (−7…−2.5‰) show no correlation with modern C3 vegetation δ13C values (−30…−26‰); the expected values under such vegetation are −19…−15‰, so the measured values are closer to C4 vegetation. According to the obtained data on the isotopic composition of carbonates and the chemical composition of soil pore water, the late Holocene climate of the Minusinsk Hollow was drier and cooler than at present, which is consistent with palaeocarpological data obtained for the region. The research was supported by the Russian Science Foundation (grant №14-27-00083).
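For readers unfamiliar with the δ notation used throughout, each value reports an isotope ratio relative to a standard (V-PDB for δ13C, V-SMOW for δ18O) in per mil; a minimal helper, with an illustrative (not measured) sample ratio:

```python
def delta_permil(r_sample, r_standard):
    """Stable-isotope delta value in per mil:
    delta = (R_sample / R_standard - 1) * 1000,
    where R is the heavy/light isotope ratio (e.g. 13C/12C vs V-PDB)."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample whose 13C/12C ratio is 1% below the V-PDB standard therefore reports δ13C = −10‰.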

Keywords: carbon, oxygen, pedogenic carbonates, South Siberia, stable isotopes

Procedia PDF Downloads 260
151 Applying an Automatic Speech Intelligent System to the Health Care of Patients Undergoing Long-Term Hemodialysis

Authors: Kuo-Kai Lin, Po-Lun Chang

Abstract:

Research Background and Purpose: With the development of the Internet and multimedia, information technology has become a crucial avenue of modern communication and knowledge acquisition. The advantages of using mobile devices for learning include making learning borderless and accessible, and mobile learning has become a trend in disease management and health promotion in recent years. End-stage renal disease (ESRD) is an irreversible chronic disease, and patients who do not receive kidney transplants can only rely on hemodialysis or peritoneal dialysis to survive. Because caregiving for patients with ESRD is complicated by their advanced age and other comorbidities, patients' limited capacity for self-care increases their reliance on families or primary caregivers, and whether those caregivers adequately understand and implement patient care is a topic of concern. Therefore, this study explored whether primary caregivers’ health care provision can be improved through the intervention of an automatic speech intelligent system, thereby improving the objective health outcomes of patients undergoing long-term dialysis. Method: This study developed an automatic speech intelligent system with healthcare functions such as health information voice prompts, two-way feedback, real-time push notifications, and health information delivery. Convenience sampling was adopted to recruit eligible patients from a hemodialysis center at a regional teaching hospital, and a one-group pretest-posttest design was adopted. Descriptive and inferential statistics were calculated from the demographic information collected from questionnaires answered by patients and primary caregivers, and from a medical record review, a health care scale (recorded six months before and after the implementation of intervention measures), a subjective health assessment, and a report of objective physiological indicators.
The changes in health care behaviors, subjective health status, and physiological indicators before and after the intervention of the proposed automatic speech intelligent system were then compared. Conclusion and Discussion: The preliminary automatic speech intelligent system developed in this study was tested with 20 pretest patients at the recruitment site, whose health care capacity scores improved from 59.1 to 72.8; comparison through a nonparametric test indicated a significant difference (p < .01). The average score for their subjective health assessment rose from 2.8 to 3.3. Among the objective physiological indicators, the compliance rate for blood potassium level was the most significant: its average compliance rate increased from 81% to 94%. The results demonstrated that this automatic speech intelligent system yielded higher efficacy in chronic disease care than conventional health education delivered by nurses. Future efforts will therefore continue to increase the number of recruited patients and to refine the intelligent system, which can be expected to enhance its effectiveness even further.
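The abstract does not name the nonparametric test used for the pre/post comparison; as one hedged illustration (not necessarily the authors' choice), an exact sign test for paired pretest-posttest scores needs only the standard library:

```python
import math

def sign_test(pre, post):
    """Two-sided exact sign test for paired scores; tied pairs are dropped.
    Returns the p-value for the null hypothesis that improvements and
    declines are equally likely."""
    pos = sum(1 for a, b in zip(pre, post) if b > a)
    neg = sum(1 for a, b in zip(pre, post) if b < a)
    n = pos + neg
    if n == 0:
        return 1.0
    k = min(pos, neg)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2.0**n
    return min(1.0, 2.0 * tail)
```

If all 20 patients improved, the sign test alone gives p = 2/2^20, comfortably below the reported p < .01 threshold.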

Keywords: automatic speech intelligent system for health care, primary caregiver, long-term hemodialysis, health care capabilities, health outcomes

Procedia PDF Downloads 88
150 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood; this includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverage close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays, and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance holds as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI) problem. We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
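The core computational object, an upper quantile of the minimum of a jointly Gaussian GIC vector, can be approximated by Monte Carlo in a few lines. The paper uses exact multivariate Gaussian integration (R's mvtnorm), so this Python sketch with an assumed mean vector and covariance matrix is only a stand-in:

```python
import numpy as np

def min_gic_upper_quantile(mean, cov, alpha=0.95, n_draws=200_000, seed=0):
    """Upper alpha-quantile of min_j GIC_j when the candidate models' GIC
    vector is (asymptotically) multivariate normal. Candidate models whose
    observed GIC lies below this quantile remain in the confidence set."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mean, cov, size=n_draws)
    return float(np.quantile(draws.min(axis=1), alpha))
```

The quantile defines the uncertainty band: rather than keeping only the top-ranked model, one keeps every model whose criterion value is statistically indistinguishable from the minimum.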

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 63
149 Analysis of Fish Preservation Methods for Traditional Fishermen Boat

Authors: Kusno Kamil, Andi Asni, Sungkono

Abstract:

According to a report by the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent; against marine fisheries reserves of 170 trillion rupiahs, the potential loss is about 51 trillion rupiahs (end-of-2016 data). This loss occurs because traditionally handled catches are easily damaged when the cold chain of preservation is disrupted. The physical and chemical changes in fish flesh accelerate rapidly, especially on exposure to scorching heat in the middle of the sea, and are exacerbated by low awareness of catch hygiene: unclean catches containing blood are often handled without special attention and mixed with freshly caught fish, increasing the potential for faster spoilage. This background motivates research on preservation methods for traditional fishermen's catches, aiming to find the best and most affordable method and/or combination of methods, so that fishermen can extend their fishing duration without worrying that the catch will be damaged and lose economic value before they return to shore to sell it. This goal is pursued through experimental treatments of fresh fish catches in containers with the addition of anti-bacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined these three variables with an electrically powered cooler (temperature 0~4 °C). As controls, untreated fresh fish (kept in the open air and in the refrigerator) were also prepared for comparison over 1, 3, and 6 days. To test the freshness of the fish for each treatment, physical observations were used, complemented by tests for bacterial content in a trusted laboratory. The copper (Cu) content of the fish flesh (suspected of having a negative impact on consumers) was also examined on the 6th day of the experiment.
The results of physical observations on the test specimens (organoleptic method) showed that preservation assisted by the cooler was better across all treatment variables. Among the specimens without cooling, the best preservation effectiveness was obtained, in order, by the addition of copper plates, the use of vacuum containers, and then liquid smoke immersion. In the case of liquid smoke, soaking over 6 days of preservation made the fish flesh soft and easy to crumble, even though it did not develop a bad odor. The visual observations were complemented by measurements of the growth (or retardation) of putrefactive bacteria in each test specimen over the same observation periods. Laboratory measurements showed that the lowest putrefactive bacterial counts were achieved by the treatment combining the cooler with liquid smoke (sample A+), followed by the cooler alone (D+), the copper layer inside the cooler (B+), and the vacuum container inside the cooler (C+), respectively. The open-air treatments produced a hundred times more putrefactive bacteria. In addition, the copper-layer treatment raised the copper content of the preserved fish more than a thousandfold compared with the initial amount, from 0.69 to 1241.68 µg/g.

Keywords: fish, preservation, traditional, fishermen, boat

Procedia PDF Downloads 48
148 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System

Authors: A. Chávez, A. Rodríguez, F. Pinzón

Abstract:

Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. Disposal sites, known as landfills, are locations designed to reduce problems of pollution and damage to human health: they are technically designed and operated using engineering principles, storing the residue in a small area, compacting it to reduce volume, and covering it with soil layers, thereby controlling the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning and site selection for disposal, and monitoring and control of the selected processes, the dilemma of leachate remains: its extreme concentration of pollutants devastates soil, flora, and fauna, and these aggressive processes require priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; removes suspended and non-settleable solids; removes nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria, forming heterogeneous populations. Unicellular fungi, algae, protozoa, and rotifers may also be found; they process the organic carbon source and oxygen, as well as the nitrogen and phosphorus that are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses, and wastewater, is kept aerated by mechanical aeration diffusers, given that the biological processes remove dissolved material (< 45 microns) and generate biomass, which is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen (O) in nitrate, yielding nitrogen (N) in the gas phase.
Thus, the negative effects of the presence of ammonia or phosphorus are avoided. Overall, the activated sludge system involves about 8 hours of hydraulic retention time, which does not prevent the demand for nitrification, occurring on average at an MLSS of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average solids retention time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogota, Colombia, in which the leachate was subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR) before discharge into water bodies, avoiding ecological collapse. The system worked with a dwell time of 8 days and a 30 L capacity, removing more than 90% of BOD and COD from initial values of 1720 mg/L and 6500 mg/L, respectively. By promoting deliberate nitrification, it is expected that diffused-aeration systems can be used commercially for sludge leachate from landfills.
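The reported removals above 90% follow from the standard removal-efficiency formula; a trivial helper, with hypothetical effluent values chosen to show the 90% boundary implied by the stated influent concentrations:

```python
def removal_efficiency(influent_mg_l, effluent_mg_l):
    """Percent removal of a pollutant across the reactor:
    (C_in - C_out) / C_in * 100."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l
```

With influent BOD of 1720 mg/L and COD of 6500 mg/L, removals above 90% imply effluent concentrations below 172 mg/L and 650 mg/L, respectively.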

Keywords: sludge, landfill, leachate, SBR

Procedia PDF Downloads 245
147 The Monitor for Neutron Dose in Hadrontherapy Project: Secondary Neutron Measurement in Particle Therapy

Authors: V. Giacometti, R. Mirabelli, V. Patera, D. Pinci, A. Sarti, A. Sciubba, G. Traini, M. Marafini

Abstract:

Particle therapy (PT) is a very modern technique of non-invasive radiotherapy, mainly devoted to the treatment of tumours that are untreatable with surgery or conventional radiotherapy because they are localised close to organs at risk (OaR). Nowadays, PT is available in about 55 centres in the world, and only 20% of them are able to treat with carbon ion beams; however, the efficiency of ion-beam treatments is so impressive that many new centres are under construction. The interest in this powerful technology lies in the main characteristic of PT: the high irradiation precision and conformity of the dose released to the tumour, with simultaneous preservation of the adjacent healthy tissue. However, the beam's interactions with the patient produce a large component of secondary particles whose additional dose has to be taken into account during the definition of the treatment plan. Although the largest fraction of the dose is released to the tumour volume, a non-negligible amount is deposited in other body regions, mainly due to the scattering and nuclear interactions of neutrons within the patient's body. One of the main concerns in PT treatments is the possible occurrence of secondary malignant neoplasms (SMN). While SMNs can develop up to decades after treatment, their incidence directly impacts the quality of life of cancer survivors, particularly pediatric patients. Dedicated Treatment Planning Systems (TPS) are used to predict normal tissue toxicity, including the risk of late complications induced by the additional dose released by secondary neutrons. However, no precise measurement of the secondary neutron flux is available, nor of its energy and angular distributions: an accurate characterization is needed in order to improve TPS and reduce safety margins.
The MONDO project (MOnitor for Neutron Dose in hadrOntherapy) is devoted to the construction of a secondary neutron tracker tailored to the characterization of this secondary neutron component. The detector, based on the tracking of recoil protons produced in double elastic scattering interactions, is a matrix of thin scintillating fibres arranged in alternately x- and y-oriented layers. The final size of the detector is 10 × 10 × 20 cm³ (square 250 µm scintillating fibres, double cladding). The readout of the fibres is carried out with a dedicated SPAD array sensor (SBAM) realised in CMOS technology by FBK (Fondazione Bruno Kessler). The detector and the SBAM sensor are under development, and construction is expected to be complete by the end of the year. MONDO will carry out data-taking campaigns at the TIFPA Proton Therapy Center in Trento, at CNAO (Pavia), and at HIT (Heidelberg) with carbon ions, in order to characterize the neutron component, predict the additional dose delivered to patients with much greater precision, and drastically reduce the current safety margins. Preliminary measurements with charged-particle beams and Monte Carlo FLUKA simulations will be presented.

Keywords: secondary neutrons, particle therapy, tracking detector, elastic scattering

Procedia PDF Downloads 203
146 Masstige and the New Luxury: An Exploratory Study on Cosmetic Brands Among Black African Woman

Authors: Melanie Girdharilall, Anjli Himraj, Shivan Bhagwandin, Marike Venter De Villiers

Abstract:

The allure of luxury has long been attractive, fashionable, mystifying, and complex. As globalisation and the popularity of social media continue to evolve, consumers increasingly seek status products. However, in emerging economies like South Africa, where 60% of the population lives in poverty, this desire is often out of reach for most consumers. As a result, luxury brands are introducing masstige products: products associated with luxury and status but within financial reach of the middle-class consumer. The biggest challenge this industry faces is the lack of knowledge and expertise on black females' hair composition, and thus of products that meet its intricate requirements. African consumers have unique hair types, and global brands often do not accommodate the complex nature of their hair and their product needs. By gaining insight into this phenomenon, global cosmetic brands can benefit from brand expansion, product extensions, increased brand awareness, brand knowledge, and brand equity. The purpose of this study is to determine how cosmetic brands can leverage the concept of masstige products to cater to the needs of middle-income black African women. This study explores the 18- to 35-year-old black female cohort, which comprises approximately 17% of the South African population. The black hair care industry in Africa is expected to achieve a 6% growth rate over the next five years. The study is grounded in Paul’s (2019) three-phase model for masstige marketing, which demonstrates that product, promotion, and place strategies play a significant role in masstige value creation, and that these strategies affect branding dimensions (brand trust, brand association, brand positioning, brand preference, etc.). More specifically, this theoretical framework encompasses nine stages, or dimensions, of critical importance to companies planning to enter the masstige market.
In short, the most critical components to consider are, first, the positioning of the product and its competitive advantage relative to competitors; second, advertising appeals and the use of celebrities; and last, distribution channels such as online or in-store, while maintaining the exclusivity of the brand. An exploratory, qualitative approach was undertaken, and focus groups were conducted among black African women. The focus groups were voice recorded, transcribed, and analysed using Atlas software. The main themes were identified and used to provide brands with insight and direction for developing a comprehensive marketing mix for effectively entering the masstige market. The findings of this study provide marketing practitioners with in-depth insight into how to position masstige brands effectively in line with consumer needs, and give direction to both existing and new brands aiming to enter this market by offering a comprehensive marketing mix for targeting the growing black hair care industry in Africa.

Keywords: Africa, masstige, cosmetics, hair care, black females

Procedia PDF Downloads 63
145 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity remains a demanding question for thermophysical researchers, and very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely tied to the setup and confirmation of equations of state. Reliable transport data are also important for the optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, precise data for the parameters of heat, mass, and momentum transport are required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is, to our knowledge, the first to recast the heat conduction problem, which leads to polynomial velocity and temperature profiles, as an algorithm for investigating transport properties and their nonlinear behaviors in NICDPLs. The aim of the proposed work is to implement a NEMD (Poiseuille flow) algorithm and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output is developed over 1.5×10⁵/ωp to 3.0×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum of λ shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulations, generally differing from the earlier plasma λ₀ values by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
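The state point of a Yukawa liquid in this abstract is set by the coupling parameter Γ and the screening parameter κ; their standard definitions can be computed from dimensional inputs as below. The grain charge, density, temperature, and Debye length in the example are purely illustrative, not values from the paper:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
KB = 1.380649e-23           # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def yukawa_parameters(grain_charge, number_density, temperature, debye_length):
    """Standard dimensionless Yukawa parameters:
    coupling  Gamma = Q^2 / (4 pi eps0 a kB T),
    screening kappa = a / lambda_D,
    with a = (3 / (4 pi n))^(1/3) the Wigner-Seitz radius (3D)."""
    a = (3.0 / (4.0 * math.pi * number_density)) ** (1.0 / 3.0)
    gamma = grain_charge**2 / (4.0 * math.pi * EPS0 * a * KB * temperature)
    kappa = a / debye_length
    return gamma, kappa
```

Γ measures interaction energy against thermal energy (strong coupling for large Γ), and κ measures how strongly the background plasma screens the grain charge; the abstract's λ(Γ, κ) dependence is expressed in exactly these variables.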

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 252
144 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance

Authors: Ammar Alali, Mahmoud Abughaban

Abstract:

Cost reduction and drilling optimization is the goal of many drilling operators. Historically, stuck pipe incidents have been a major component of the costs associated with non-productive time (NPT). Traditionally, stuck pipe problems are treated as part of operations and solved after sticking occurs. However, the real key to savings and success lies in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real time using surface drilling data with minimal computational power. The method combines two types of analysis: (1) real-time prediction and (2) cause analysis. Real-time prediction aggregates the input data, including historical surface drilling data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses these two physical operations (stacking and flattening) to filter noise from the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the live surface data in Wellsite Information Transfer Standard Markup Language (WITSML) are fed into a matrix and aggregated at a frequency similar to the pre-determined signature. The matrix is then correlated with the field's pre-determined stuck-pipe signature in real time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class while identifying redundant ones. The correlation output is interpreted as a probability curve for stuck pipe incidents in real time.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data from an onshore oil field as a live stream, mimicking actual drilling conditions. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully with an accuracy of 76%. This detection accuracy could have yielded around a 50% reduction in NPT, equivalent to a 9% cost saving compared with offset wells. Predicting the stuck-pipe problem requires a method that captures geological, geophysical, and drilling data and recognizes the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
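The workflow described above can be sketched in a few lines: flatten each incident well's trace onto a normalized depth axis inside the formation, stack across incidents to build the signature, then correlate a live window against it and alert on a threshold. This is an illustrative simplification, not the authors' implementation: the function names and data shapes are assumptions, and a plain Pearson correlation stands in for the CFS algorithm named in the abstract.

```python
import numpy as np

def build_signature(incident_logs, formation_top_base, n_samples=200):
    """Flatten each incident well's surface-drilling trace onto a common
    normalized-depth axis inside the formation, then stack (average) across
    incidents to suppress noise."""
    top, base = formation_top_base
    grid = np.linspace(0.0, 1.0, n_samples)
    flattened = []
    for depths, values in incident_logs:
        mask = (depths >= top) & (depths <= base)
        norm = (depths[mask] - top) / (base - top)   # flatten on the formation top
        flattened.append(np.interp(grid, norm, values[mask]))
    return np.mean(flattened, axis=0)                # stack across incidents

def stuck_pipe_probability(live_window, signature):
    """Correlate a live WITSML-style window against the field signature.
    Pearson correlation is used here as a stand-in for the CFS step."""
    r = np.corrcoef(live_window, signature)[0, 1]
    return max(0.0, float(r))                        # clip negative correlation to 0

def check_alert(live_window, signature, threshold=0.7):
    """Fire the cause-analysis alert once probability crosses the user threshold."""
    p = stuck_pipe_probability(live_window, signature)
    return p >= threshold, p
```

In practice the live matrix would carry several surface channels (hookload, torque, standpipe pressure) rather than the single trace shown here, and the threshold would be tuned per formation.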

Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe

Procedia PDF Downloads 187
143 Comparative Research on Culture-Led Regeneration across Cities in China

Authors: Fang Bin Guo, Emma Roberts, Haibin Du, Yonggang Wang, Yu Chen, Xiuli Ge

Abstract:

This paper explores the findings so far from a major externally funded project operating internationally in China, Germany and the UK. The research team is working in the context of the redevelopment of post-industrial sites in China and how these might serve as platforms for creative enterprises, allowing the economy and welfare to flourish. Results from the project are anticipated to inform urban design policies in China and possibly further afield. The research has utilised ethnographic studies and participatory design methods to investigate alternative strategies for the sustainable urban renewal of China’s post-industrial areas. Additionally, it has undertaken comparative studies of successful European and Chinese urban regeneration cases. The international cross-disciplinary team has been seeking opportunities for developing relevant creative industries whilst retaining cultural and industrial heritage. This paper explores the research conducted so far by the team and offers initial findings. The findings point to the challenges cities face in protecting local culture and heritage, preserving the history of their industries, and transforming local economies. The preliminary results and pilot analysis of the current research have demonstrated that local government policymakers, business investors/developers and creative industry practitioners are the three major stakeholders that will impact city revitalisation. These groups are expected to work together with a shared vision in order for redevelopments to be successful. Meanwhile, local geography, history, culture, politics, economy and ethnography have been identified as important factors that shape project design and development during urban transformation. Data is being processed from the team’s research conducted across the focal Western and Chinese cities.
This has provided theoretical guidance and practical support for the development of significant experimental projects. Many were re-examined from a more international perspective, and adjustments have been made based on the conclusions of the research. The observations and research are already generating design solutions for ascertaining essential site components, layouts, visual design and practical facilities for regenerated sites. Two significant projects undertaken by this team have been recognised by the central Chinese government as among the most successful exemplars and listed as outstanding national industrial heritage projects; in particular, one of them was nominated by ArchDaily as Building of the Year 2019, so this project outcome has made a substantial contribution to research and innovation. In summary, this paper outlines the funded project, discusses the work conducted so far, and pinpoints the initial discoveries. It details the future steps and indicates how these will impact national and local governments in China, designers, local citizens and building users.

Keywords: cultural & industrial heritages, ethnographic research, participatory design, regeneration of post-industrial sites, sustainable

Procedia PDF Downloads 126