Search results for: slip parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2254

1684 Choosing an Optimal Epsilon for Differentially Private Arrhythmia Analysis

Authors: Arin Ghazarian, Cyril Rakovski

Abstract:

Differential privacy has become the leading technique for protecting the privacy of individuals in a database while still allowing useful analysis to be performed and the results to be shared. It places a guarantee on the amount of privacy loss in the worst-case scenario. Differential privacy is not a toggle between full privacy and zero privacy; it controls the tradeoff between the accuracy of the results and the privacy loss using a single key parameter called epsilon (ε).
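
The epsilon tradeoff the abstract describes can be illustrated with the standard Laplace mechanism (a generic sketch, not the authors' implementation): noise with scale sensitivity/ε is added to a query result, so a smaller ε means stronger privacy and a noisier answer. The heart-rate values below are hypothetical.

```python
import numpy as np

def laplace_scale(sensitivity: float, epsilon: float) -> float:
    """Noise scale b = sensitivity / epsilon for the Laplace mechanism."""
    return sensitivity / epsilon

def private_mean(values, epsilon, lower, upper, rng=None):
    """Differentially private mean of bounded values (Laplace mechanism).

    Each record changes the mean by at most (upper - lower) / n,
    which is the sensitivity used to calibrate the noise.
    """
    rng = rng or np.random.default_rng()
    values = np.clip(np.asarray(values, dtype=float), lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(0.0, laplace_scale(sensitivity, epsilon))
    return values.mean() + noise

# Smaller epsilon -> larger noise scale -> stronger privacy guarantee.
heart_rates = [72, 80, 65, 90, 77]  # hypothetical ECG-derived values
print(private_mean(heart_rates, epsilon=0.5, lower=40, upper=180))
```

Choosing ε then amounts to deciding how much such noise the downstream arrhythmia analysis can tolerate.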

Keywords: arrhythmia, cardiology, differential privacy, ECG, epsilon, medical data, privacy preserving analytics, statistical databases

Procedia PDF Downloads 153
1683 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, and adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). 
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc) as well as common Neural Network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
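
The outlier-removal idea described above (discarding prediction points that fall outside the parameter space of the dynamic training data) can be sketched with a simple z-score filter. This is an illustrative rule only, not FreqAI's actual API; the feature data and threshold are invented.

```python
import numpy as np

def remove_outliers(train: np.ndarray, candidates: np.ndarray, z_max: float = 3.0):
    """Keep only candidate points that lie inside the training data's
    parameter space, defined here as within z_max standard deviations
    of the training mean on every feature (an illustrative rule)."""
    mean = train.mean(axis=0)
    std = train.std(axis=0) + 1e-12  # avoid division by zero
    z = np.abs((candidates - mean) / std)
    mask = (z <= z_max).all(axis=1)
    return candidates[mask], mask

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, size=(500, 3))   # recent training window
candidates = np.array([[0.1, -0.2, 0.3],       # typical point
                       [8.0, 0.0, 0.0]])       # clear outlier
kept, mask = remove_outliers(train, candidates)
print(mask)
```

Predictions made on filtered-out points would be skipped rather than trusted, since the model never saw that region of the parameter space.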

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 89
1682 Implementing of Indoor Air Quality Index in Hong Kong

Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang

Abstract:

Many Hong Kong people nowadays spend most of their lives working indoors. Since poor Indoor Air Quality (IAQ) potentially leads to discomfort, ill health, low productivity and even absenteeism in workplaces, statutory IAQ control to safeguard the well-being of residents is urgently needed. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed by remedial works, some of those workplaces or buildings were at a relatively late stage of their IAQ problems by the time investigation or remedial work started. Screening for IAQ problems should be initiated, as it provides the minimum baseline information required for the resolution of such problems. It is not practical to sample all air pollutants that exist. Nevertheless, for statutory control, reliable and rapid screening is essential under a compromise strategy that balances costs against the detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter that identifies unsatisfactory workplace IAQ and highlights where fully effective IAQ monitoring and assessment is needed for intensive diagnosis. A number of representative common indoor pollutants have already been identified from extensive IAQ assessments; the selection of pollutants serves as a surrogate for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ Index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the research team's IAQ Index as the pre-assessment probability, and the unsatisfactory IAQ prevalence from this study as the post-assessment probability, thresholds for maintaining the current measures or performing a further IAQ test or IAQ remedial measures will be proposed.
With justified resources, the proposed IAQ Index and assessment protocol might be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.
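
One simple way to realize a screening index of the kind described, scoring a space by the highest fractional quantity among its measured pollutants, is sketched below. The pollutant limits are illustrative placeholders, not Hong Kong's statutory IAQ objectives.

```python
def iaq_index(measured: dict, limits: dict) -> float:
    """Screening index = max fractional quantity C_i / limit_i across
    the monitored pollutants; a value above 1.0 flags unsatisfactory IAQ."""
    return max(measured[p] / limits[p] for p in measured)

# Illustrative limits only (not official Hong Kong IAQ objectives).
limits = {"CO2_ppm": 1000, "PM10_ugm3": 100, "CO_ppm": 8.7}
office = {"CO2_ppm": 850, "PM10_ugm3": 120, "CO_ppm": 1.5}

index = iaq_index(office, limits)
print(round(index, 2), "unsatisfactory" if index > 1.0 else "satisfactory")
```

A space flagged this way would then be referred for the full IAQ monitoring and assessment the abstract mentions.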

Keywords: assessment, index, indoor air quality, surveillance programme

Procedia PDF Downloads 268
1681 An Extension of the Generalized Extreme Value Distribution

Authors: Serge Provost, Abdous Saboor

Abstract:

A q-analogue of the generalized extreme value distribution which includes the Gumbel distribution is introduced. The additional parameter q allows for increased modeling flexibility. The resulting distribution can have a finite, semi-infinite or infinite support. It can also produce several types of hazard rate functions. The model parameters are determined by making use of the method of maximum likelihood. It will be shown that it compares favourably to three related distributions in connection with the modeling of a certain hydrological data set.
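
The q-analogue itself is not available in standard libraries, but the maximum-likelihood fitting workflow the abstract relies on can be sketched with the ordinary GEV distribution in SciPy (`genextreme`), as a stand-in for the extended model; the data below are synthetic, not the hydrological data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic annual-maximum style data from a known GEV(c=-0.1, loc=10, scale=2).
sample = stats.genextreme.rvs(c=-0.1, loc=10.0, scale=2.0, size=2000,
                              random_state=rng)

# Maximum likelihood estimation of the three GEV parameters.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)
print(c_hat, loc_hat, scale_hat)

# Log-likelihood at the MLE, usable for comparing related candidate models.
loglik = np.sum(stats.genextreme.logpdf(sample, c_hat, loc_hat, scale_hat))
print(loglik)
```

A q-extended model would add one more free parameter to the same likelihood-maximisation step, and the fitted log-likelihoods could then be compared across the candidate distributions.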

Keywords: extreme value theory, generalized extreme value distribution, goodness-of-fit statistics, Gumbel distribution

Procedia PDF Downloads 351
1680 Normal Weight Obesity among Female Students: BMI as a Non-Sufficient Tool for Obesity Assessment

Authors: Krzysztof Plesiewicz, Izabela Plesiewicz, Krzysztof Chiżyński, Marzenna Zielińska

Abstract:

Background: Obesity is an independent risk factor for cardiovascular diseases. Several anthropometric parameters have been proposed to estimate the level of obesity, but there is as yet no agreement on which is the best predictor of cardiometabolic risk. Scientists have defined metabolically obese normal weight individuals, who suffer from the same metabolic abnormalities as obese individuals, and termed this syndrome normal weight obesity (NWO). Aim of the study: The aim of our study was to determine the occurrence of overweight and obesity in a cohort of young, adult women, using standard and complementary methods of obesity assessment, and to identify those who are at risk of obesity. The second aim was to test additional methods of obesity assessment and to show that body mass index (BMI) used alone is not a sufficient parameter for obesity assessment. Materials and methods: 384 young women, aged 18-32, were enrolled into the study. Standard anthropometric parameters (waist to hips ratio (WTH), waist to height ratio (WTHR)) and two other methods of body fat percentage measurement (BFPM) were used in the study: electrical bioimpedance analysis (BIA) and the skinfold measurement test by digital fat body mass clipper (SFM). Results: In the study group, 5% and 7% of participants had waist to hips ratio and waist to height ratio values, respectively, associated with visceral obesity. According to BMI, 14% of participants were overweight or obese. Using the additional methods of body fat assessment, 54% and 43% were obese by the BIA and SFM methods, respectively. In the group of participants with normal BMI or underweight (not overweight, n=340), there were individuals with BFPM above the upper limit: 49% (n=164) for BIA and 36% (n=125) for SFM. Statistical analysis revealed a strong correlation between the BIA and SFM methods. Conclusion: BMI used alone is not a sufficient parameter for obesity assessment. A high percentage of young women with normal BMI values appear to be normal weight obese.
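
The study's central point, that a normal BMI can coexist with excess body fat, can be expressed as a small classification rule. The 30% body-fat cutoff below is an illustrative threshold, not necessarily the limit the authors used.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(weight_kg: float, height_m: float, body_fat_pct: float,
             bf_limit: float = 30.0) -> str:
    """Flag normal weight obesity (NWO): normal BMI (18.5-24.9) but
    body fat percentage above an assumed upper limit."""
    b = bmi(weight_kg, height_m)
    if b >= 25.0:
        return "overweight/obese by BMI"
    if 18.5 <= b < 25.0 and body_fat_pct > bf_limit:
        return "normal weight obesity"
    return "normal"

# A subject with normal BMI (~21.6) but 34% body fat is missed by BMI alone.
print(classify(62.0, 1.695, 34.0))
```

BMI screening alone would label this subject "normal"; only the added body-fat measurement (BIA or skinfold) exposes the NWO case.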

Keywords: electrical bioimpedance, normal weight obesity, skin-fold measurement test, women

Procedia PDF Downloads 275
1679 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Among them, trailing edge flaps have been proven effective devices for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture capabilities rather than blade load alleviation. A software tool is specially developed for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated. The simulation of the control system is carried out by solving an optimisation problem which gives the best value of the controlling parameter at each wind turbine run condition. By developing a genetic algorithm optimisation tool specially designed for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps is constructed. The design optimisation tool is employed to carry out design case studies. The results of design case studies on the AWT 27 wind turbine reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for placing a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the amount of enhancement in the average power; this effect, however, diminishes dramatically as the size increases. For constant speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by as much as about 5%. However, by re-designing the blade pretwist, the overall improvement can reach as high as 12%.
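
The coupling described above, a genetic algorithm driving an aerodynamic evaluator, can be sketched with a toy fitness function. The surrogate power model below is purely illustrative and merely stands in for the actual aerodynamic simulation; it is built so that gain peaks near 70% span and diminishes with flap size, mirroring the reported trends.

```python
import numpy as np

def power_gain(loc: float, size: float) -> float:
    """Toy surrogate for the aerodynamic evaluator (illustrative only)."""
    return np.exp(-((loc - 0.7) ** 2) / 0.02) * size / (1.0 + 5.0 * size)

def genetic_search(pop_size=30, generations=60, seed=1):
    rng = np.random.default_rng(seed)
    # Each individual: (flap location in [0, 1] span, flap size in (0, 0.3]).
    pop = np.column_stack([rng.uniform(0, 1, pop_size),
                           rng.uniform(0.01, 0.3, pop_size)])
    for _ in range(generations):
        fitness = np.array([power_gain(l, s) for l, s in pop])
        # Elitist selection: keep the better half, refill with mutated copies.
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]
        children = parents + rng.normal(0, 0.02, parents.shape)
        children[:, 0] = np.clip(children[:, 0], 0.0, 1.0)
        children[:, 1] = np.clip(children[:, 1], 0.01, 0.3)
        pop = np.vstack([parents, children])
    return max(pop, key=lambda ind: power_gain(*ind))

loc, size = genetic_search()
print(round(loc, 2), round(size, 2))  # best location should be near 0.7 span
```

In the real tool, `power_gain` is replaced by the aerodynamic performance evaluator, which is what makes the optimisation expensive and the GA's population-based search attractive.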

Keywords: flaps, design blade, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 337
1678 Non-Linear Dynamic Analyses of Grouted Pile-Sleeve Connection

Authors: Mogens Saberi

Abstract:

The focus of this article is to present the experience gained from the design of a grouted pile-sleeve connection and to present simple design expressions which can be used in the preliminary design phase of such connections. The grouted pile-sleeve connection serves as a connection between an offshore jacket foundation and pre-installed piles located in the seabed. The jacket foundation supports a wind turbine generator, resulting in significant dynamic loads on the connection. The connection is designed with shear keys in order to optimize the overall design, but little experience is currently available in the use of shear keys in such connections. It is found that the consequence of introducing shear keys in the design is a very complex stress distribution which requires special attention due to significant fatigue loads. An optimal geometrical shape of the shear keys is introduced in order to avoid large stress concentration factors and to allow relatively easy fabrication. The connection is analysed in ANSYS Mechanical, where the grout is modelled by a non-linear material model which allows for cracking of the grout material and captures its elastic-plastic behaviour. Special types of finite elements are used in the interface between the pile sleeve and the grout material to model the slip surface between the grout material and the steel. Based on the performed finite element modelling, simple design expressions are introduced.

Keywords: fatigue design, non-linear finite element modelling, structural dynamics, simple design expressions

Procedia PDF Downloads 386
1677 New Employee on-Boarding Program: Effective Tool for Reducing the Prevalence of Workplace Injuries/Accidents

Authors: U. Ugochukwu, J. Lee, P. Conley

Abstract:

According to a recent survey by the UT Southwestern Workplace Safety Committee, the three most common on-the-job injuries reported by workers at the medical center are musculoskeletal injuries, slip-and-fall injuries and repetitive motion injuries. Last year alone, of the 650 documented workplace injuries and accidents, 45% occurred in employees in their first two years of employment. The UT Southwestern New Employee On-Boarding program was created and modeled to follow OSHA's model, which consists of: determining if training is needed, identifying training needs, identifying goals and objectives, developing learning activities, conducting the training, evaluating program effectiveness, and improving the program. The hospital's management best practices were recreated to limit and control workplace injuries and accidents. Regular trainings and workshops on workplace safety and compliance were initiated for new employees. Various computer workstations were evaluated and recommendations were made to reduce musculoskeletal disorders. Post-exposure protocols and worker protection programs were remodeled for infectious agents and chemicals used in the hospital, and medical surveillance programs were updated, for every emerging threat, to ensure compliance with US policy, regulatory, and standard-setting organizations. If ignorance of specific job hazards and of proper work practices is to blame for this higher injury rate, then training will help to provide a solution. Use of this program in training activities is just one of many ways UT Southwestern complied with OSHA standards related to training while enhancing the safety and health of its employees.

Keywords: ergonomics, hazard, on-boarding, surveillance, workplace

Procedia PDF Downloads 330
1676 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process, since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software package based on a finite element discretization). As the van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and of the saturated conductivity of the filter on the piezometric heads, during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) in order to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of storm events, we propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied.
To that end, a variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
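
The van Genuchten-Mualem parametrization at the heart of the calibration can be written out directly; the parameter values below are generic illustrative ones, not the calibrated Ostwaldergraben values.

```python
import numpy as np

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve theta(h); h is the pressure
    head (negative in the unsaturated zone, in the same units as 1/alpha)."""
    h = np.asarray(h, dtype=float)
    m = 1.0 - 1.0 / n
    Se = np.where(h < 0, (1.0 + np.abs(alpha * h) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def K_mualem(h, Ks, theta_r, theta_s, alpha, n, l=0.5):
    """Mualem hydraulic conductivity K(h) from the same parameters."""
    m = 1.0 - 1.0 / n
    Se = (theta_vg(h, theta_r, theta_s, alpha, n) - theta_r) / (theta_s - theta_r)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Illustrative sandy-filter parameters (not the calibrated field values).
pars = dict(theta_r=0.045, theta_s=0.43, alpha=14.5, n=2.68)  # alpha in 1/m
print(theta_vg(0.0, **pars))           # saturated: returns theta_s
print(K_mualem(0.0, Ks=8.25, **pars))  # saturated: returns Ks (m/day)
```

The sensitivity of the piezometric heads to α, n and Ks mentioned in the abstract follows directly from how strongly these two functions respond to those parameters.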

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 164
1675 On Bianchi Type Cosmological Models in Lyra’s Geometry

Authors: R. K. Dubey

Abstract:

Bianchi type cosmological models have been studied on the basis of Lyra's geometry. Exact solutions have been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.
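
For reference, the constant deceleration parameter assumed in such solutions is the standard definition (a sketch, with a(t) the average scale factor):

```latex
q = -\frac{a\,\ddot{a}}{\dot{a}^{2}} = \text{const.}
\qquad\Longrightarrow\qquad
a(t) \propto t^{\,1/(1+q)} \quad (q \neq -1),
```

so a constant q yields a power-law scale factor, which is what makes the exact solutions tractable.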

Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model

Procedia PDF Downloads 355
1674 Navigating the Nexus of HIV/AIDS Care: Leveraging Statistical Insight to Transform Clinical Practice and Patient Outcomes

Authors: Nahashon Mwirigi

Abstract:

The management of HIV/AIDS is a global challenge, demanding precise tools to predict disease progression and guide tailored treatment. CD4 cell count dynamics, a crucial immune function indicator, play an essential role in understanding HIV/AIDS progression and enhancing patient care through effective modeling. While several models assess disease progression, existing methods often fall short in capturing the complex, non-linear nature of HIV/AIDS, especially across diverse demographics. A need exists for models that balance predictive accuracy with clinical applicability, enabling individualized care strategies based on patient-specific progression rates. This study utilizes patient data from Kenyatta National Hospital (2003–2014) to model HIV/AIDS progression across six CD4-defined states. The Exponential, 2-Parameter Weibull, and 3-Parameter Weibull models are employed to analyze failure rates and explore progression patterns by age and gender. Model selection is based on Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) to identify models best representing disease progression variability across demographic groups. The 3-Parameter Weibull model emerges as the most effective, accurately capturing HIV/AIDS progression dynamics, particularly by incorporating delayed progression effects. This model reflects age and gender-specific variations, offering refined insights into patient trajectories and facilitating targeted interventions. One key finding is that older patients progress more slowly through CD4-defined stages, with a delayed onset of advanced stages. This suggests that older patients may benefit from extended monitoring intervals, allowing providers to optimize resources while maintaining consistent care. Recognizing slower progression in this demographic helps clinicians reduce unnecessary interventions, prioritizing care for faster-progressing groups. 
Gender-based analysis reveals that female patients exhibit more consistent progression, while male patients show greater variability. This highlights the need for gender-specific treatment approaches, as men may require more frequent assessments and adaptive treatment plans to address their variable progression. Tailoring treatment by gender can improve outcomes by addressing distinct risk patterns in each group. The model’s ability to account for both accelerated and delayed progression equips clinicians with a robust tool for estimating the duration of each disease stage. This supports individualized treatment planning, allowing clinicians to optimize antiretroviral therapy (ART) regimens based on demographic factors and expected disease trajectories. Aligning ART timing with specific progression patterns can enhance treatment efficacy and adherence. The model also has significant implications for healthcare systems, as its predictive accuracy enables proactive patient management, reducing the frequency of advanced-stage complications. For resource limited providers, this capability facilitates strategic intervention timing, ensuring that high-risk patients receive timely care while resources are allocated efficiently. Anticipating progression stages enhances both patient care and resource management, reinforcing the model’s value in supporting sustainable HIV/AIDS healthcare strategies. This study underscores the importance of models that capture the complexities of HIV/AIDS progression, offering insights to guide personalized, data-informed care. The 3-Parameter Weibull model’s ability to accurately reflect delayed progression and demographic risk variations presents a valuable tool for clinicians, supporting the development of targeted interventions and resource optimization in HIV/AIDS management.
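
A minimal sketch of the model-selection step described above, using SciPy's `weibull_min` (whose `loc` parameter supplies the third, delayed-onset parameter) and the standard AIC formula. The data are synthetic stage durations, not the Kenyatta National Hospital records.

```python
import numpy as np
from scipy import stats

def aic(k: int, loglik: float) -> float:
    """Akaike Information Criterion: 2k - 2 ln L."""
    return 2 * k - 2 * loglik

rng = np.random.default_rng(3)
# Synthetic stage-duration data with a delayed onset (loc = 2 months).
data = stats.weibull_min.rvs(c=1.5, loc=2.0, scale=6.0, size=1500,
                             random_state=rng)

# 2-parameter fit (loc pinned at 0) versus 3-parameter fit (loc free).
c2, _, s2 = stats.weibull_min.fit(data, floc=0.0)
ll2 = stats.weibull_min.logpdf(data, c2, 0.0, s2).sum()
c3, loc3, s3 = stats.weibull_min.fit(data)
ll3 = stats.weibull_min.logpdf(data, c3, loc3, s3).sum()

print(aic(2, ll2), aic(3, ll3))  # the 3-parameter model should score lower
```

When the data genuinely contain a delayed onset, the free location parameter buys far more likelihood than its AIC penalty costs, which mirrors the paper's preference for the 3-parameter Weibull.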

Keywords: HIV/AIDS progression, 3-parameter Weibull model, CD4 cell count stages, antiretroviral therapy, demographic-specific modeling

Procedia PDF Downloads 14
1673 Lactate in Critically Ill Patients an Outcome Marker with Time

Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah

Abstract:

Introduction: Static derangements in lactate homeostasis during the ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, or kinetic alterations of anaerobic metabolism, make lactate a potential parameter for evaluating disease severity and intervention adequacy. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: To compare the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with that of the more commonly used static values. Patients and Methods: This study included 40 critically ill patients above 18 years old of both sexes with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which included all causes of low O2. Six lactate indices specifically relating to the first 24 hours of ICU admission were considered: three static indices and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most of the laboratory results including ABG, and the need for mechanical ventilation. Admission lactate was significantly higher in the low-oxygen transport group than in the septic group [37.5±11.4 versus 30.6±7.8, P-value 0.034]. Maximum lactate was significantly higher in the low-oxygen transport group than in the septic group (P-value 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (P-value < 0.001). Percentage change of lactate was higher in the septic group (47.8±11.3) than in the low-oxygen transport group (26.1±12.6), with a highly significant P-value (< 0.001). Lastly, time-weighted lactate was higher in the low-oxygen transport group (1.72±0.81) than in the septic group (1.05±0.8), with a significant P-value (0.012). There were statistically significant differences regarding lactate indices between survivors and non-survivors, whether in the septic or the low-oxygen transport group. Conclusion: In critically ill patients, time-weighted lactate and percent lactate change in the first 24 hours can be independent predictive factors of ICU mortality. Also, a rising compared to a falling blood lactate concentration over the first 24 hours can be associated with a significant increase in the risk of mortality.
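
The two dynamic indices highlighted in the conclusion can be computed straightforwardly; the lactate series below is invented for illustration, and the exact formulas the authors used are not specified in the abstract.

```python
def percent_change(series):
    """Percentage change of lactate over the observation window:
    (first - last) / first * 100; positive values mean clearance."""
    return (series[0] - series[-1]) / series[0] * 100.0

def time_weighted(times_h, series):
    """Time-weighted lactate: area under the lactate-time curve
    (trapezoidal rule) divided by the total duration."""
    area = sum((series[i] + series[i + 1]) / 2.0 * (times_h[i + 1] - times_h[i])
               for i in range(len(series) - 1))
    return area / (times_h[-1] - times_h[0])

# Hypothetical first-24-hour lactate measurements (mmol/L).
times = [0, 6, 12, 24]
lactate = [4.0, 3.0, 2.5, 2.0]
print(percent_change(lactate))    # 50.0 -> falling lactate, lower risk
print(time_weighted(times, lactate))
```

A rising series would give a negative percent change and a higher time-weighted value, the pattern the study associates with increased mortality risk.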

Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism

Procedia PDF Downloads 242
1672 Search for EEG Correlates of Mental States Using EEG Neurofeedback Paradigm

Authors: Cyril Kaplan

Abstract:

Twenty-six participants played four EEG neurofeedback (NF) games and were encouraged to find their own strategies to control the specific NF parameter. A mixed-methods analysis of performance in the games and post-session interviews led to the identification of states of consciousness that correlated with success in the game. We found that an increase in left frontal beta activity was facilitated by evoking interest in the observed surroundings, for example by wondering what is happening behind the window or what lies in a drawer in front.
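
A typical way to extract a frontal beta NF parameter from an EEG channel is Welch band power; this is a generic sketch with synthetic data, since the study's actual signal-processing pipeline is not specified.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Average power spectral density in [f_lo, f_hi] Hz via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 250  # Hz, a common EEG sampling rate
t = np.arange(0, 10, 1 / fs)
# Synthetic 'left frontal' channel dominated by 20 Hz (beta) activity.
eeg = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

beta = band_power(eeg, fs, 13, 30)
alpha = band_power(eeg, fs, 8, 12)
print(beta > alpha)
```

An NF game would map this beta-band power (or a beta/alpha ratio) to the feedback signal the participant learns to drive.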

Keywords: EEG neurofeedback, states of consciousness, frontal beta activity, mixed methods

Procedia PDF Downloads 142
1671 Establishment of Landslide Warning System Using Surface or Sub-Surface Sensors Data

Authors: Neetu Tyagi, Sumit Sharma

Abstract:

The study illustrates the results of an integrated study conducted on the Tangni landslide located on NH-58 at Chamoli, Uttarakhand. Geological, geomorphological and geotechnical investigations were carried out to understand the mechanism of the landslide and to plan further investigation and monitoring. The movements were favored by continuous infiltration of rainfall water from the zones where the phyllites/slates and dolomites outcrop. The site investigations, including the monitoring of landslide movements and of water level fluctuations due to rainfall, give a better understanding of the landslide dynamics that have been causing soil instability at the Tangni landslide site. For the Early Warning System (EWS), different types of sensors were installed; all sensors were directly connected to a data logger, and raw data were transferred to the Defence Terrain Research Laboratory (DTRL) server room with the help of the File Transfer Protocol (FTP). The slip surfaces were found at depths ranging from 8 to 10 m by geophysical survey, and sensors were hence installed to a depth of 15 m at various locations on the landslide. Rainfall is the main triggering factor of the landslide. In this study, a model of unsaturated soil slope stability is developed. Analysis of one year of sensor data indicated a sliding surface at depths between 6 and 12 m, with a total displacement of up to 6 cm per year recorded at the body of the landslide. The aim of this study is to set thresholds and generate early warnings; local people can be alerted to an impending landslide if any type of warning system is in place.
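
The thresholding step the authors aim at can be sketched as a simple rule over the sensor stream; the trigger values below are placeholders, not the calibrated Tangni thresholds.

```python
def warning_level(displacement_rate_mm_day: float, rainfall_mm_day: float,
                  disp_limit: float = 2.0, rain_limit: float = 50.0) -> str:
    """Two-level early warning rule (illustrative thresholds):
    ALERT when both triggers exceed their limits, WATCH when one does."""
    triggers = [displacement_rate_mm_day > disp_limit,
                rainfall_mm_day > rain_limit]
    if all(triggers):
        return "ALERT"
    if any(triggers):
        return "WATCH"
    return "NORMAL"

print(warning_level(0.1, 10.0))   # quiet slope in dry weather
print(warning_level(3.5, 12.0))   # movement without heavy rain
print(warning_level(3.5, 80.0))   # movement during heavy rainfall
```

In a deployed EWS, the rule would run on the logger's incoming FTP stream, and the thresholds would be set from the year of displacement and rainfall records described above.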

Keywords: early warning system, file transfer protocol, geo-morphological, geotechnical, landslide

Procedia PDF Downloads 158
1670 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement

Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji

Abstract:

The study examined the marginal productivity of small scale yam and cassava farmers in Kogi State, Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis (DEA) and a Cobb-Douglas production function were used to analyze the data. The DEA result on the overall technical efficiency of the farmers showed that 40% of the sampled yam and cassava farmers in the study area were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting the output of yam and cassava farmers showed that labour, planting materials, fertilizer and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis further showed that relatively efficient farms were more marginally productive in resource utilization. This study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area should form cooperative societies so as to enable them to have access to the productive inputs that will enable them to expand. Also, since using a single equation model for the production function produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before production function estimation is done.
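
The Cobb-Douglas estimation step can be sketched as log-linear least squares; the input data below are synthetic, generated from known elasticities so the recovery can be checked, and do not come from the survey.

```python
import numpy as np

def fit_cobb_douglas(output, inputs):
    """Estimate ln(y) = ln(A) + sum_i b_i ln(x_i) by least squares.
    Returns (A, elasticities); elasticities summing above 1 indicate
    increasing returns to scale."""
    X = np.column_stack([np.ones(len(output))] + [np.log(x) for x in inputs])
    coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
    return np.exp(coef[0]), coef[1:]

rng = np.random.default_rng(0)
labour = rng.uniform(1, 10, 200)
capital = rng.uniform(1, 10, 200)
# Synthetic farms with known technology y = 2 * L^0.7 * K^0.5.
y = 2.0 * labour ** 0.7 * capital ** 0.5

A, betas = fit_cobb_douglas(y, [labour, capital])
print(A, betas, "RTS =", betas.sum())  # recovers 2.0, [0.7, 0.5], RTS 1.2
```

The sum of elasticities (1.2 here) exceeding one is exactly the increasing-returns-to-scale condition the study reports; fitting efficient and inefficient farms as one pool would blend two different sets of elasticities, which is the bias the authors warn about.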

Keywords: marginal productivity, DEA, production function, Kogi state

Procedia PDF Downloads 484
1669 Lithuanian Sign Language Literature: Metaphors at the Phonological Level

Authors: Anželika Teresė

Abstract:

In order to solve issues in sign language linguistics, address matters pertaining to maintaining high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created by using the phonological parameters: handshape, location, movement, palm orientation and non-manual features. The study covered in this presentation is twofold, involving both a micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and a macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature in a range of SLs, and the study follows this practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs the ELAN software widely used in SL research. The target is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results of the study show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context has revealed that the latter metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study has revealed locations, non-manual features, etc., never mentioned in previous SL research as being used for the creation of metaphors.

Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics

Procedia PDF Downloads 137
1668 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimizing the parameters of parts machined by turning and milling. The quality characteristic chosen is surface roughness, since surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface was designed as the test specimen. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center was used for milling, and a HAAS ST-20 CNC machine for turning. Taguchi analysis was used to optimize the surface roughness of the machined parts. An L9 orthogonal array was designed for four controllable factors with three levels each; running it for both processes resulted in 18 experimental runs. The signal-to-noise (S/N) ratio was calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process were feed rate, depth of cut, coolant flow and finish cut, and for the milling process feed rate, spindle speed, step-over and coolant flow. The uncontrollable factors were tool geometry for turning and tool material for milling. Hypothesis testing was conducted to study the significance of the uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis; the process capability Cp and process capability index Cpk improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. Surface roughness was brought closer to the target, from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process.
The purpose of this study is to efficiently utilize Taguchi design analysis to improve surface roughness.
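The nominal-the-best S/N ratio and the capability indices Cp and Cpk used above are straightforward to compute; the sketch below uses invented roughness readings against the stated 75 ± 15 µin specification, not the study's data.

```python
import math

def sn_nominal_the_best(measurements):
    """Taguchi signal-to-noise ratio for a nominal-the-best response:
    S/N = 10 * log10(mean^2 / variance)."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

def process_capability(measurements, lsl, usl):
    """Cp and Cpk for a two-sided specification [lsl, usl]."""
    n = len(measurements)
    mean = sum(measurements) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
    cp = (usl - lsl) / (6 * sd)
    cpk = min(usl - mean, mean - lsl) / (3 * sd)
    return cp, cpk

# Invented roughness readings (µin) against the 75 ± 15 µin specification
readings = [74.0, 76.5, 73.8, 75.9, 74.6]
print(sn_nominal_the_best(readings))
print(process_capability(readings, lsl=60.0, usl=90.0))
```

A higher S/N ratio indicates less variation around the target; Cpk approaches Cp as the process mean centres within the specification limits.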

Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling

Procedia PDF Downloads 158
1667 Spatial Analysis of the Impact of City Development on the Degradation of Green Space in the Eastern Urban Fringe of Yogyakarta, 2005-2010

Authors: Pebri Nurhayati, Rozanah Ahlam Fadiyah

Abstract:

Urban development often encroaches on rural areas, and the accompanying change in land use leads to the degradation of urban green space on the city fringe. In the long run, this degradation of green open space can harm ecological conditions, psychological well-being, and public health. This research therefore aims to (1) determine the relationship between urban development parameters and the rate of green-space degradation, and (2) develop a spatial model of the impact of urban development on the degradation of green open space using remote sensing techniques and Geographic Information Systems in an integrated manner. This is descriptive research with observation and secondary data as the data collection techniques. In the data analysis, 2005-2010 ASTER imagery processed with NDVI was required to interpret the direction of urban development and the degradation of green open space. This interpretation generates two maps: a built-up land map and a green open space degradation map. Secondary data on the population growth rate, the level of accessibility, and the main activities of each town were processed into a population growth rate map, an accessibility map, and a main activities map. Each map was used as a parameter of green-space degradation and analyzed with the non-parametric Crosstab method, yielding the contingency coefficient C, which was then compared with Cmax to determine the strength of the relationship. The research produces a spatial model map of the impact of city development on the degradation of green space in the eastern urban fringe of Yogyakarta, 2005-2010, together with the statistical test results of each parameter against green-space degradation in the same area and period.
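The NDVI computation underlying the degradation maps can be sketched as follows; the toy 2x2 reflectance grids stand in for the actual ASTER near-infrared and red bands, which are not reproduced here.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red).
    eps guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Toy reflectance grids standing in for the NIR and red ASTER bands
nir_band = [[0.5, 0.4], [0.3, 0.6]]
red_band = [[0.1, 0.2], [0.3, 0.1]]
print(ndvi(nir_band, red_band))
```

Comparing NDVI rasters from two dates (here, 2005 and 2010) pixel by pixel is the usual way such a vegetation-loss map is derived.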

Keywords: spatial analysis, urban development, degradation of green space, urban fringe

Procedia PDF Downloads 314
1666 Analysis of Direct Current Motor in LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

DC motors were widely used over the past century and were proudly known as the workhorse of industrial systems until the invention of the AC induction motor, which brought a huge revolution to industry. Since then, the use of DC machines has declined, largely because of their losses, despite their advantages in reliability, robustness and simplicity of control. A methodology is proposed to model a DC motor through simulation in LabVIEW to gain insight into its real-time performance and to examine whether a change in parameters might yield a larger improvement in losses and reliability.
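As a rough illustration of the kind of model such a simulation would implement, the standard permanent-magnet DC motor state equations can be integrated numerically; the parameter values below are invented for illustration, not taken from the study.

```python
# Permanent-magnet DC motor model, forward-Euler integration.
# Electrical: L di/dt = V - R i - Ke w;  Mechanical: J dw/dt = Kt i - b w
R, L = 1.0, 0.5        # armature resistance (ohm) and inductance (H)
Kt = Ke = 0.01         # torque constant and back-EMF constant
J, b = 0.01, 0.1       # rotor inertia (kg m^2) and viscous friction
V = 12.0               # applied armature voltage (V)

i = omega = 0.0        # armature current (A), shaft speed (rad/s)
dt = 1e-4
for _ in range(int(2.0 / dt)):          # simulate 2 seconds
    di = (V - R * i - Ke * omega) / L
    domega = (Kt * i - b * omega) / J
    i += di * dt
    omega += domega * dt
print(i, omega)                          # near steady state after 2 s
```

With these values the back-EMF is tiny, so the current settles near V/R and the speed near (Kt/b) times the current, which is a useful sanity check on any graphical-simulation implementation.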

Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation

Procedia PDF Downloads 553
1665 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome

Authors: Mustafa Metin Donma

Abstract:

Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices, and the association of tense indices with WC has also become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute more to the differential diagnosis of MetS than previously introduced indices. One hundred and ninety-four children, aged 6-11 years, were assigned to four groups. The study was performed on normal weight (Group 1), overweight + obese (Group 2), and morbid obese children without (Group 3) and with (Group 4) MetS findings. Children were included in the groups according to the recommendations of the World Health Organization based on age- and gender-dependent body mass index percentiles. For the MetS group, previously well-established MetS components were considered. Anthropometric measurements as well as blood pressure values were taken, and tense indices were computed. The formula for the first tense index was (SP + DP)/2, where SP and DP are the systolic and diastolic pressures. The second index was the Advanced Donma Tense Index (ADTI), with the formula [(SP + DP)/2] * Height. Statistical calculations were performed, with p < 0.05 accepted as statistically significant. There were no statistically significant differences between the groups for pulse pressure, systolic-to-diastolic pressure ratio or the first tense index. Increasing values were observed from Group 1 to Group 4 for mean arterial blood pressure and ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3.
However, in Group 4, ADTI, which includes the height parameter in the equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI is suggested as a tense index when investigating children with MetS.
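The two indices follow directly from the formulas given above; the child's values below are hypothetical, chosen only to show the arithmetic.

```python
def tense_index(sp, dp):
    """First tense index: mean of systolic and diastolic pressure, (SP + DP) / 2."""
    return (sp + dp) / 2

def advanced_donma_tense_index(sp, dp, height_m):
    """ADTI as defined in the abstract: [(SP + DP) / 2] * height."""
    return tense_index(sp, dp) * height_m

# Hypothetical child: SP 110 mmHg, DP 70 mmHg, height 1.35 m
print(tense_index(110, 70))                       # 90.0
print(advanced_donma_tense_index(110, 70, 1.35))  # 121.5
```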

Keywords: blood pressure, child, height, metabolic syndrome, waist circumference

Procedia PDF Downloads 59
1664 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations

Authors: Meziane Belkacem

Abstract:

We aim to convert the original 3D Lorenz-Haken equations, which describe laser dynamics in terms of self-pulsing and chaos, into two second-order differential equations, out of which we extract the so far missing mathematics and corroborations with respect to nonlinear interactions. Leaning on basic trigonometry, we pull out important outcomes; a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control parameter space that governs the bewildering dynamics.
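For orientation, the Lorenz(-Haken) system in its standard three-variable form can be integrated numerically; the parameter values below are the classical chaotic-regime values, not the paper's laser parameters, and the integrator is a simple Euler sketch.

```python
# Lorenz system: dx/dt = sigma(y - x), dy/dt = (r - z)x - y, dz/dt = xy - bz
def lorenz_step(x, y, z, sigma=10.0, r=28.0, b=8.0 / 3.0, dt=1e-3):
    dx = sigma * (y - x)
    dy = (r - z) * x - y
    dz = x * y - b * z
    return x + dx * dt, y + dy * dt, z + dz * dt

x, y, z = 1.0, 1.0, 1.0
for _ in range(50000):        # integrate 50 time units
    x, y, z = lorenz_step(x, y, z)
print(x, y, z)                # stays on the bounded chaotic attractor
```

The trajectory never settles into a periodic orbit in this parameter region, which is the numerical face of the forbidden-periodic-solutions result the abstract describes.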

Keywords: physics, optics, nonlinear dynamics, chaos

Procedia PDF Downloads 158
1663 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems for underground structures in rock, including tunnels. This method requires six main parameters of the rock mass: the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of such structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the system as a whole. This research attempts to determine the most effective (key) parameters among the six Q-system rock mass parameters using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. RES is, in fact, a method by which the degree of cause and effect of a system's parameters can be determined by constructing an interaction matrix. In this research, geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix was coded using a technique that is essentially a statistical analysis of the data, determining the correlation coefficients between them.
The effect of each parameter on the system is thus evaluated with greater certainty. The results of this study show that the interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters, SRF and Jr exert the maximum and minimum influence on the system (cause), respectively, while RQD and Jw are the most and least influenced by the system (effect), respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
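The Q value itself combines the six parameters as three ratios, Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF); a minimal sketch with hypothetical ratings (not the Azad Dam data):

```python
def q_value(rqd, jn, jr, ja, jw, srf):
    """Barton's Q-system rating: block size (RQD/Jn) times inter-block
    shear strength (Jr/Ja) times active stress (Jw/SRF)."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Hypothetical rock mass: RQD 75%, three joint sets (Jn = 9), rough planar
# joints (Jr = 1.5), slightly altered walls (Ja = 2), minor inflow
# (Jw = 0.66), medium stress (SRF = 1)
print(q_value(75, 9, 1.5, 2, 0.66, 1.0))   # 4.125, i.e. "fair" rock
```

The weighting scheme the abstract proposes would scale the influence of each ratio according to its RES cause-effect rating rather than treating all six parameters as equally important.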

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 73
1662 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for preventing risks related to climate conditions in the agricultural sector is presented. It determines the optimum yearly premium a producer should pay in order to reach his required turnover. The model is based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, within a safety ball that can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of production on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit to the expert table drawn up with professionals, an optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly turnover. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the state-variable parameters and the pricing of the commodity. Application to beet sugar underlines the pricer's precision: the agreement between the computed result and real-world data is 99.5%. The optimal premium is then deduced, giving the producer a useful bound when negotiating an offer from insurance companies to effectively protect his harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as little as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
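The linear-regression step described above can be sketched as an ordinary least-squares fit of yearly production on the three meteorological drivers; all records below are invented for illustration and do not come from the Oise study.

```python
import numpy as np

# Hypothetical yearly records: [sunlight (h/day), rainfall (mm), temperature (C)]
X = np.array([[7.1, 620.0, 11.2],
              [6.5, 580.0, 10.8],
              [7.8, 700.0, 11.9],
              [6.9, 640.0, 11.0],
              [7.4, 660.0, 11.5]])
y = np.array([82.0, 75.0, 90.0, 80.0, 85.0])    # invented yields (t/ha)

A = np.column_stack([np.ones(len(X)), X])       # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares coefficients
pred = A @ coef                                 # fitted yearly production
print(coef)
print(pred)
```

In the full model these fitted coefficients feed the premium calculation: predicted production times the N+1 commodity price gives the expected turnover, and the shortfall against the required turnover prices the insurance.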

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 377
1661 Foamability and Foam Stability of Gelatine-Sodium Dodecyl Sulfate Solutions

Authors: Virginia Martin Torrejon, Song Hang

Abstract:

Gelatine foams are widely explored materials due to their biodegradability, biocompatibility, and availability. They exhibit outstanding properties and are currently subject to increasing scientific research due to their potential use in different applications, such as biocompatible cellular materials for biomedical products or biofoams as an alternative to fossil-fuel-derived packaging. Gelatine is a highly surface-active polymer, and its concentrated solutions usually do not require surfactants to achieve low surface tension. Still, anionic surfactants like sodium dodecyl sulfate (SDS) interact strongly with gelatine, impacting its viscosity and rheological properties and, in turn, its foaming behaviour. Foaming behaviour is a key parameter for cellular solids produced by mechanical foaming, as it has a significant effect on the processing and properties of cellular materials. Foamability mainly affects the density and mechanical properties of the foams, while foam stability is crucial to achieving foams with low shrinkage and desirable pore morphology. This work aimed to investigate the influence of SDS on the foaming behaviour of concentrated gelatine foams using a dynamic foam analyser. The maximum foam height created, foam formation behaviour, drainage behaviour, and foam structure with regard to bubble size and distribution were studied in 10 wt% gelatine solutions prepared at different SDS/gelatine concentration ratios. Comparative rheological and viscometry measurements correlated well with the data from the dynamic foam analyser. SDS incorporation at optimum dosages, together with gelatine gelation, led to highly stable foams at high expansion ratios. The increase in solution viscosity as SDS content increased was a key factor in foam stabilization. In addition, the impact of SDS content on gelling time and gel strength also considerably affected the foams' stability and pore structure.

Keywords: dynamic foam analyser, gelatine foams stability and foamability, gelatine-surfactant foams, gelatine-SDS rheology, gelatine-SDS viscosity

Procedia PDF Downloads 154
1660 Health Hazards Among Health Care Workers and Associated Factors in Public Hospitals, Sana'a-Yemen

Authors: Makkia Ahmad Ali Al-Falahi, Abdullah Abdelaziz Muharram

Abstract:

Background: Healthcare workers (HCWs) in Yemen are exposed to a myriad of occupational health hazards, including biological, physical, ergonomic, chemical and psychosocial hazards, and operate in an environment considered to be among the most hazardous occupational settings. Objective: To assess the prevalence of occupational health hazards among healthcare workers and the associated risk factors in public hospitals in Sana'a City, Yemen. Method: A descriptive cross-sectional design was utilized; out of a total of 5443 HCWs, 396 were selected by a multistage sampling technique carried out in the public hospitals of Sana'a City, Yemen. Results: More than half (60.6%) of HCWs were aged between 20-30 years, 50.8% were males, 56.3% were married, and 45.5% held a diploma qualification, while 65.2% of HCWs had less than 6 years of experience. The overall prevalence of occupational hazards was 99%: ergonomic hazards (93.4%), biological hazards (87.6%), psychosocial hazards (86.65%), physical hazards (83.3%), and chemical hazards (73.5%). There were no statistically significant differences between demographic characteristics and the prevalence of occupational hazards (p > 0.05). Conclusion and recommendations: The study showed a very high prevalence of occupational hazards. The most prevalent biological hazard was exposure to sharps-related injury; the most prevalent physical hazard was slips, trips and falls; the main ergonomic hazard was back or neck pain during work; the main chemical hazard was allergy to medical glove powder; and the main psychosocial hazard was verbal and physical harassment. The study concludes by recommending that awareness be raised among HCWs through training courses to prevent occupational hazards.

Keywords: health workers, occupational hazards, risk factors, prevalence

Procedia PDF Downloads 84
1659 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of the mappings, which limits the gains in rigor and accuracy that quantitative mapping could bring. Therefore, in this paper, an effort has been made to provide a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. The research then transforms the radiant mapping from a polygon into a data point matrix, because polygons are difficult to engage directly in design formation. Next, another script selects the main green spaces from the road network based on a radiant-intensity criterion and connects the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to optimize the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis.
The parametric link between mapping and planning will make planning more accurate, objective, and scientific.
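The core move from a radiant map to a data point matrix, and the selection of candidate green-space cells from it, can be sketched in plain Python; the grid values, the threshold, and the below-threshold direction of the criterion are all assumptions made for illustration, not the paper's Grasshopper logic.

```python
# A radiant-intensity map rasterized into a point matrix (invented values,
# normalized 0..1), with candidate green-space cells picked by threshold.
grid = [
    [0.9, 0.8, 0.3, 0.2],
    [0.7, 0.4, 0.2, 0.1],
    [0.6, 0.5, 0.8, 0.9],
]
threshold = 0.35   # hypothetical control parameter a designer would tune

green_cells = [(r, c)
               for r, row in enumerate(grid)
               for c, value in enumerate(row)
               if value < threshold]
print(green_cells)
```

Because the selection is driven by a single numeric parameter, re-running it with a different threshold regenerates the candidate set automatically, which is exactly the mapping-to-plan feedback loop the abstract argues for.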

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 142
1658 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
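A minimal sketch of beta-binomial posterior predictive control limits follows, using the central-interval approach (one of the two limit definitions compared above) and invented counts; this is not the authors' implementation, and the HPD variant would replace the quantile search with a shortest-interval search.

```python
import math

def betabinom_pmf(k, m, a, b):
    """Beta-binomial pmf C(m,k) * B(k+a, m-k+b) / B(a,b), via log-gamma."""
    return math.exp(
        math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
        + math.lgamma(k + a) + math.lgamma(m - k + b) - math.lgamma(m + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    )

def central_limits(m, a, b, alpha=0.05):
    """Central-interval control limits on the predicted event count."""
    cdf, lcl, ucl = 0.0, None, None
    for k in range(m + 1):
        cdf += betabinom_pmf(k, m, a, b)
        if lcl is None and cdf >= alpha / 2:
            lcl = k
        if ucl is None and cdf >= 1 - alpha / 2:
            ucl = k
    return lcl, ucl

# Beta(1, 1) prior updated with 12 events in 400 past cases (invented);
# chart limits for the next reporting period of 200 cases
a_post, b_post = 1.0 + 12, 1.0 + 400 - 12
print(central_limits(200, a_post, b_post))
```

An observed count outside these limits in the next period would signal a possible shift in the underlying CI rate, which is the chart's monitoring role.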

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 203
1657 Fast-Forward Problem in Asymmetric Double-Well Potential

Authors: Iwan Setiawan, Bobby Eka Gunara, Katshuhiro Nakamura

Abstract:

A theory for accelerating quantum dynamics has been constructed to obtain the desired wave function in a shorter time. The theory builds on adiabatic quantum dynamics, in which regulation is applied to a wave function that satisfies the Schrödinger equation. We show accelerated manipulation of wave functions using a parameter-dependent asymmetric double-well potential, including the case where the system is influenced by electromagnetic fields.

Keywords: driving potential, adiabatic quantum dynamics, regulation, electromagnetic field

Procedia PDF Downloads 342
1656 Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1

Authors: Kaoutar Lamrini Uahabi, Mohamed Atounti

Abstract:

In the present work, we consider one category of curves denoted by L(p, k, r, n). These curves are continuous arcs which are trajectories of roots of the trinomial equation zⁿ = αzᵏ + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1, and α is a real parameter greater than 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as the fractal dimension, we will prove that the dimension of L is equal to 3/2.
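The box counting dimension used here can be estimated numerically for any planar point set by counting occupied boxes at several scales and fitting the log-log slope; the sketch below checks the estimator on a straight segment, whose dimension is 1, rather than on the trinomial curves themselves.

```python
import math

def box_counting_dimension(points, scales=(8, 16, 32, 64, 128, 256)):
    """Estimate the box-counting dimension of a planar point set by a
    least-squares fit of log(box count) against log(1/epsilon)."""
    log_s, log_n = [], []
    for s in scales:
        eps = 1.0 / s
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        log_s.append(math.log(s))
        log_n.append(math.log(len(boxes)))
    k = len(scales)
    mx, my = sum(log_s) / k, sum(log_n) / k
    slope = (sum((a - mx) * (b - my) for a, b in zip(log_s, log_n))
             / sum((a - mx) ** 2 for a in log_s))
    return slope

# Sanity check: a dense straight segment in [0, 1]^2 has dimension 1
segment = [(t / 9999.0, t / 9999.0) for t in range(10000)]
print(box_counting_dimension(segment))   # close to 1
```

For a sampled fractal set the same estimator would approach the set's true dimension as the sampling density and scale range grow.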

Keywords: feasible angles, fractal dimension, Minkowski sausage, trinomial curves, trinomial equation

Procedia PDF Downloads 190
1655 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problem is posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution set and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers, which we obtain from a least squares unconstrained minimization problem.
Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
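To illustrate why ill-posedness demands regularization at all, a minimal Tikhonov example on an invented two-variable problem is shown below (this is the baseline method the abstract contrasts with, not the paper's PMBF solver or imaging data): a nearly singular operator makes the naive solution swing wildly under a tiny data perturbation, while the regularized solution barely moves.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])     # nearly singular forward operator
b1 = np.array([2.0, 2.0001])      # "measured" data
b2 = np.array([2.0, 2.0])         # same data, perturbed by 1e-4

def tikhonov(A, b, lam):
    """Closed-form minimizer of ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

x1_naive, x2_naive = np.linalg.solve(A, b1), np.linalg.solve(A, b2)
x1_tikh, x2_tikh = tikhonov(A, b1, 1e-3), tikhonov(A, b2, 1e-3)
print(np.linalg.norm(x1_naive - x2_naive))  # large: unregularized, unstable
print(np.linalg.norm(x1_tikh - x2_tikh))    # small: regularization stabilizes
```

The PMBF approach reaches a similar stabilization by constraining the solution set with physical bounds instead of penalizing the solution norm, which avoids having to pick a Tikhonov parameter against an unknown true image.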

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 271