Search results for: inductive parameter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2250

1740 Glaucoma Detection in Retinal Tomography Using the Vision Transformer

Authors: Sushish Baral, Pratibha Joshi, Yaman Maharjan

Abstract:

Glaucoma is a chronic eye condition that causes irreversible vision loss. Because the disease can be asymptomatic, early detection and treatment are critical to prevent vision loss. Multiple deep learning algorithms have been used for the identification of glaucoma. Transformer-based architectures, which use the self-attention mechanism to encode long-range dependencies and acquire highly expressive representations, have recently become popular. Convolutional architectures, on the other hand, lack knowledge of long-range dependencies in the image due to their intrinsic inductive biases. These observations motivate this work to look at transformer-based solutions and investigate the viability of adopting transformer-based network designs for glaucoma detection. Developing a viable algorithm to assess the severity of glaucoma from retinal fundus images of the optic nerve head necessitates a large number of well-curated images. Initially, data is generated by augmenting the ocular images. The ocular images are then pre-processed to make them ready for further processing. The system is trained using the pre-processed images, and it classifies the input images as normal or glaucoma based on the features retrieved during training. The Vision Transformer (ViT) architecture is well suited to this situation, as its self-attention mechanism supports structural modelling. Extensive experiments are run on a common dataset, and the results are thoroughly validated and visualized.
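
As a rough illustration (not the authors' pipeline), the sketch below fine-tunes a pre-trained Vision Transformer for binary normal/glaucoma classification of fundus images; the backbone name, library choice and hyperparameters are assumptions:

```python
# Hedged sketch: fine-tuning a pre-trained Vision Transformer for binary
# (normal vs. glaucoma) fundus-image classification. The backbone name and
# hyperparameters are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn
import timm
from torchvision import transforms

# Pre-processing roughly mirroring the described pipeline (resize + normalise);
# augmentations (flips, rotations) would be added for the training set.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ViT backbone with a 2-class head (normal / glaucoma).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimisation step on a batch of pre-processed fundus images."""
    optimizer.zero_grad()
    logits = model(images)          # self-attention encodes long-range context
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```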

Keywords: glaucoma, vision transformer, convolutional architectures, retinal fundus images, self-attention, deep learning

Procedia PDF Downloads 181
1739 Ubiquitous Learning Environments in Higher Education: A Scoping Literature Review

Authors: Mari A. Virtanen, Elina Haavisto, Eeva Liikanen, Maria Kääriäinen

Abstract:

Ubiquitous learning and the use of ubiquitous learning environments herald a new era in higher education. Ubiquitous environments fuse together authentic learning situations and digital learning spaces where students can seamlessly immerse themselves in the learning process. Definitions of ubiquitous learning are broad and vary in the previous literature, and learning environments are not systematically described. The aim of this scoping review was to identify the criteria and the use of ubiquitous learning environments in higher education contexts. The objective was to provide a clear scope and a wide view for this research area. The original studies were collected from nine electronic databases. Seven publications in total were deemed eligible and included in the final review. An inductive content analysis was used for the data analysis. The reviewed publications described the use of ubiquitous learning environments (ULE) in higher education. Components, contents and outcomes varied between studies, but there were also many similarities. In these studies, the concept of ubiquitousness was defined in terms of context-awareness, embeddedness, content personalization, location-based services, interactivity and flexibility, and these were supported by using smart devices, wireless networks and sensing technologies. Contents varied between studies and were customized to specific uses. Measured outcomes in these studies focused on multiple aspects, such as learning effectiveness, cost-effectiveness, satisfaction, and usefulness. This study provides a clear scope for ULEs used in higher education. It also raises the need for transparent development and publication processes, and for practical implications of ubiquitous learning environments.

Keywords: higher education, learning environment, scoping review, ubiquitous learning, u-learning

Procedia PDF Downloads 248
1738 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been proved in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software based on a finite element discretization). As van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is the subject of considerable experimental and numerical studies. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α, n and the saturated conductivity of the filter on the piezometric heads, during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) to be able to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of the storm events, we thus propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the available methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, it is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data are collected for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban water stream Ostwaldergraben, during several months. Identification experiments are conducted by comparing measured and computed piezometric heads by means of the least square objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.
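
The van Genuchten-Mualem parametrization referred to above has a standard closed form; a minimal sketch follows, with illustrative parameter values rather than those calibrated for the Ostwaldergraben site:

```python
# Minimal sketch of the van Genuchten-Mualem parametrization used in the
# Richards model. Parameter values below are illustrative only, not the
# calibrated values for the studied constructed wetland.
import numpy as np

def effective_saturation(h, alpha, n):
    """Se(h) = [1 + (alpha*|h|)^n]^(-m) for h < 0 (unsaturated), 1 otherwise."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    return np.where(h < 0.0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)

def water_content(h, theta_r, theta_s, alpha, n):
    """theta(h) = theta_r + (theta_s - theta_r) * Se(h)."""
    return theta_r + (theta_s - theta_r) * effective_saturation(h, alpha, n)

def hydraulic_conductivity(h, Ks, alpha, n):
    """Mualem model: K(h) = Ks * Se^(1/2) * [1 - (1 - Se^(1/m))^m]^2."""
    m = 1.0 - 1.0 / n
    se = effective_saturation(h, alpha, n)
    return Ks * np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Example: retention and conductivity for a sandy filter layer (assumed values).
heads = np.array([-0.01, -0.1, -1.0, -10.0])        # pressure head [m]
print(water_content(heads, theta_r=0.05, theta_s=0.40, alpha=3.5, n=2.0))
print(hydraulic_conductivity(heads, Ks=1e-4, alpha=3.5, n=2.0))
```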

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 145
1737 On Bianchi Type Cosmological Models in Lyra’s Geometry

Authors: R. K. Dubey

Abstract:

Bianchi type cosmological models have been studied on the basis of Lyra’s geometry. An exact solution has been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.

Keywords: Bianchi type-I cosmological model, variable gravitational coupling, cosmological constant term, Lyra's model

Procedia PDF Downloads 340
1736 The Concerns and Recommendations of Informal and Professional Caregivers for COVID-19 Policy for Homecare and Long-Term Care For People with Dementia: A Qualitative Study

Authors: Hanneke J. A. Smaling, Mandy Visser

Abstract:

One way to reduce the risk of COVID-19 infection is by preventing close interpersonal contact with distancing measures. These social distancing measures presented challenges to the health and wellbeing of people with dementia and their informal and professional caregivers. This study describes the concerns and recommendations of informal and professional caregivers for COVID-19 policy for home care and long-term care for people with dementia during the first and second COVID-19 wave in the Netherlands. In this qualitative interview study, 20 informal caregivers and 20 professional caregivers from home care services and long-term care participated. Interviews were analyzed using an inductive thematic analysis approach. Both informal and professional caregivers worried about getting infected or infecting others with COVID-19, the consequences of the distancing measures, and quality of care. There was a general agreement that policy in the second wave was better informed compared to the first wave. At an organizational level, the policy was remarkably flexible. Recommendations were given for dementia care (need to offer meaningful activities, improve the organization of care, more support for informal caregivers), policy (national vs. locally organization, social isolation measures, visitor policy), and communication. Our study contributes to the foundation of future care decisions by (inter)national policymakers, politicians, and healthcare organizations during the course of the COVID-19 pandemic, underlining the need for balance between safety and autonomy for people with dementia.

Keywords: covid-19, dementia, home care, long-term care, policy

Procedia PDF Downloads 123
1735 Lactate in Critically Ill Patients: An Outcome Marker with Time

Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah

Abstract:

Introduction: Static derangements in lactate homeostasis during the ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, or kinetic alterations of the anaerobic metabolism, make lactate a potential parameter to evaluate disease severity and the adequacy of interventions. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: Comparing the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with the more commonly used static values. Patients and Methods: This study included 40 critically ill patients above 18 years old of both sexes with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which included all causes of low oxygen transport. Six lactate indices specifically relating to the first 24 hours of ICU admission were considered: three static indices and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most of the laboratory results including ABG, and the need for mechanical ventilation. Admission lactate was significantly higher in the low-oxygen transport group than in the septic group (37.5 ± 11.4 versus 30.6 ± 7.8, p = 0.034). Maximum lactate was also significantly higher in the low-oxygen transport group than in the septic group (p = 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (p < 0.001). The percentage change of lactate was higher in the septic group (47.8 ± 11.3) than in the low-oxygen transport group (26.1 ± 12.6), with a highly significant p-value (< 0.001). Lastly, time-weighted lactate was higher in the low-oxygen transport group (1.72 ± 0.81) than in the septic group (1.05 ± 0.8), with a significant p-value (0.012). There were statistically significant differences regarding lactate indices between survivors and non-survivors, whether in the septic or the low-oxygen transport group. Conclusion: In critically ill patients, time-weighted lactate and the percentage change in lactate over the first 24 hours can be independent predictive factors of ICU mortality. Also, a rising compared to a falling blood lactate concentration over the first 24 hours is associated with a significant increase in the risk of mortality.
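
The dynamic indices mentioned (percentage change and time-weighted lactate over the first 24 hours) can be computed from serial measurements as in the sketch below; the exact definitions used by the authors are not stated in the abstract, so these formulas are assumptions:

```python
# Hedged sketch of dynamic lactate indices over the first 24 h of ICU admission.
# The exact definitions used in the study are not stated in the abstract; the
# formulas below are common conventions and should be treated as assumptions.
import numpy as np

def lactate_indices(times_h, lactate):
    """times_h: sampling times in hours; lactate: serial lactate values (mmol/L)."""
    times_h = np.asarray(times_h, dtype=float)
    lactate = np.asarray(lactate, dtype=float)
    admission = lactate[0]
    maximum = lactate.max()
    # Percentage change relative to the admission value (positive = clearance).
    percent_change = 100.0 * (admission - lactate[-1]) / admission
    # Time-weighted lactate: trapezoidal area under the lactate-time curve
    # divided by the observation period.
    dt = np.diff(times_h)
    auc = np.sum(dt * (lactate[:-1] + lactate[1:]) / 2.0)
    time_weighted = auc / (times_h[-1] - times_h[0])
    return {"admission": admission, "maximum": maximum,
            "percent_change": percent_change, "time_weighted": time_weighted}

# Example with illustrative serial values (not study data).
print(lactate_indices([0, 6, 12, 18, 24], [4.2, 3.8, 3.1, 2.6, 2.2]))
```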

Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism

Procedia PDF Downloads 232
1734 Search for EEG Correlates of Mental States Using EEG Neurofeedback Paradigm

Authors: Cyril Kaplan

Abstract:

Twenty-six participants played four EEG neurofeedback (NF) games and were encouraged to find their own strategies to control the specific NF parameter. Mixed-method analysis of performance in the games and post-session interviews led to the identification of states of consciousness that correlated with success in the games. We found that an increase in left frontal beta activity was facilitated by evoking interest in the observed surroundings, for example by wondering what is happening behind the window or what lies in a drawer in front of them.

Keywords: EEG neurofeedback, states of consciousness, frontal beta activity, mixed methods

Procedia PDF Downloads 128
1733 Retrospective Interview with Amateur Soccer Officials Using Eye Tracker Footage

Authors: Lee Waters, Itay Basevitch, Matthew Timmis

Abstract:

Objectives: Eye tracking technology is a valuable method of assessing individuals' gaze behaviour, but it does not unveil why they engage in certain practices. To address limitations in sport eye tracking research, the present paper aims to investigate the gaze behaviours soccer officials engage in during successful and unsuccessful offside decisions, and also why. Methods: 20 male active amateur qualified (Level 4-7) soccer officials (Mage = 22.5, SD = 4.61 years) with an average experience of 41-50 games wore eye tracking technology during an applied attack-versus-defence drill. While reviewing the eye tracking footage, retrospective semi-structured interviews were conducted (M = 20.4 min; SD = 6.2; range 11.7 – 26.8 min) and, once transcribed, inductive thematic analysis was performed. Findings and Discussion: To improve the understanding of gaze behaviours and how officials make sense of the environment, the key constructs of offside, decision making, obstacles and emotions that emerged during the interviews were summarised as the higher-order themes relating to making offside decisions. Gaze anchoring was highlighted as a successful technique that allows officials to see all relevant information, whereas the type of offside was emphasised as a key factor in correct interpretation. Furthermore, specific decision-making training was reported to be inconsistent and not always applicable. Conclusions: Key constructs have been identified and explained, which can be shared with soccer officials through training regimes. Eye tracking technology has also been shown to be a useful and innovative reflective tool to assist in the understanding of individuals' gaze behaviours.

Keywords: eye tracking, gaze behaviour, decision making, reflection

Procedia PDF Downloads 118
1732 Investigating Al₂O₃ Nanofluid Based on Seawater and Effluent Water Mix for Water Injection Application: Sandstone

Authors: Meshal Al-Samhan, Abdullah Al-Marshed

Abstract:

Recently, there has been a tremendous increase in interest in nanotechnology applications and nanomaterials in the oilfield. In the last decade, the global increase in oil production has resulted in large amounts of produced water, causing a significant problem for all producing countries and companies. This produced water deserves special attention and a study of its characteristics to understand and determine how it can be treated and later used for suitable applications, such as water injection for Enhanced Oil Recovery (EOR), without harming the environment. This work aims to investigate the response of a prepared compatible mixed water (seawater and effluent water) to nanoparticles for EOR water injection. Different seawater/effluent water mix ratios (60/40, 70/30) were evaluated for their characteristics prior to nanofluid preparation using Inductively Coupled Plasma (ICP) analysis, zeta potential testing, and OLI software (from OLI Systems, a recognised leader in aqueous chemistry). This step of the work revealed the suitability of the water mix with the lower effluent-water ratio. OLI also predicted that the 60:40 mix needs to be balanced around temperatures of 70 ºC to avoid the mass accumulation of calcium sulfate and strontium sulfate. Later, the prepared nanofluid was tested for interfacial tension (IFT) and wettability restoration in the sandstone rock; the Al2O3 nanofluid at 0.06 wt% concentration reduced the IFT by more than 16% with a moderately water-wet contact angle. The study concluded that the selected Al2O3 nanoparticle demonstrated excellent performance in decreasing the interfacial tension with respect to the selected water mix type (60/40) at low nanoparticle weight fractions.

Keywords: nano Al₂O₃, sandstone, nanofluid, IFT, wettability

Procedia PDF Downloads 84
1731 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement

Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji

Abstract:

The study examined the marginal productivity of small scale yam and cassava farmers in Kogi State, Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the State. Descriptive statistics, data envelopment analysis and the Cobb-Douglas production function were used to analyze the data. The DEA result on the overall technical efficiency of the farmers showed that 40% of the sampled yam and cassava farmers in the study area were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting the output of yam and cassava farmers showed that labour, planting materials, fertilizer and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis further showed that relatively efficient farms were more marginally productive in resource utilization. This study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to gain access to productive inputs that will enable them to expand. Also, since using a single-equation production function model produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before production function estimation is done.
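
A Cobb-Douglas production function of the kind estimated here is linear in logarithms and can be fitted by ordinary least squares; the sketch below uses synthetic illustrative data and the input names listed in the abstract, not the survey data:

```python
# Hedged sketch: estimating a Cobb-Douglas production function by OLS on
# log-transformed data. Variable names follow the inputs listed in the
# abstract; the data are synthetic placeholders, not the Kogi State survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_cobb_douglas(df):
    """ln(output) = ln(A) + b1*ln(labour) + b2*ln(planting) + b3*ln(fert) + b4*ln(capital)."""
    X = sm.add_constant(np.log(df[["labour", "planting_material", "fertilizer", "capital"]]))
    y = np.log(df["output"])
    model = sm.OLS(y, X).fit()
    # Sum of the input elasticities indicates returns to scale
    # (> 1 increasing, = 1 constant, < 1 decreasing).
    return model, model.params.drop("const").sum()

# Tiny synthetic example (illustrative numbers only).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "labour": rng.uniform(50, 200, 30),
    "planting_material": rng.uniform(10, 60, 30),
    "fertilizer": rng.uniform(5, 40, 30),
    "capital": rng.uniform(100, 500, 30),
})
df["output"] = (2.0 * df["labour"]**0.4 * df["planting_material"]**0.3
                * df["fertilizer"]**0.2 * df["capital"]**0.25)
model, rts = fit_cobb_douglas(df)
print("returns to scale:", round(rts, 3))   # ≈ 1.15, i.e. increasing returns
```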

Keywords: marginal productivity, DEA, production function, Kogi state

Procedia PDF Downloads 467
1730 Lithuanian Sign Language Literature: Metaphors at the Phonological Level

Authors: Anželika Teresė

Abstract:

In order to solve issues in sign language linguistics, address matters pertaining to maintaining the high quality of sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of deaf community heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created by using the phonological parameters: handshape, location, movement, palm orientation and non-manual features. The study covered in this presentation is twofold, involving both the micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and the macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers the qualitative analysis of 34 pieces of LSL literature. The analysis employs ELAN software, widely used in SL research. The target is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results of the study show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context has revealed that this metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the significance of various phenomena for the lyrical subject. Notably, the study has allowed the detection of locations, non-manual features and other elements never mentioned in previous SL research as being used for the creation of metaphors.

Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics

Procedia PDF Downloads 127
1729 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since the surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as a test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling, and a HAAS ST-20 CNC machine is used for turning in this research. Taguchi analysis is used to optimize the surface roughness of the machined parts. The L9 orthogonal array is designed for four controllable factors with three levels each, giving nine runs per process and 18 experimental runs in total. The signal-to-noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow and finish cut, and for the milling process feed rate, spindle speed, step over and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the different uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. The surface roughnesses were improved from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to efficiently utilize the Taguchi design analysis to improve the surface roughness.
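
For a target-the-best characteristic such as the 75 ± 15 µin roughness specification, the nominal-the-best signal-to-noise ratio and the Cp/Cpk indices are commonly computed as in the following generic sketch (illustrative values, not the paper's worksheet):

```python
# Hedged sketch of the nominal-the-best signal-to-noise ratio often used in
# Taguchi analysis when a specific target (here 75 +/- 15 uin) is sought, plus
# Cp/Cpk. This is a generic illustration, not the paper's exact computation.
import numpy as np

def sn_nominal_the_best(measurements):
    """S/N = 10*log10(ybar^2 / s^2) for replicated roughness measurements."""
    y = np.asarray(measurements, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

def process_capability(measurements, lsl=60.0, usl=90.0):
    """Cp and Cpk for the 75 +/- 15 uin specification limits."""
    y = np.asarray(measurements, dtype=float)
    mu, sigma = y.mean(), y.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

# Example: three replicated roughness readings from one experimental run.
run = [72.0, 78.5, 74.0]
print(sn_nominal_the_best(run), process_capability(run))
```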

Keywords: surface roughness, Taguchi parameter design, CNC turning, CNC milling

Procedia PDF Downloads 143
1728 Spatial Analysis of the Impact of City Development on the Degradation of Green Space in the Eastern Urban Fringe of Yogyakarta City, 2005-2010

Authors: Pebri Nurhayati, Rozanah Ahlam Fadiyah

Abstract:

In the development of cities, rural areas are often used in ways inseparable from changes in land use, which lead to the degradation of urban green space in the city fringe. In the long run, this degradation of green open space can lead to a decline in ecological quality, psychological wellbeing and public health. Therefore, this research aims to (1) determine the relationship between urban development parameters and the degradation rate of green open space, and (2) develop a spatial model of the impact of urban development on the degradation of green open space using remote sensing techniques and Geographical Information Systems in an integrated manner. This is descriptive research with data collection based on observation and secondary data. In the data analysis, ASTER imagery for 2005-2010 and NDVI are required to interpret the direction of urban development and the degradation of green open space. This interpretation generates two maps, namely a built-up land development map and a green open space degradation map. Secondary data related to the population growth rate, the level of accessibility, and the main activities of the city are processed into a population growth rate map, an accessibility level map, and a main activities map. Each map is used as a parameter of green space degradation and analyzed by non-parametric statistics using crosstabulation, thus obtaining the value of C (contingency coefficient). C values were then compared with C_maximum to determine the relationship. From this research, a spatial model map of the impact of city development on the degradation of green space in the eastern urban fringe of Yogyakarta City for 2005-2010 will be obtained. In addition, this research also generates statistical analyses of the test results of each parameter with respect to the degradation of green open space in the eastern urban fringe of Yogyakarta City for 2005-2010.
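
The two quantitative steps described, NDVI from satellite bands and the contingency coefficient C from a crosstabulation, have simple closed forms; a sketch with placeholder arrays (not the study's rasters) follows:

```python
# Hedged sketch of the two quantitative steps described: NDVI from ASTER-style
# bands and the contingency coefficient C from a crosstabulation. Arrays are
# illustrative placeholders, not the study's actual rasters or tables.
import numpy as np
from scipy.stats import chi2_contingency

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)

def contingency_coefficient(crosstab):
    """C = sqrt(chi2 / (chi2 + n)); compared against C_max = sqrt((k-1)/k)."""
    chi2, _, _, _ = chi2_contingency(crosstab)
    n = crosstab.sum()
    k = min(crosstab.shape)
    return np.sqrt(chi2 / (chi2 + n)), np.sqrt((k - 1) / k)

# Example crosstab: development-parameter class (rows) vs. degradation class (cols).
table = np.array([[30, 12, 5],
                  [10, 25, 14],
                  [4, 11, 28]])
print(contingency_coefficient(table))
```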

Keywords: spatial analysis, urban development, degradation of green space, urban fringe

Procedia PDF Downloads 300
1727 Loving and Letting Go: Bounded Attachment in Creative Work

Authors: Greg Fetzer

Abstract:

One of the fundamental tensions of creative work is between the need to be passionate and persistent in advancing novel and risky ideas and the need to be flexible, revising, or even abandoning ideas in favor of others. The tension becomes fraught in part because of the attachment that creators have toward their ideas. Idea attachment is defined here as a multifaceted concept referring to affection, passion, and connection toward a target, in this case one's projects or ideas. Yet feeling attached can make creators resistant to feedback, making them less flexible and leading them to escalate commitment. Despite a growing understanding of how attachment develops and evolves in response to project changes, feedback, and creative jolts, we still know relatively little about the organizational dynamics that may shape idea attachment. Through a qualitative, inductive study of early-stage R&D scientists in the pharmaceutical industry, this research finds that scientists develop bounded attachment, a mindset that limits emotional attachment to ideas while still fostering engagement in idea development. This research develops a process model of how bounded attachment is developed and enacted across three stages of the creative process (idea generation, idea evaluation, and outcome assessment), as well as the role that organizational practices and professional identity play in shaping this process: these collective practices provided structures to ensure ideas were evaluated in a rational (i.e., non-emotional) way while also providing socioemotional support in the face of setbacks. Together, this process led to continued creative engagement across ideas in a portfolio and helped scientists construct a sense of meaningful work despite a high likelihood (and frequency) of failure.

Keywords: creativity, innovation, organizational practices, qualitative, attachment

Procedia PDF Downloads 51
1726 Investigating Non-suicidal Self-Injury Discussions on Twitter

Authors: Muhammad Abubakar Alhassan, Diane Pennington

Abstract:

Social networking sites have become a space for people to discuss public health issues such as non-suicidal self-injury (NSSI). There are thousands of tweets containing self-harm and self-injury hashtags on Twitter. It is difficult to distinguish between the different users who participate in self-injury discussions on Twitter and how their opinions change over time. It is also challenging to understand the topics surrounding NSSI discussions on Twitter. We retrieved tweets using the #selfharm and #selfinjury hashtags and investigated those from the United Kingdom. We applied inductive coding and grouped tweeters into different categories. This study used the Latent Dirichlet Allocation (LDA) algorithm to infer the optimum number of topics that describes our corpus. Our findings revealed that many of those participating in NSSI discussions are non-professional users, as opposed to medical experts and academics. Support organisations, medical teams, and academics campaigned positively on raising self-injury awareness and recovery. Using the LDAvis visualisation technique, we selected the top 20 most relevant terms from each topic and interpreted the topics as: children and youth wellbeing; self-harm misjudgement; mental health awareness; school and mental health support; and suicide and mental health issues. More than 50% of these topics were discussed in England compared to Scotland, Wales, Ireland and Northern Ireland. Our findings highlight the advantages of using the Twitter social network in tackling the problem of self-injury through awareness. There is a need to study the potential risks associated with the use of social networks among self-injurers.
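
A minimal sketch of LDA topic inference over a tweet corpus with scikit-learn is shown below; the tweets, pre-processing choices and the selection of the optimum topic number are placeholders rather than the study's actual pipeline:

```python
# Hedged sketch of LDA topic modelling over a tweet corpus with scikit-learn.
# The tweets below are placeholders; the study's corpus, pre-processing and
# choice of the optimum topic number are not reproduced here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "raising awareness about self-injury recovery and support",
    "school mental health support for young people",
    # ... corpus of tweets retrieved via the self-harm / self-injury hashtags
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(tweets)          # document-term matrix

# Five topics, matching the number of themes interpreted in the study.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(dtm)

# Top terms per topic (the study inspected the top 20 most relevant terms).
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:10]]
    print(f"topic {k}: {', '.join(top)}")
```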

Keywords: self-harm, non-suicidal self-injury, Twitter, social networks

Procedia PDF Downloads 115
1725 Analysis of Direct Current Motor in LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

DC motors were widely used in past centuries and were known as the workhorse of industrial systems until the invention of the AC induction motor, which revolutionised industry. Since then, the use of DC machines has decreased due to factors such as reliability, robustness and complexity, as well as their losses. A methodology is proposed to model a DC motor through simulation in LabVIEW to get an idea of its real-time performance and whether a change in a parameter might yield a larger improvement in losses and reliability.
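
The lumped-parameter DC motor model that such a simulation typically implements (whether in LabVIEW or elsewhere) consists of two coupled first-order equations; a Python sketch with assumed parameter values is shown below for illustration:

```python
# Hedged sketch of the standard lumped-parameter DC motor model, integrated in
# Python for illustration only (the paper builds the model in LabVIEW).
# All parameter values are assumptions, not values from the paper.
import numpy as np
from scipy.integrate import solve_ivp

R, L = 1.0, 0.05          # armature resistance [ohm], inductance [H]
Ke, Kt = 0.01, 0.01       # back-EMF [V.s/rad] and torque [N.m/A] constants
J, B = 0.01, 0.1          # rotor inertia [kg.m^2], viscous friction [N.m.s]
V = 12.0                  # applied armature voltage [V]

def dc_motor(t, state):
    i, omega = state                       # armature current, angular speed
    di = (V - R * i - Ke * omega) / L      # electrical equation
    domega = (Kt * i - B * omega) / J      # mechanical equation
    return [di, domega]

sol = solve_ivp(dc_motor, (0.0, 3.0), [0.0, 0.0], max_step=1e-3)
print("steady-state speed [rad/s]:", sol.y[1, -1])
```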

Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation

Procedia PDF Downloads 537
1724 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome

Authors: Mustafa Metin Donma

Abstract:

Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices. The association of tense indices with WC has also become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute to the differential diagnosis of MetS more than the indices previously introduced. One hundred and ninety-four children, aged 6-11 years, were considered to constitute four groups. The study was performed on normal weight (Group 1), overweight+obese (Group 2), and morbidly obese children without (Group 3) and with (Group 4) MetS findings. Children were included in the groups according to the recommendations of the World Health Organization based on age- and gender-dependent body mass index percentiles. For the MetS group, previously well-established MetS components were considered. Anthropometric measurements, as well as blood pressure values, were taken. Tense indices were computed. The formula for the first tense index was (SP+DP)/2. The second index was the Advanced Donma Tense Index (ADTI), with the formula [(SP+DP)/2] * Height. Statistical calculations were performed, and 0.05 was accepted as the p value indicating statistical significance. There were no statistically significant differences between the groups for pulse pressure, the systolic-to-diastolic pressure ratio and the tense index. Increasing values were observed from Group 1 to Group 4 in terms of mean arterial blood pressure and ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3. However, in Group 4, ADTI, which includes the height parameter in the equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI was suggested as a tense index while investigating children with MetS.
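
The two indices are given explicitly in the abstract; a small sketch computing them is shown below, taking SP and DP to be systolic and diastolic blood pressure as implied by the text, with illustrative example values:

```python
# Sketch of the two tense indices defined in the abstract. SP and DP are taken
# to be systolic and diastolic blood pressure (mmHg), as implied by the text;
# the example values are illustrative, not study data.
def tense_index(sp, dp):
    """First tense index: (SP + DP) / 2."""
    return (sp + dp) / 2.0

def advanced_donma_tense_index(sp, dp, height_m):
    """Advanced Donma Tense Index (ADTI): [(SP + DP) / 2] * Height."""
    return tense_index(sp, dp) * height_m

# Example: a child with SP = 105 mmHg, DP = 65 mmHg and height 1.35 m.
print(tense_index(105, 65))                       # 85.0
print(advanced_donma_tense_index(105, 65, 1.35))  # 114.75
```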

Keywords: blood pressure, child, height, metabolic syndrome, waist circumference

Procedia PDF Downloads 44
1723 On Deterministic Chaos: Disclosing the Missing Mathematics from the Lorenz-Haken Equations

Authors: Meziane Belkacem

Abstract:

We aim at converting the original 3D Lorenz-Haken equations, which describe laser dynamics in terms of self-pulsing and chaos, into two second-order differential equations, out of which we extract the so far missing mathematics and corroborations with respect to nonlinear interactions. Leaning on basic trigonometry, we pull out important outcomes: a fundamental result attributes chaos to forbidden periodic solutions inside a precisely delimited region of the control parameter space that governs the bewildering dynamics.
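
The single-mode Lorenz-Haken laser equations are isomorphic to the Lorenz system (Haken, 1975); the following minimal sketch integrates that standard 3D form. The parameter values are conventional illustrations and do not correspond to the specific control-parameter region delimited in the paper:

```python
# Hedged sketch: integrating the Lorenz system, to which the single-mode
# Lorenz-Haken laser equations are isomorphic. Here sigma, r and b play the
# roles of the cavity-decay ratio, pump (control) parameter and population-
# relaxation ratio; the values below are conventional illustrations only.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=3.0, r=25.0, b=1.0):
    x, y, z = state            # field, polarisation, population-inversion proxies
    return [sigma * (y - x),
            r * x - y - x * z,
            x * y - b * z]

sol = solve_ivp(lorenz, (0.0, 100.0), [1.0, 0.0, 0.0],
                dense_output=True, max_step=0.01)
t = np.linspace(50.0, 100.0, 5000)          # discard the initial transient
x, y, z = sol.sol(t)
print("field amplitude range after transient:", x.min(), x.max())
```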

Keywords: physics, optics, nonlinear dynamics, chaos

Procedia PDF Downloads 146
1722 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: Sahand Golmohammadi, Sana Hosseini Shirazi

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that are effective for the stability of the mentioned structures is one of the most important goals and necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, with the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters in the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES system is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to construct the interaction matrix of the Q-system. For this purpose, instead of using conventional methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in effect, a statistical analysis of the data, determining the correlation coefficients between them, so that the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum impact on the system, respectively, and the RQD and Jw parameters have the maximum and minimum impact on the system, respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the required parameters in the Q-system.
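
For reference, the Q value itself combines the six parameters as the product of three quotients (Barton's classification); the sketch below uses illustrative input values, not measurements from the Azad Dam tunnel:

```python
# Sketch of the standard Q-system rating combining the six parameters listed in
# the abstract. The input values are illustrative assumptions, not measurements
# from the Azad Dam water conveyor tunnel.
def q_value(rqd, jn, jr, ja, jw, srf):
    """Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF): block size, inter-block shear
    strength and active stress terms, respectively."""
    return (rqd / jn) * (jr / ja) * (jw / srf)

# Example: moderately jointed, dry rock mass under medium stress (assumed values).
print(q_value(rqd=75, jn=9, jr=1.5, ja=2, jw=1.0, srf=2.5))   # ≈ 2.5
```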

Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel

Procedia PDF Downloads 57
1721 Evaluation of Weather Risk Insurance for Agricultural Products Using a 3-Factor Pricing Model

Authors: O. Benabdeljelil, A. Karioun, S. Amami, R. Rouger, M. Hamidine

Abstract:

A model for mitigating the risks related to climate conditions in the agricultural sector is presented. It determines the yearly optimum premium to be paid by a producer in order to reach his required turnover. The model is based on both climatic stability and the 'soft' responses of commonly grown species to average climate variations at the same place, inside a safety ball which can be determined from past meteorological data. This allows the use of a linear regression expression for the dependence of the production result on the driving meteorological parameters, the main ones being daily average sunlight, rainfall and temperature. By a simple best-parameter fit from the expert table drawn up with professionals, the optimal representation of yearly production is determined from records of previous years, and the yearly payback is evaluated from the minimum yearly produced turnover. The model also requires accurate pricing of the commodity at year N+1. Therefore, a pricing model is developed using three state variables, namely the spot price, the difference between the mean-term and the long-term forward price, and the long-term structure of the model. The use of historical data enables calibration of the state-variable parameters and allows the pricing of the commodity. Application to beet sugar underlines the pricer's precision: the accuracy between the computed result and real-world data is 99.5%. The optimal premium is then deduced and gives the producer a useful bound for negotiating an offer by insurance companies to effectively protect the harvest. The application to beet production in the French Oise department illustrates the reliability of the present model, with as low as 6% difference between predicted and real data. The model can be adapted to almost any agricultural field by changing the state parameters and calibrating their associated coefficients.
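
The linear-regression step described, expressing yearly production in terms of the driving meteorological parameters, can be sketched as follows with placeholder records; the expert-table calibration and the 3-factor commodity pricing model are not reproduced here:

```python
# Hedged sketch of the linear-regression step: yearly production expressed in
# terms of average sunlight, rainfall and temperature. The records below are
# placeholders, not calibrated data; the pricing model is not reproduced.
import numpy as np

# Illustrative yearly records: [sunlight (h/day), rainfall (mm), temperature (degC)]
X = np.array([[5.1, 620.0, 11.2],
              [4.7, 580.0, 10.8],
              [5.5, 700.0, 11.9],
              [4.9, 640.0, 11.0],
              [5.3, 660.0, 11.5]])
y = np.array([82.0, 75.0, 90.0, 80.0, 86.0])   # yearly production (t/ha)

# Least-squares fit of y = a0 + a1*sunlight + a2*rainfall + a3*temperature.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
print("coefficients:", coef)
print("mean prediction error (%):", 100 * np.abs(predicted - y).mean() / y.mean())
```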

Keywords: agriculture, production model, optimal price, meteorological factors, 3-factor model, parameter calibration, forward price

Procedia PDF Downloads 361
1720 Foamability and Foam Stability of Gelatine-Sodium Dodecyl Sulfate Solutions

Authors: Virginia Martin Torrejon, Song Hang

Abstract:

Gelatine foams are widely explored materials due to their biodegradability, biocompatibility, and availability. They exhibit outstanding properties and are currently the subject of increasing scientific research due to their potential use in different applications, such as biocompatible cellular materials for biomedical products or biofoams as an alternative to fossil-fuel-derived packaging. Gelatine is a highly surface-active polymer, and its concentrated solutions usually do not require surfactants to achieve low surface tension. Still, anionic surfactants like sodium dodecyl sulfate (SDS) interact strongly with gelatine, impacting its viscosity and rheological properties and, in turn, its foaming behaviour. Foaming behaviour is a key parameter for cellular solids produced by mechanical foaming, as it has a significant effect on the processing and properties of cellular materials. Foamability mainly impacts the density and the mechanical properties of the foams, while foam stability is crucial to achieving foams with low shrinkage and desirable pore morphology. This work aimed to investigate the influence of SDS on the foaming behaviour of concentrated gelatine foams by using a dynamic foam analyser. The study of the maximum foam height created, foam formation behaviour, drainage behaviour, and foam structure with regard to bubble size and distribution was carried out on 10 wt% gelatine solutions prepared at different SDS/gelatine concentration ratios. Comparative rheological and viscometry measurements provided a good correlation with the data from the dynamic foam analyser measurements. SDS incorporation at optimum dosages, together with gelatine gelation, led to highly stable foams at high expansion ratios. The increase in viscosity of the hydrogel solution as SDS content increased was a key parameter for foam stabilisation. In addition, the impact of SDS content on gelling time and gel strength also considerably affected the foams' stability and pore structure.

Keywords: dynamic foam analyser, gelatine foams stability and foamability, gelatine-surfactant foams, gelatine-SDS rheology, gelatine-SDS viscosity

Procedia PDF Downloads 140
1719 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process

Authors: Chenhao Zhu

Abstract:

Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation, so in many cases decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the improvement in scientific rigour and accuracy that quantitative mapping could bring. Therefore, in this paper, an effort has been made to provide a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping has been proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. Then, the research transforms the radiant mapping from a polygon into a data point matrix, because a polygon is difficult to engage directly in design formation. Next, another script is created to select the main green spaces from the road network based on the criterion of radiant intensity and to connect the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green space and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to realize the optimization of the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. The parametric link between mapping and planning will bring about a more accurate, objective, and scientific planning process.
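
A conceptual stand-in for the quantitative steps described (outside Grasshopper/Ladybug) is sketched below: thresholding a radiant-intensity point matrix to pick candidate green-space cells and scaling a corridor width with a control parameter. The matrix, threshold and scaling rule are all assumptions for illustration:

```python
# Hedged conceptual sketch (plain Python, outside Grasshopper/Ladybug) of the
# quantitative idea described: a radiant mapping as a data point matrix, the
# least-irradiated cells as candidate green spaces, and a corridor width scaled
# by a control parameter k. Values and rules are placeholders, not the paper's.
import numpy as np

radiant = np.random.default_rng(1).uniform(200.0, 900.0, size=(20, 20))  # kWh/m2, assumed

# Candidate green-space cells: lowest radiant-intensity quartile (assumed criterion).
threshold = np.quantile(radiant, 0.25)
green_cells = np.argwhere(radiant <= threshold)
print("number of candidate green-space cells:", len(green_cells))

def corridor_width(intensity, k=0.02, w_min=6.0):
    """Assumed rule: wider corridor where radiation is higher (more shading needed)."""
    return w_min + k * (intensity - radiant.min())

print("corridor width range [m]:",
      corridor_width(radiant.min()), corridor_width(radiant.max()))
```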

Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper

Procedia PDF Downloads 128
1718 A Posterior Predictive Model-Based Control Chart for Monitoring Healthcare

Authors: Yi-Fan Lin, Peter P. Howley, Frank A. Tuyl

Abstract:

Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
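
As a rough illustration of the beta-binomial posterior predictive idea (not the authors' implementation), the sketch below forms HPD-style predictive limits for the number of events among the next n cases using scipy's betabinom; the prior, sample sizes and coverage are assumptions:

```python
# Hedged sketch of beta-binomial posterior predictive (BBPP) limits for the
# number of events among the next n cases of a clinical indicator. The prior,
# sample sizes and coverage below are illustrative assumptions.
import numpy as np
from scipy.stats import betabinom

def bbpp_hpd_limits(x, N, n_future, a0=1.0, b0=1.0, coverage=0.95):
    """x events in N past cases; Beta(a0, b0) prior; predictive for n_future cases.
    Returns the min/max of the most probable counts whose total predictive
    probability reaches the requested coverage (an HPD-style set)."""
    a, b = a0 + x, b0 + (N - x)                  # posterior Beta parameters
    k = np.arange(n_future + 1)
    pmf = betabinom.pmf(k, n_future, a, b)       # posterior predictive pmf
    order = np.argsort(pmf)[::-1]                # most probable counts first
    cum = np.cumsum(pmf[order])
    m = np.searchsorted(cum, coverage) + 1       # counts needed for coverage
    keep = np.sort(order[:m])
    return keep.min(), keep.max()

# Example: 12 events in 400 past cases; chart limits for batches of 50 cases.
print(bbpp_hpd_limits(x=12, N=400, n_future=50))
```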

Keywords: average run length (ARL), bernoulli cusum (BC) chart, beta binomial posterior predictive (BBPP) distribution, clinical indicator (CI), healthcare organization (HCO), highest posterior density (HPD) interval

Procedia PDF Downloads 195
1717 Fast-Forward Problem in Asymmetric Double-Well Potential

Authors: Iwan Setiawan, Bobby Eka Gunara, Katshuhiro Nakamura

Abstract:

A theory to accelerate systems in quantum dynamics has been constructed to obtain the desired wave function in a shorter time. This theory is developed on adiabatic quantum dynamics, in which regulation is applied to a wave function that satisfies the Schrödinger equation. We show accelerated manipulation of wave functions with the use of a parameter-dependent asymmetric double-well potential, and also when the system is influenced by electromagnetic fields.

Keywords: driving potential, adiabatic quantum dynamics, regulation, electromagnetic field

Procedia PDF Downloads 328
1716 Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1

Authors: Kaoutar Lamrini Uahabi, Mohamed Atounti

Abstract:

In the present work, we consider one category of curves denoted by L(p, k, r, n). These curves are continuous arcs which are trajectories of roots of the trinomial equation z^n = αz^k + (1 − α), where z is a complex number, n and k are two integers such that 1 ≤ k ≤ n − 1 and α is a real parameter greater than 1. Denoting by L the union of all trinomial curves L(p, k, r, n) and using the box counting dimension as fractal dimension, we will prove that the dimension of L is equal to 3/2.
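
For illustration only, a numerical box-counting estimate can be formed by sampling roots of the trinomial equation over a range of α and counting occupied boxes at several scales; this sketch does not reproduce the paper's analytical proof that the dimension equals 3/2:

```python
# Hedged numerical sketch: sampling root trajectories of z^n = alpha*z^k + (1 - alpha)
# for alpha > 1, then estimating the box-counting dimension of the sampled set.
# This illustrates the box-counting estimator, not the paper's analytical proof.
import numpy as np

def trinomial_roots(n, k, alphas):
    """Roots of z^n - alpha*z^k - (1 - alpha) = 0 for each alpha."""
    pts = []
    for a in alphas:
        coeffs = np.zeros(n + 1)
        coeffs[0] = 1.0            # z^n coefficient
        coeffs[n - k] = -a         # -alpha * z^k coefficient
        coeffs[n] = -(1.0 - a)     # constant term
        pts.extend(np.roots(coeffs))
    return np.array(pts)

def box_counting_dimension(points, sizes=(0.2, 0.1, 0.05, 0.025)):
    """Slope of log N(eps) vs. log(1/eps) over a range of box sizes eps."""
    xy = np.column_stack([points.real, points.imag])
    counts = []
    for eps in sizes:
        boxes = {tuple(b) for b in np.floor(xy / eps).astype(int)}
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

pts = trinomial_roots(n=5, k=2, alphas=np.linspace(1.001, 50.0, 4000))
print("estimated box-counting dimension:", box_counting_dimension(pts))
```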

Keywords: feasible angles, fractal dimension, Minkowski sausage, trinomial curves, trinomial equation

Procedia PDF Downloads 173
1715 Does Indian Intellectual Property Policy Affect the U. S. Pharmaceutical Industry? A Comparative Study of Pfizer and Ranbaxy Laboratories in Regards to Trade Related Aspects of Intellectual Property Rights

Authors: Alina Hamid Bari

Abstract:

Intellectual Property (IP) policies of a country have a huge impact on the pharmaceutical industry, as this industry is all about patents. Developed countries have used IP protection to boost their economies; developing countries are concerned about access to medicine for poor people. The U.S. company Pfizer had a monopoly for 14 years for Lipitor, and it all came to an end when Pfizer decided to operate in India. This research focuses on the effects of Indian IP policies on the USA by comparing Pfizer and Ranbaxy with regard to the Trade Related Aspects of Intellectual Property Rights. An inductive approach has been used for this research. The main sources of material are annual reports and theory based on academic books and articles, along with court rulings, policy statements and decisions, websites and newspaper articles. A SWOT analysis is done for both Pfizer and Ranbaxy. The main comparison was carried out by ratio analysis and analysis of the 2011-2012 annual reports for Pfizer and Ranbaxy to see the impact on their profitability. This research concludes that Indian intellectual property laws do affect the profitability of the U.S. pharmaceutical industry, which can in turn have an impact on the US economy. These days India is only granting patents on products which it deems deserving of them, so U.S. companies operating in India have to defend their inventions to obtain a patent. Thus, to operate in India and maintain a market monopoly, US firms have to come up with different strategies.

Keywords: atorvastatin, India, intellectual property, lipitor, Pfizer, pharmaceutical industry, Ranbaxy, TRIPs, U.S.

Procedia PDF Downloads 464
1714 An Analysis on Community Based Heritage Tourism: A Resource for a Small Community in Rural County Clare, Ireland

Authors: Marie Taylor, Catriona Murphy

Abstract:

The aim of this paper is to identify the factors of success in community based heritage tourism initiatives. Heritage and community are central to many tourism initiatives, with heritage tourism having the potential to act as a catalyst for community development. This paper presents the findings of research that examined the relationship between heritage tourism and community development. The findings recognised that heritage tourism has economic, social and cultural benefits for a community, as well as a role in strengthening concepts such as sense of identity, place, and authenticity. In addition, this paper proposes an assessment framework for sustainable community based heritage tourism to identify the factors and contextual influences involved in the success or failure of such initiatives. In evaluating their sustainability, a number of issues are investigated, including the continued role of stakeholders, the role of funding, the influence of collaboration, and the changing role of rural development and its impact on community engagement. The research is descriptive, evaluative and explanatory, exploring and analysing issues such as the development of community structures in community based heritage tourism. Thus, it will contribute to the development of potential tourism and community development policies and strategies at local, national and international levels. An interpretative and inductive approach is utilised, and a mixed-method approach is followed as it encapsulates the best of quantitative and qualitative research methods. The case studies focus on social enterprises in relation to tourism and on community based tourism cooperatives, as there is limited study and knowledge of these. Consequently, this research will contribute to the discourse on community based heritage tourism as an aspect of community development.

Keywords: collaboration, community-based heritage tourism, stakeholders, sustainable tourism

Procedia PDF Downloads 336
1713 Assessment of Heavy Metals and Radionuclide Concentrations in Mafikeng Waste Water Treatment Plant

Authors: M. Mathuthu, N. N. Gaxela, R. Y. Olobatoke

Abstract:

A study was carried out to assess the heavy metal and radionuclide concentrations of water from the waste water treatment plant in Mafikeng Local Municipality in order to evaluate treatment efficiency. Ten water samples were collected from various stages of water treatment, which included the sewage delivered to the plant, the two treatment stages and the effluent, and also from the community. The samples were analyzed for heavy metal content using an Inductively Coupled Plasma Mass Spectrometer. The gross α/β activity concentration in the water samples was evaluated by Liquid Scintillation Counting, whereas the concentrations of individual radionuclides were measured by gamma spectroscopy. The results showed a marked reduction in heavy metal concentrations, from 3 µg/L (As)–670 µg/L (Na) in the sewage entering the plant to 2 µg/L (As)–170 µg/L (Fe) in the effluent. Beta activity was not detected in the water samples except in the incoming sewage, the concentration of which was within reference limits. However, the gross α activity in all the water samples (7.7-8.02 Bq/L) exceeded the 0.1 Bq/L limit set by the World Health Organization (WHO). Gamma spectroscopy analysis revealed very high concentrations of 235U and 226Ra in the water samples, with the lowest concentrations (9.35 and 5.44 Bq/L, respectively) in the incoming sewage and the highest concentrations (73.8 and 47 Bq/L, respectively) in the community water, suggesting contamination along the water processing line. All the values were considerably higher than the limits of the South African Target Water Quality Range and the WHO. However, the estimated total doses of the two radionuclides for the analyzed water samples (10.62 - 45.40 µSv yr-1) were all well below the reference level of the committed effective dose of 100 µSv yr-1 recommended by the WHO.

Keywords: gross α/β activity, heavy metals, radionuclides, 235U, 226Ra, water sample

Procedia PDF Downloads 432
1712 Understanding Indigenous Perspectives and Critical Knowledge in International Law

Authors: Radhika Jagtap

Abstract:

Contemporary scholarship in international legal theory is investigating new avenues for providing alternatives to dominant concepts. Indigenous peoples’ philosophies, and the perspectives developed through them, provide fertile ground to explore such alternative ideas. This review paper evaluates the theorized accounts of indigenous scholarship which have contributed towards a rich body of knowledge generating alternative visions of the dominant notions of ‘post-coloniality’, ‘resistance’ and ‘globalization’. Further, it shall assess the relevance of such a project in shaping contemporary international legal thought. Traditional or classical international law has been argued to be highly influenced by a colonial and imperialist history, which also left a mark on the way the dominant discourses of resistance and globalization are read in mainstream international law. The paper shall first define what is meant by indigenous philosophy and what kind of indigeneity that is inclusive of. Second, the paper defines the dominant discourse and then counters it with the alternative indigenous perspective for each concept in question. Finally, the paper shall conclude with certain theoretical findings: that post-coloniality, from an indigenous perspective, leads to the further marginalization of indigeneity, especially in the third world; that human rights as the sole means of representing resistance in international law ends up making it a very state-centric discipline; and last, that globalization from an indigenous, marginalised perspective is not as celebrated as it is in mainstream international law. Major scholarly works central to the discussion are those of Linda Tuhiwai Smith, Ella Shohat and David Harvey. The nature of the research is inductive, involving mostly a theoretical review of scholarly works.

Keywords: indigenous, post colonial, globalization, perspectives

Procedia PDF Downloads 322
1711 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large scale in three dimensions and by the diffusion equation which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions to obtain stable solutions are established in Tikhonov’s regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or in the case of optical imaging, the true image. Yet, in clinically-based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts the solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and non-contact experimentally measured data.
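
A minimal sketch of Tikhonov regularization for a generic discrete ill-posed linear problem is given below; the forward operator is an arbitrary ill-conditioned matrix rather than the diffusion-equation model, and the PMBF constrained approach itself is not reproduced:

```python
# Hedged sketch of Tikhonov regularization for a discrete linear ill-posed
# problem, the benchmark method discussed above. The forward operator here is a
# generic ill-conditioned matrix, not the diffusion-equation model.
import numpy as np

def tikhonov_solve(A, d, lam):
    """Minimise ||A x - d||^2 + lam * ||x||^2  =>  (A^T A + lam I) x = A^T d."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

# Ill-conditioned forward operator (Hilbert-like matrix) and noisy data.
rng = np.random.default_rng(0)
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.linspace(0.5, 1.5, n)
d = A @ x_true + 1e-4 * rng.standard_normal(n)

for lam in (0.0, 1e-8, 1e-4):
    x = tikhonov_solve(A, d, lam) if lam > 0 else np.linalg.lstsq(A, d, rcond=None)[0]
    print(f"lambda = {lam:g}, reconstruction error = {np.linalg.norm(x - x_true):.3e}")
```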

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 261