Search results for: linear predictive coding (LPC)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4760


740 Analyzing Nonsimilar Convective Heat Transfer in Copper/Alumina Nanofluid with Magnetic Field and Thermal Radiation

Authors: Abdulmohsen Alruwaili

Abstract:

A partial differential system featuring momentum and energy balance is often used to describe simulations of flow initiation and thermal shifting in boundary layers. The buoyancy force, expressed in terms of temperature, is factored into the momentum balance equation. Because the buoyancy force causes the flow quantity to fluctuate along the streamwise direction X, the problem can, to the best of our knowledge, only be analyzed through nonsimilar modeling. In this analysis, a nonsimilar model is developed for radiative mixed convection of a magnetized power-law nanoliquid flowing over a vertical plate installed in a stationary fluid. The flow in the vertical direction is initiated by upward linear stretching. The nanofluid is assumed to be a composite of copper (Cu) and alumina (Al₂O₃) nanoparticles, and viscous dissipation is assumed to be negligible. The nonsimilar system is solved by the local nonsimilarity (LNS) method using the numerical algorithm bvp4c. Surface temperature and flow field are shown visually in relation to factors such as mixed convection, magnetic field strength, nanoparticle volume fraction, radiation parameters, and Prandtl number. The effects of the magnetic and mixed convection parameters on the rate of energy transfer and the friction coefficient are presented in tabular form. The results obtained are compared with the published literature. It is found that the presence of nanoparticles significantly improves the temperature profile of the considered nanoliquid. It is also observed that the velocity profile decreases as the magnetic parameter increases, while enhancement of the nanoparticle concentration and the mixed convection parameter improves the velocity profile.
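The abstract solves the LNS-reduced equations with the collocation solver bvp4c (a MATLAB routine). As an illustration of the same collocation-based approach, here is a minimal sketch in Python using scipy.integrate.solve_bvp on a toy two-point boundary value problem; the actual LNS system and its boundary conditions are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy two-point BVP in the spirit of a reduced boundary-layer equation:
#   y'' + y = 0,  y(0) = 0,  y(pi/2) = 1   (exact solution: y = sin(x))
def ode(x, y):
    # y[0] = y, y[1] = y'
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # Residuals of the two boundary conditions
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 11)   # initial mesh
y0 = np.zeros((2, x.size))            # initial guess
sol = solve_bvp(ode, bc, x, y0)
```

The real LNS system would replace `ode` and `bc` with the coupled momentum/energy equations and their wall and far-field conditions.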

Keywords: nanofluid, power law model, mixed convection, thermal radiation

Procedia PDF Downloads 21
739 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide one or more bias-correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
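The idea of alternating between the two unknown factors of a bilinear model can be illustrated, in highly simplified form, by alternating least squares on a noiseless rank-1 model. This sketch uses made-up data and none of XAEM's actual likelihood machinery; it only shows why alternating updates converge when each factor is estimated with the other held fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.random(20) + 0.5          # unknown "design" factor
b_true = rng.random(5) + 0.5           # unknown "beta" factor
Y = np.outer(x_true, b_true)           # noiseless bilinear data Y = x b^T

# Alternate between the two unknown factors (least-squares updates)
b = np.ones(5)
for _ in range(50):
    x = Y @ b / (b @ b)                # update x holding b fixed
    b = Y.T @ x / (x @ x)              # update b holding x fixed

err = np.linalg.norm(Y - np.outer(x, b)) / np.linalg.norm(Y)
```

The factors are only identified up to a scale swap between x and b, which is why convergence is checked on the reconstruction Y rather than on the factors themselves.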

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 137
738 Modelling of Heat Generation in an 18650 Lithium-Ion Battery Cell under Varying Discharge Rates

Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery

Abstract:

Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15 and 35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is the addition of the joule heating effects and the entropic changes across the battery. These values can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with temperature (dOCV/dT). This paper focuses on experimental characterization and comparative modelling of the heat generation rate (Q) of an 18650 cell across several current discharge rates (0.5C, 1C, and 1.5C). The analysis is conducted using several non-linear mathematical function types, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3 to 7), exponential (n = 2), and power functions. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial results display low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated here could be utilized for larger battery applications more easily than other available methods.
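The heat-generation method described (joule heating plus entropic changes) corresponds to the commonly used Bernardi-type relation Q = I(OCV − V) + I·T·(dOCV/dT). A minimal sketch, with illustrative rather than measured values:

```python
def heat_generation(I, V, OCV, T_kelvin, dOCV_dT):
    """Bernardi-style heat generation [W]: irreversible (joule)
    plus reversible (entropic) contributions."""
    joule = I * (OCV - V)              # overpotential heating
    entropic = I * T_kelvin * dOCV_dT  # entropy change across the cell
    return joule + entropic

# Illustrative (not measured) values for a ~1C discharge of an 18650 cell
Q = heat_generation(I=2.5, V=3.6, OCV=3.7, T_kelvin=298.15, dOCV_dT=-0.0001)
```

In the paper's workflow, Q computed this way over a discharge would then be curve-fitted (polynomial, exponential, power) before being passed to the CFD solver as a heat source.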

Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop

Procedia PDF Downloads 90
737 Patient Perspectives on the Role of Orthopedic Nurse Practitioners: A Cross-Sectional Study

Authors: Merav Ben Natan, May Revach, Or Sade, Yaniv Yonay, Yaron Berkovich

Abstract:

Background: The inclusion of nurse practitioners (NPs) specializing in orthopedics holds promise for enhancing the quality of care for orthopedic patients. Understanding patients' perspectives on this role is crucial for evaluating the feasibility and acceptance of integrating NPs into orthopedic settings. This study aims to explore the receptiveness of orthopedic patients to treatment by orthopedic NPs and examines potential associations between patients' willingness to engage with NPs, their familiarity with the NP role, perceptions of nursing, and satisfaction with orthopedic nursing care. Methods: This cross-sectional study involved patients admitted to an orthopedic department at a central Israeli hospital between January and February 2023. Data were collected using a validated questionnaire consisting of five sections, reviewed by content experts. Statistical analyses were conducted using SPSS and included descriptive statistics, independent samples t-tests, Pearson correlations, and linear regression. Results: Participants showed a moderate willingness to receive treatment from orthopedic NPs, with more than two-thirds expressing strong openness. Patients were generally receptive to NPs performing various clinical tasks, though there was less enthusiasm for NPs' involvement in medication management and preoperative evaluations. Positive attitudes towards nurses and familiarity with the NP role were significant predictors of patient receptiveness to NP treatment. Conclusion: Patient acceptance of orthopedic NPs varies across different aspects of care. While there is a general willingness to receive care from NPs, these nuanced preferences must be considered when implementing NPs in orthopedic settings. Awareness and positive perceptions of the NP role play crucial roles in shaping patients' willingness to engage with NPs.

Keywords: orthopedic nurse practitioners, patient receptiveness, perceptions of nursing, clinical tasks

Procedia PDF Downloads 22
736 Understanding the Interactive Nature in Auditory Recognition of Phonological/Grammatical/Semantic Errors at the Sentence Level: An Investigation Based upon Japanese EFL Learners' Self-Evaluation and Actual Language Performance

Authors: Hirokatsu Kawashima

Abstract:

One important element of teaching and learning listening is intensive listening, such as listening for precise sounds, words, and grammatical and semantic units. Several classroom-based investigations have explored the usefulness of auditory recognition of phonological, grammatical, and semantic errors in such a context. The current study reports the results of one such investigation, which targeted auditory recognition of phonological, grammatical, and semantic errors at the sentence level. Fifty-six Japanese EFL learners participated. Their recognition performance for phonological, grammatical, and semantic errors was measured on a 9-point self-evaluation scale from the perspective of 1) two types of similar English sounds (vowel and consonant minimal-pair words), 2) two types of sentence word order (verb-phrase-based and noun-phrase-based word orders), and 3) two types of semantic consistency (verb-purpose and verb-place agreements), respectively, and their general listening proficiency was examined using standardized tests. A number of findings were made about the interactive relationships between the three types of auditory error recognition and general listening proficiency. Analyses based on the OPLS (Orthogonal Projections to Latent Structures) regression model disclosed, for example, that the three types of auditory error recognition are linked in a non-linear way: the highest explanatory power for general listening proficiency may be attained when quadratic interactions between auditory recognition of errors related to vowel minimal-pair words and that of errors related to noun-phrase-based word order are included (R²=.33, p=.01).
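The abstract's key finding, that an interaction term raises explanatory power, can be illustrated with an ordinary least-squares fit that includes a product term. The scores below are synthetic stand-ins, not the study's data, and plain OLS stands in for the OPLS model.

```python
import numpy as np

rng = np.random.default_rng(1)
vowel = rng.random(56)      # hypothetical vowel minimal-pair scores
np_order = rng.random(56)   # hypothetical noun-phrase word-order scores
# Synthetic "listening proficiency" that genuinely contains an interaction
listening = 1.0 + 0.5 * vowel + 0.8 * np_order + 1.2 * vowel * np_order

# Design matrix with intercept, main effects, and the interaction term
X = np.column_stack([np.ones(56), vowel, np_order, vowel * np_order])
coef, *_ = np.linalg.lstsq(X, listening, rcond=None)

pred = X @ coef
ss_res = np.sum((listening - pred) ** 2)
ss_tot = np.sum((listening - listening.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # coefficient of determination
```

Dropping the product column from `X` and refitting would show the drop in R² that motivates including the interaction.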

Keywords: auditory error recognition, intensive listening, interaction, investigation

Procedia PDF Downloads 508
735 Effect of Concentration Level and Moisture Content on the Detection and Quantification of Nickel in Clay Agricultural Soil in Lebanon

Authors: Layan Moussa, Darine Salam, Samir Mustapha

Abstract:

Heavy metal contamination of agricultural soils in Lebanon poses serious environmental and health problems. Intensive efforts are being made to improve existing methods for quantifying heavy metals in contaminated environments, since conventional detection techniques have proven time-consuming, tedious, and costly. The application of hyperspectral remote sensing in this field is possible and promising; however, the factors affecting the efficiency of hyperspectral imaging in detecting and quantifying heavy metals in agricultural soils have not been thoroughly studied. This study assesses the use of hyperspectral imaging for the detection of Ni in agricultural clay soil collected from the Bekaa Valley, a major agricultural area in Lebanon, under different contamination levels and soil moisture contents. Soil samples were contaminated with Ni at concentrations ranging from 150 mg/kg to 4000 mg/kg. In parallel, soil with background contamination was subjected to moisture levels varying from 5 to 75%. Hyperspectral imaging was used to detect and quantify Ni contamination in the soil at the different contamination levels and moisture contents. IBM SPSS statistical software was used to develop models, constructed with linear regression algorithms, that predict the concentration of Ni and the moisture content in agricultural soil. The spectral curves obtained reflected an inverse correlation of both Ni concentration and moisture content with reflectance. The models developed achieved high predicted R² values of 0.763 for Ni concentration and 0.854 for moisture content, and indicated that the presence of Ni was best expressed near 2200 nm and that of moisture near 1900 nm.
The results from this study would allow us to define the potential of using the hyperspectral imaging (HSI) technique as a reliable and cost-effective alternative for heavy metal pollution detection in contaminated soils and soil moisture prediction.
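A minimal sketch of the linear-regression step follows (the study used IBM SPSS; the calibration pairs below are hypothetical, chosen only to mirror the reported inverse reflectance-concentration relationship near 2200 nm):

```python
import numpy as np

# Hypothetical calibration pairs: reflectance at ~2200 nm vs. Ni (mg/kg),
# decreasing reflectance with increasing concentration as reported.
reflectance = np.array([0.62, 0.55, 0.47, 0.40, 0.31, 0.24])
ni_mg_kg = np.array([150., 500., 1000., 2000., 3000., 4000.])

# Simple univariate linear regression
slope, intercept = np.polyfit(reflectance, ni_mg_kg, 1)
predicted = slope * reflectance + intercept
ss_res = np.sum((ni_mg_kg - predicted) ** 2)
ss_tot = np.sum((ni_mg_kg - ni_mg_kg.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

A negative slope reproduces the inverse spectral relationship; in practice the model would be built from band reflectances extracted from the hyperspectral cube rather than single hand-picked values.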

Keywords: heavy metals, hyperspectral imaging, moisture content, soil contamination

Procedia PDF Downloads 96
734 In vivo Mechanical Characterization of Facial Skin Combining Digital Image Correlation and Finite Element Modeling

Authors: Huixin Wei, Shibin Wang, Linan Li, Lei Zhou, Xinhao Tu

Abstract:

Facial skin is a biomedical material with the complex mechanical properties of anisotropy, viscoelasticity, and hyperelasticity. The mechanical properties of facial skin are crucial for a number of applications, including facial plastic surgery, animation, dermatology, the cosmetic industry, and impact biomechanics. Skin is a complex multi-layered material that can be broadly divided into three main layers: the epidermis, the dermis, and the hypodermis. Collagen fibers account for 75% of the dry weight of dermal tissue, and it is these fibers that are responsible for the mechanical properties of skin. Much research on these anisotropic mechanical properties has concentrated on in vitro testing, but the mechanical properties of skin differ greatly between in vivo and in vitro conditions. In this study, we present a method to measure the mechanical properties of facial skin in vivo. Digital image correlation (DIC) and indentation tests were used to obtain the experimental data, including the deformation of the facial surface and the indentation force-displacement curve. The experiment was then simulated using a finite element (FE) model. Computed tomography (CT) and reconstruction techniques were applied to obtain the real tissue geometry, yielding a three-dimensional FE model of facial skin comprising a bi-layer system. As the epidermis is relatively thin, the epidermis and dermis were regarded as one layer, with the hypodermis below it. The upper layer was modeled with a Gasser-Ogden-Holzapfel (GOH) model to describe the hyperelastic and anisotropic behaviors of the dermis, and the lower layer was modeled as linear elastic. Finally, the material properties of the two layers were determined by minimizing the error between the FE results and the experimental data.
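The final inverse step, minimizing the error between FE output and experimental data, can be sketched as a least-squares parameter fit. A simple Hertz-type power law stands in for the full GOH/linear-elastic FE forward model here; the stiffness constant k and the depth data are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Surrogate for the FE forward model: Hertz-type indentation response
# F = k * d**1.5, where k stands in for the layer material parameters.
def forward(k, depth):
    return k * depth ** 1.5

depth = np.linspace(0.1, 1.0, 10)    # indentation depth [mm], hypothetical
measured = forward(2.0, depth)       # "experimental" curve, k_true = 2.0

def residual(params):
    # Mismatch between simulated and measured force-displacement curves
    return forward(params[0], depth) - measured

fit = least_squares(residual, x0=[0.5])
k_hat = fit.x[0]
```

In the actual workflow, each residual evaluation would require a full FE run, and the parameter vector would contain the GOH constants and the elastic modulus of the lower layer.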

Keywords: facial skin, indentation test, finite element, digital image correlation, computed tomography

Procedia PDF Downloads 107
733 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition

Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria

Abstract:

Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The normal transverse deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study aims to investigate the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, utilizing a cubic global displacement field in combination with a linear layerwise local displacement distribution, which assures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is maintained independently of the number of layers. Moreover, the proposed formulation allows for the determination of transverse shear and normal stresses directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as results obtained with commercial finite elements, rendering satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analyses.

Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses

Procedia PDF Downloads 152
732 Evaluation of the Gamma-H2AX Expression as a Biomarker of DNA Damage after X-Ray Radiation in Angiography Patients

Authors: Reza Fardid, Aliyeh Alipour

Abstract:

Introduction: Coronary heart disease (CHD) is among the most common and deadliest diseases. Coronary angiography is an important tool for the diagnosis and treatment of this disease. Because angiography involves exposure to ionizing radiation, it can lead to harmful effects: ionizing radiation induces double-strand breaks in DNA, which are potentially life-threatening lesions. The purpose of the present study is to investigate the phosphorylation of histone H2AX at the location of double-strand breaks in peripheral blood lymphocytes as an indication of the biological effects of radiation on angiography patients. Materials and Methods: The method is based on measuring the level of phosphorylated histone H2AX (gamma-H2AX, γH2AX) at serine 139 after the formation of a DNA double-strand break. 5 cc of blood was sampled from each of 24 angiography patients before and after irradiation. Blood lymphocytes were extracted, fixed, and stained with specific γH2AX antibodies. Finally, the γH2AX signal, an indicator of double-strand breaks, was measured by flow cytometry. Results and discussion: In all patients, an increase was observed in the number of DNA double-strand breaks after irradiation (20.15 ± 14.18) compared to before exposure (1.52 ± 0.34). The mean number of DNA double-strand breaks also showed a linear correlation with the dose-area product (DAP). However, although the induction of DNA double-strand breaks is associated with the radiation dose received by patients, the effect of individual factors such as radiosensitivity and regenerative capacity should not be ignored. If, in the future, the DNA damage response can be measured for every angiography patient and used as a biomarker of patient dose, the impact at the public-health level could be considerable. Conclusion: Using automated flow cytometry readings, it is possible to detect γH2AX in large numbers of blood cells. Therefore, this technique could play a significant role in monitoring patients.

Keywords: coronary angiography, DSB of DNA, γH2AX, ionizing radiation

Procedia PDF Downloads 182
731 Validation of the Recovery of House Dust Mites from Fabrics by Means of Vacuum Sampling

Authors: A. Aljohani, D. Burke, D. Clarke, M. Gormally, M. Byrne, G. Fleming

Abstract:

Introduction: House dust mites (HDMs) are a source of allergen particles embedded in textiles and furnishings. Vacuum sampling is commonly used to recover HDMs and determine their abundance, but the efficiency of this method is less than standardized. Here, the efficiency of recovery of HDMs from home-associated textiles was evaluated using vacuum sampling protocols. Methods/Approach: Live mites (LMs) or dead mites (DMs) of the house dust mite Dermatophagoides pteronyssinus (FERA, UK) were separately seeded onto the surfaces of Smooth Cotton, Denim, and Fleece (25 mites per 10x10 cm² square) and left for 10 minutes before vacuuming. Fabrics were vacuumed (SKC Flite 2 pump) at a flow rate of 14 L/min for 60, 90, or 120 seconds, and the number of mites retained by the filter unit (0.4 μm x 37 mm) was determined. Vacuuming was carried out in a linear direction (Protocol 1) or in a multidirectional pattern (Protocol 2). Additional fabrics with LMs were also frozen and then thawed, thereby euthanizing the live mites (now termed EMs). Results/Findings: While recovery was significantly greater (p=0.000) from fabrics seeded with DMs than with LMs (76% greater), irrespective of vacuuming protocol or fabric type, the efficiency of recovery of DMs (72%-76%) did not vary significantly between fabrics. For fabrics containing EMs, recovery was greatest for Smooth Cotton and Denim (65-73% recovered) and least for Fleece (15% recovered). There was no significant difference (p=0.99) between the recovery of mites across all three mite categories from Smooth Cotton and Denim, but significantly fewer (p=0.000) mites were recovered from Fleece. Scanning electron microscopy images of HDM-seeded fabrics showed that live mites burrowed deeply into the Fleece weave, which reduced their efficiency of recovery by vacuuming. Research Implications: The results presented here have implications for the recovery of HDMs by vacuuming and for the choice of fabrics to ameliorate HDM-dust sensitization.

Keywords: allergy, asthma, dead, fabric, fleece, live mites, sampling

Procedia PDF Downloads 135
730 Assessment of the Spatio-Temporal Distribution of Pteridium aquilinum (Bracken Fern) Invasion on the Grassland Plateau in Nyika National Park

Authors: Andrew Kanzunguze, Lusayo Mwabumba, Jason K. Gilbertson, Dominic B. Gondwe, George Z. Nxumayo

Abstract:

Knowledge about the spatio-temporal distribution of invasive plants in protected areas provides a base from which hypotheses explaining the proliferation of plant invasions can be made, alongside the development of relevant invasive plant monitoring programs. The aim of this study was to investigate the spatio-temporal distribution of bracken fern on the grassland plateau of Nyika National Park over the past 30 years (1986-2016) and to determine the current extent of the invasion. Remote sensing, machine learning, and statistical modelling techniques (object-based image analysis, image classification, and linear regression analysis) in geographical information systems were used to determine both the spatial and temporal distribution of bracken fern in the study area. The results reveal that bracken fern has been increasing its coverage of the Nyika plateau at an estimated annual rate of 87.3 hectares since 1986. This translates to an estimated net increase of 2,573.1 hectares, from 1,788.1 hectares (1986) to 4,361.9 hectares (2016). As of 2017, bracken fern covered 20,940.7 hectares, approximately 14.3% of the entire grassland plateau. Additionally, the fern was observed to be distributed most densely around Chelinda camp (on the central plateau) as well as in forest verges and roadsides across the plateau. Based on these results, it is recommended that ecological niche modelling approaches be employed to (i) isolate the most important factors influencing bracken fern proliferation and (ii) identify and prioritize areas requiring immediate control interventions so as to minimize bracken fern proliferation in Nyika National Park.
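The linear-regression trend step can be sketched as follows. The two endpoint areas are taken from the abstract, while the intermediate epochs are invented for illustration, so the fitted rate only approximates the reported 87.3 ha/year.

```python
import numpy as np

# Reported endpoint areas plus hypothetical intermediate estimates
years = np.array([1986, 1996, 2006, 2016])
area_ha = np.array([1788.1, 2650.0, 3500.0, 4361.9])  # middle values assumed

# Ordinary least-squares trend: hectares gained per year
rate, intercept = np.polyfit(years, area_ha, 1)
net_increase = area_ha[-1] - area_ha[0]
```

With real per-epoch classification areas from the Landsat time series, the same fit yields the study's annual invasion rate.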

Keywords: bracken fern, image classification, Landsat-8, Nyika National Park, spatio-temporal distribution

Procedia PDF Downloads 178
729 Differences in Vitamin D Status in Caucasian and Asian Women Following Ultraviolet Radiation (UVR) Exposure

Authors: O. Hakim, K. Hart, P. McCabe, J. Berry, L. E. Rhodes, N. Spyrou, A. Alfuraih, S. Lanham-New

Abstract:

It is known that skin pigmentation reduces the penetration of ultraviolet radiation (UVR) and thus the photosynthesis of 25(OH)D. However, the ethnic differences in 25(OH)D production remain to be fully elucidated. This study aimed to investigate the differences in vitamin D production between Asian and Caucasian postmenopausal women in response to a defined, controlled UVB exposure. Seventeen women participated in the study, acting as their own controls: nine white Caucasian (skin phototypes II and III) and eight South Asian (skin phototypes IV and V). Three blood samples were taken for measurement of 25(OH)D during the run-in period (nine days, no sunbed exposure), after which all subjects underwent an identical UVR exposure protocol irrespective of skin colour (nine days; three sunbed sessions of 6, 8, and 8 minutes respectively, with approximately 80% of the body surface exposed). Skin tone was measured four times during the study. Both groups showed a gradual increase in 25(OH)D, with final levels significantly higher than baseline (p<0.01). Mean 25(OH)D concentration rose from a baseline of 43.58±19.65 to 57.80±17.11 nmol/l among the Caucasian women and from 27.03±23.92 to 44.73±17.74 nmol/l among the Asian women. The baseline vitamin D status was classified as deficient among the Asian women and insufficient among the Caucasian women. The percentage increase in vitamin D3 was 39.86% (21.02) among the Caucasian and 207.78% (286.02) among the Asian subjects; this greater response to UVR exposure reflects the lower baseline levels of the Asian subjects. A mixed linear model analysis identified a significant effect of the duration of UVR exposure on the production of 25(OH)D, but no significant effect of ethnicity or skin tone.
These novel findings indicate that people of Asian ethnicity are fully capable of producing an amount of vitamin D similar to that of the Caucasian group; the initial vitamin D concentration influences the amount of UVB needed to reach equal serum concentrations.

Keywords: ethnicity, Caucasian, South Asian, vitamin D, ultraviolet radiation, UVR

Procedia PDF Downloads 531
728 Classifying Affective States in Virtual Reality Environments Using Physiological Signals

Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley

Abstract:

Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging, as individuals have varying perceptions of their environments. It is therefore crucial that there be publicly available databases and virtual reality (VR) environments that have been scientifically validated for emotional classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce acute stress and a calm state in participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective states. The subjective measures included the Self-Assessment Manikin, emotion cards, and a 9-point visual analogue scale for perceived stress, collected using a virtual reality assessment tool developed by our team. The objective measures comprised electrocardiogram and respiration data collected from 25 participants (15 M, 10 F; mean age = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. The subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states. For classifying the affective states, a logistic regression (LR) model and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM, achieving 93.8% accuracy, 96.2% precision, and 93.8% recall under leave-one-subject-out cross-validation. The VR assessment tool and the data collected in this study are publicly available to other researchers.
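A minimal sketch of a logistic-regression classifier on two stand-in features (an HRV summary and respiration rate). The synthetic values below are not the study's data, and the leave-one-subject-out protocol is omitted; the sketch only shows the model form used for the binary calm/stress decision.

```python
import numpy as np

# Toy stand-ins for extracted features: [HRV summary, respiration rate]
rng = np.random.default_rng(7)
calm = np.column_stack([rng.normal(50, 5, 40), rng.normal(12, 1, 40)])
stress = np.column_stack([rng.normal(25, 5, 40), rng.normal(20, 1, 40)])
X = np.vstack([calm, stress])
y = np.concatenate([np.zeros(40), np.ones(40)])  # 0 = calm, 1 = stress

# Standardize features, then fit logistic regression by gradient descent
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))       # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)          # gradient of log-loss
    b -= 0.1 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
```

A linear-kernel SVM on the same standardized features would give the comparison model the abstract reports.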

Keywords: affective computing, biosignals, machine learning, stress database

Procedia PDF Downloads 138
727 The Logistic Equation and Fractal Dimension in Escalator Operations

Authors: Ali Albadri

Abstract:

To our knowledge, the logistic equation has not previously been studied in scientific fields outside ecology, and it has not been used to understand the behavior of a dynamic mechanical system such as an escalator. We have studied the compatibility of the logistic map against real measurements from an escalator, and this study has demonstrated good agreement between the logistic equation and the experimental measurements. It has also uncovered a potential relationship between the fractal dimension and the non-linearity parameter, R, in the logistic equation: the fractal dimension increases as R increases. This implies that the fractal dimension increases as the machine's life span moves from the steady/stable phase through the period-doubling phase to a chaotic phase. The fractal dimension and the parameter R can thus be used as tools to verify and check the health of machines. We propose a theory that a machine's behavior over its life span can be classified into three stages: a steady/stable stage, a period-doubling stage, and a chaotic stage. The level of attention a machine requires differs depending on the stage it is in, and the rate of faults increases as the machine moves through these three stages. During the period-doubling and chaotic stages, the number of faults starts to increase and becomes less predictable; predictability improves as our monitoring of changes in the fractal dimension and the parameter R improves. The principles and foundations of this theory have, and will continue to have, a profound impact on the design, operation, and maintenance schedules of systems, whether mechanical, electrical, or electronic.
The methodology discussed in this paper gives businesses the chance to be more careful at the design stage and in planning maintenance to control costs. The findings can also be applied to correlate the three stages of a mechanical system with more in-depth mechanical parameters such as wear and fatigue life.
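The three regimes described (steady/stable, period-doubling, and chaotic) can be reproduced directly from the logistic map x_{n+1} = R x_n (1 - x_n) with illustrative R values; the escalator measurements themselves are not modeled here.

```python
def logistic_trajectory(r, x0=0.5, n=500, keep=100):
    """Iterate x_{n+1} = r * x_n * (1 - x_n); return the last `keep` states,
    rounded so that converged orbits collapse to a small set of values."""
    x = x0
    for _ in range(n - keep):   # discard the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 6))
    return tail

stable = set(logistic_trajectory(2.8))    # fixed point: one distinct value
period2 = set(logistic_trajectory(3.2))   # period doubling: two values
chaotic = set(logistic_trajectory(3.9))   # chaos: many distinct values
```

The number of distinct long-run states, growing from one to two to many as R increases, is the behavioral signature the paper maps onto a machine's life-span stages.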

Keywords: logistic map, bifurcation map, fractal dimension, logistic equation

Procedia PDF Downloads 99
726 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with a convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals' ambulatory physiological measures and self-reported affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
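dynr.mi() is an R routine; the pooling step behind any MI workflow, combining per-imputation estimates by Rubin's rules, can be sketched language-neutrally in Python. The estimates and variances below are hypothetical, standing in for a parameter estimated on each of m = 5 imputed data sets.

```python
import statistics

# Hypothetical parameter estimates and variances from m = 5 imputed data sets
estimates = [0.42, 0.45, 0.40, 0.44, 0.43]
variances = [0.010, 0.012, 0.011, 0.009, 0.010]
m = len(estimates)

pooled_estimate = statistics.mean(estimates)
within = statistics.mean(variances)            # average within-imputation var
between = statistics.variance(estimates)       # between-imputation var
total_variance = within + (1 + 1 / m) * between  # Rubin's rules
```

The between-imputation component is what listwise deletion discards; it is also the quantity whose stabilization the convergence diagnostics in dynr.mi() monitor.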

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 161
725 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model

Authors: Donatella Giuliani

Abstract:

In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behavior of fireflies; in this context, it is used to determine the number of clusters and the corresponding cluster means in a histogram-based segmentation approach. These means are then used in the initialization step for the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are estimated by the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. The validation has been performed using several standard measures: the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which brings a considerable reduction in computational cost.
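The EM stage of the pipeline can be sketched as follows: a minimal 1-D Gaussian-mixture EM in Python, with the firefly-derived initial means supplied directly (the Firefly Algorithm itself is omitted) and synthetic intensities standing in for image pixels:

```python
import numpy as np

def em_gmm_1d(x, means, n_iter=50):
    """EM for a 1-D Gaussian mixture; `means` plays the role of the
    firefly-derived initial cluster means (here supplied directly)."""
    k = len(means)
    mu = np.asarray(means, dtype=float)
    sigma = np.full(k, x.std())          # common initial spread
    pi = np.full(k, 1.0 / k)             # mixing coefficients (priors)
    for _ in range(n_iter):
        # E-step: responsibilities (posterior probabilities of each component)
        d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * d
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, spreads, and priors
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    labels = r.argmax(axis=1)            # assign each pixel to its max-responsibility cluster
    return mu, labels

# Synthetic "pixel intensities": two populations around 60 and 180
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(60, 5, 500), rng.normal(180, 8, 500)])
mu, labels = em_gmm_1d(x, means=[50, 200])
```

The final `argmax` over responsibilities is the maximum-posterior pixel assignment the abstract describes.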

Keywords: clustering images, firefly algorithm, Gaussian mixture model, meta heuristic algorithm, image segmentation

Procedia PDF Downloads 212
724 Finite Element Modeling and Nonlinear Analysis for Seismic Assessment of Off-Diagonal Steel Braced RC Frame

Authors: Keyvan Ramin

Abstract:

The geometric nonlinearity of the Off-Diagonal Bracing System (ODBS) can complement and extend the material nonlinearity of reinforced concrete. In the initial phase, finite element models are built for a flexural frame, an x-braced frame, and an ODBS-braced frame system. The models are then investigated through various analyses and verified against the experimental results for the flexural and x-braced frames. Analytical assessments are performed with three-dimensional finite element modeling. Nonlinear static analysis is used to obtain the performance level and seismic behavior, and the response modification factors are then calculated from each model's pushover curve. In the next phase, the cracks observed in the finite element models, especially in the RC members of all three systems, are evaluated; for the ODBS-braced frame, the crack development is assessed over several time steps. Nonlinear dynamic time-history analysis is carried out on models with different numbers of stories for three earthquake accelerograms: El Centro, Naghan, and Tabas. After scaling the accelerograms, dynamic analysis is performed on the flexural frame, the x-braced frame, and the ODBS-braced frame in turn, and a reference point on the RC frame is used to trace the relative displacement under each record. Hysteresis curves are assessed in the course of the study, and the equivalent viscous damping of the ODBS system is estimated according to the references. The results in each section show that the ODBS system exhibits acceptable seismic behavior, and the conclusions converge when the ODBS system is used in a reinforced concrete frame.

Keywords: FEM, seismic behaviour, pushover analysis, geometric nonlinearity, time history analysis, equivalent viscous damping, passive control, crack investigation, hysteresis curve

Procedia PDF Downloads 377
723 Savinglife®: An Educational Technology for Basic and Advanced Cardiovascular Life Support

Authors: Naz Najma, Grace T. M. Dal Sasso, Maria de Lourdes de Souza

Abstract:

The development of information and communication technologies and the accessibility of mobile devices have increased the possibilities of teaching and learning anywhere and anytime. Mobile and web applications allow the production of constructive teaching and learning models in various educational settings, showing the potential for active learning in nursing. The objective of this study was to present the development of an educational technology (Savinglife®, an app) for learning cardiopulmonary resuscitation and advanced cardiovascular life support training. Savinglife® is a technological production based on the concept of virtual learning and a problem-based learning approach. The study was developed from January 2016 to November 2016, using the five phases (analyze, design, develop, implement, evaluate) of the instructional systems development process. The technology presents 10 scenarios and 12 simulations, covering different aspects of basic and advanced cardiac life support. The contents can be accessed in a non-linear way, leaving the students free to build their knowledge based on their previous experience. Each scenario is presented through interactive tools such as scenario description, assessment, diagnosis, intervention, and reevaluation. Animated ECG rhythms, text documents, images, and videos are provided to support procedural and active learning considering real-life situations. Accessible equally on small to large devices, with or without an internet connection, Savinglife® offers a dynamic, interactive, and flexible tool, placing students at the center of the learning process. Savinglife® can contribute to the student's learning in the assessment and management of basic and advanced cardiac life support in a safe and ethical way.

Keywords: problem-based learning, cardiopulmonary resuscitation, nursing education, advanced cardiac life support, educational technology

Procedia PDF Downloads 301
722 A Hybrid Block Multistep Method for Direct Numerical Integration of Fourth Order Initial Value Problems

Authors: Adamu S. Salawu, Ibrahim O. Isah

Abstract:

Direct solutions to several forms of fourth-order ordinary differential equations are not easily obtained without first reducing them to a system of first-order equations. Thus, numerical methods are being developed, building on techniques in the literature, that seek to approximate some classes of fourth-order initial value problems within admissible error bounds. Multistep methods offer the great advantage of ease of implementation, but with the setback of several function evaluations at every stage of implementation. Hybrid methods, however, conventionally attain a slightly higher order of truncation than any k-step linear multistep method, with the possibility of obtaining solutions at off-mesh points within the interval of solution. In light of the foregoing, we propose the continuous form of a hybrid multistep method with a Chebyshev polynomial as basis function for the numerical integration of fourth-order initial value problems of ordinary differential equations. The basis function is interpolated and collocated at some points on the interval [0, 2] to yield a system of equations, which is solved to obtain the unknowns of the approximating polynomial. The continuous form obtained, together with its first and second derivatives, is evaluated at carefully chosen points to obtain the proposed block method needed to directly approximate fourth-order initial value problems. The method is analyzed for convergence and implemented in numerical experiments on some test problems. The outcome suggests that the method performs well on problems with oscillatory or trigonometric terms, since the approximations at several points on the solution domain did not deviate far from the theoretical solutions. The method also shows better performance than an existing hybrid method when implemented on a larger interval of solution.
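To illustrate the flavor of the approach, here is a Python sketch of plain Chebyshev collocation for y'''' = f(x) on [0, 2] with four initial conditions. It is not the authors' hybrid block method, only the underlying interpolate-and-collocate idea; the function names and polynomial degree are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solve_ivp4_chebyshev(f, y0, deg, interval=(0.0, 2.0)):
    """Collocation sketch for y'''' = f(x), with y(0), y'(0), y''(0), y'''(0)
    given in `y0`, using a Chebyshev basis mapped from [-1, 1]."""
    a, b = interval
    n = deg + 1
    # Chebyshev collocation nodes: n - 4 ODE conditions plus 4 initial conditions
    t = np.cos(np.pi * (np.arange(n - 4) + 0.5) / (n - 4))
    x = 0.5 * (b - a) * (t + 1) + a
    scale = 2.0 / (b - a)                      # dt/dx for the affine map
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for j in range(n):
        c = np.zeros(n); c[j] = 1.0
        for d in range(4):                     # initial-condition rows at x = a (t = -1)
            A[d, j] = C.chebval(-1.0, C.chebder(c, d)) * scale ** d
        A[4:, j] = C.chebval(t, C.chebder(c, 4)) * scale ** 4   # collocation rows
    rhs[:4] = y0
    rhs[4:] = f(x)
    return np.linalg.solve(A, rhs), interval

# Test problem: y'''' = 1 with zero initial data; exact solution y = x^4 / 24
coef, (a, b) = solve_ivp4_chebyshev(lambda x: np.ones_like(x), [0, 0, 0, 0], deg=6)

def y(xv, coef=coef):
    t = 2 * (np.asarray(xv) - a) / (b - a) - 1
    return C.chebval(t, coef)
```

Since the exact solution is a degree-4 polynomial, the collocation solve recovers it to machine precision, which makes a convenient sanity check.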

Keywords: Chebyshev polynomial, collocation, hybrid multistep method, initial value problems, interpolation

Procedia PDF Downloads 120
721 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India

Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah

Abstract:

A Digital Elevation Model (DEM) is elevation data on a virtual grid over the ground. DEMs are used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis, and surveying. For morphometric analysis, the stream flow network plays a very important role. A DEM may lack accuracy and fail to match field data as closely as is required for accurate morphometric analysis. The present study focuses on comparing the Agree method and the conventional shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin, located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the morphometric linear aspects, such as stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow, and constant of channel maintenance; the areal aspects, such as drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio, and shape factor; and the relief aspects, such as relief ratio, gradient ratio, and basin relief, for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets, and the Agree DEM was created from the SRTM data and the stream network. The results obtained were used to compare the two methods in the flat areas.
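The flow-direction step underlying such analyses can be sketched as a D8-style steepest-descent assignment: each cell drains to the neighbour with the steepest downhill slope. This Python sketch is illustrative only (GIS packages implement this natively, and the Agree method additionally "burns" the digitized stream network into the DEM before deriving directions):

```python
import numpy as np

# 8 neighbour offsets; diagonals use a distance of sqrt(2) in the slope.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_direction(dem):
    """Return, for each cell, the index in OFFSETS of its steepest-descent
    neighbour, or -1 for a pit (no lower neighbour)."""
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1)
    for i in range(rows):
        for j in range(cols):
            best, best_k = 0.0, -1
            for k, (di, dj) in enumerate(OFFSETS):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    slope = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if slope > best:
                        best, best_k = slope, k
            direction[i, j] = best_k
    return direction

# Toy 3x3 DEM sloping toward the bottom-right corner
dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 5.0, 6.0],
                [7.0, 4.0, 1.0]])
dirs = d8_flow_direction(dem)
```

Here the centre cell drains diagonally to the lowest corner, and the corner itself is a pit.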

Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method

Procedia PDF Downloads 235
720 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer

Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin

Abstract:

The tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs), which are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an "eyeballing" method on the H&E-stained slide for sTILs assessment. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and counts only sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus provide highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained and IHC (CD4+ and CD8+)-stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021, with no history of other cancer and with tissue blocks available, were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol.
Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with rabbit monoclonal CD8 antibody (SP16) and rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitize the tissue image at a pixel resolution of 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014, with the result displayed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aim to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data, for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E- and IHC (CD4+ and CD8+)-stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring, which correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications.

Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes

Procedia PDF Downloads 112
719 Discriminant Shooting-Related Statistics between Winners and Losers 2023 FIBA U19 Basketball World Cup

Authors: Navid Ebrahmi Madiseh, Sina Esfandiarpour-Broujeni, Rahil Razeghi

Abstract:

Introduction: Quantitative analysis of game-related statistical parameters is widely used to evaluate basketball performance at both the individual and team levels. Non-free-throw shooting plays a crucial role as the primary scoring method, holding significant importance in the game's technical aspect. The predictive value of game-related statistics in relation to various contextual and situational variables has been explored, and many similarities and differences have been found between different age groups and levels of competition. For instance, in the World Basketball Championships after the 2010 rule change, 2-point field goals distinguished winners from losers in women's games but not in men's games, and the impact of successful 3-point field goals on women's games was minimal. The study aimed to identify and compare discriminant shooting-related statistics between winning and losing teams in the men's and women's FIBA-U19-Basketball-World-Cup-2023 tournaments. Method: Data from 112 observations (2 per game) of 16 teams (for each gender) in the FIBA-U19-Basketball-World-Cup-2023 were selected as samples. The data were obtained from the official FIBA website using Python. Specific information was extracted and organized into a DataFrame of twelve variables, including shooting percentages, attempts, and scoring ratios for 3-pointers, mid-range shots, paint shots, and free throws:

(1) Made% = scoring type successful attempts / scoring type total attempts
(2) Free-throw-pts% (free throw score ratio) = (free throw score / total score) × 100
(3) Mid-pts% (mid-range score ratio) = (mid-range score / total score) × 100
(4) Paint-pts% (paint score ratio) = (paint score / total score) × 100
(5) 3p-pts% (three-point score ratio) = (three-point score / total score) × 100

Independent t-tests were used to examine significant differences in shooting-related statistical parameters between winning and losing teams for both genders. Statistical significance was set at p < 0.05.
All statistical analyses were completed with SPSS, Version 18. Results: The results showed that 3p-made%, mid-pts%, paint-made%, paint-pts%, mid-attempts, and paint-attempts differed significantly between winners and losers in men (t=-3.465, P<0.05; t=3.681, P<0.05; t=-5.884, P<0.05; t=-3.007, P<0.05; t=2.549, P<0.05; t=-3.921, P<0.05). For women, significant differences between winners and losers were found for 3p-made%, 3p-pts%, paint-made%, and paint-attempts (t=-6.429, P<0.05; t=-1.993, P<0.05; t=-4.115, P<0.05; t=2.451, P<0.05). Discussion: The research aimed to compare shooting-related statistics between winners and losers in men's and women's teams at the FIBA-U19-Basketball-World-Cup-2023. Results indicated that men's winners excelled in 3p-made%, paint-made%, paint-pts%, paint-attempts, and mid-attempts, consistent with previous studies. This study found that losers in men's teams had a higher mid-pts% than winners, which is inconsistent with previous findings; it has been suggested that winners tend to prioritize statistically efficient shots while forcing the opponent to take mid-range shots. In women's games, significant differences in 3p-made%, 3p-pts%, paint-made%, and paint-attempts were observed, indicating that winners relied on riskier outside scoring strategies. Overall, winners exhibited higher accuracy in paint and 3P shooting than losers, but they also relied more on outside offensive strategies. Additionally, winners acquired a higher ratio of their points from 3P shots, which demonstrates their confidence in their skills and willingness to take risks at this competitive level.
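The score-ratio formulas and the independent t-test used above can be sketched in Python as follows. The team values below are hypothetical, not tournament data, and the t statistic uses the standard pooled-variance formula (equal variances assumed):

```python
import math

def score_ratio(component_points, total_points):
    """Scoring ratio, e.g. paint-pts% = paint points / total points * 100."""
    return component_points / total_points * 100

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical paint-made% values for winners vs. losers (illustrative only)
winners = [58.0, 61.5, 55.2, 63.0, 59.8]
losers = [48.3, 51.0, 46.7, 52.4, 49.9]
t = independent_t(winners, losers)
```

In practice the t statistic would be compared against the t distribution with na + nb - 2 degrees of freedom, which is what SPSS reports as the two-tailed p-value.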

Keywords: gender, losers, shoot-statistic, U19, winners

Procedia PDF Downloads 94
718 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach

Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier

Abstract:

The main purpose of time series analysis is to learn about the dynamics behind time-ordered measurement data. Two approaches are used in the literature to better understand the dynamics contained in observed data sequences. The first concerns time series decomposition, an important analysis step allowing patterns and behaviors to be extracted as components, providing insight into the mechanisms producing the time series. In many cases, time series are short, noisy, and non-stationary. To provide components that are physically meaningful, methods such as Empirical Mode Decomposition (EMD), the Empirical Wavelet Transform (EWT), or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping the time series into a set of lag vectors in Rᵐ using the method of delays (MOD). Takens proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), a new adaptive method for decomposing non-linear and non-stationary time series into narrow-banded components. The method originates from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to constitute a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method is then described in detail and applied to experimental time series of total column ozone measurements.
The results obtained will be compared with those provided by the previously mentioned decomposition methods. We will also compare the reconstruction qualities of the observed dynamics obtained from the SSD and MOD methods.
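The embedding-plus-decomposition idea shared by SSA and SSD can be sketched as basic SSA in Python: build the trajectory (Hankel) matrix, take its SVD, and reconstruct each component by anti-diagonal averaging. This is plain SSA on a synthetic noisy sine, not the adaptive band selection of SSD:

```python
import numpy as np

def ssa_components(x, window, n_comp):
    """Basic SSA: embed the series in a trajectory (Hankel) matrix, take the
    SVD, and reconstruct rank-one components by anti-diagonal averaging."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for j in range(n_comp):
        Xj = s[j] * np.outer(U[:, j], Vt[j])                   # rank-one term
        c = np.zeros(n)
        counts = np.zeros(n)
        for a in range(window):
            for b in range(k):
                c[a + b] += Xj[a, b]                           # accumulate anti-diagonals
                counts[a + b] += 1
        comps.append(c / counts)                               # diagonal averaging
    return comps

t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=200)
c1, c2 = ssa_components(x, window=40, n_comp=2)
```

A pure oscillation shows up as a pair of leading singular components, so their sum closely reconstructs the underlying sine.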

Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis

Procedia PDF Downloads 102
717 Assessing the Accessibility to Primary Percutaneous Coronary Intervention

Authors: Tzu-Jung Tseng, Pei-Hsuen Han, Tsung-Hsueh Lu

Abstract:

Background: Ensuring that patients with ST-elevation myocardial infarction (STEMI) can reach hospitals able to perform percutaneous coronary intervention (PCI) in time is an important concern of healthcare managers. One commonly used method to assess the coverage of population access to a PCI hospital is the GIS-estimated straight-line ("crow's fly") distance between the district centroid and the nearest PCI hospital. If the distance is within a given threshold (such as 20 km), the entire population of that district is considered to have appropriate access to PCI. The premise of using the district centroid to estimate coverage for the population resident in that district is that people are evenly distributed within the district. In reality, population density is not evenly distributed within an administrative district, especially in rural districts. Fortunately, the Taiwan government recently released basic statistical areas (on average 450 residents each), which provide an opportunity to estimate the coverage of population access to PCI services more accurately. Objectives: We aimed in this study to compare the population covered by a given PCI hospital according to traditional administrative districts versus basic statistical areas. We further examined whether the difference between the two geographic units is larger in rural areas than in urban areas. Method: We selected two hospitals in Tainan City for this analysis: hospital A in an urban area and hospital B in a rural area. The population in each traditional administrative district and basic statistical area was obtained from the Taiwan National Geographic Information System, Ministry of the Interior. Results: The estimated population living within 20 km of hospitals A and B was 1,515,846 and 323,472, respectively, according to traditional administrative districts, and 1,506,325 and 428,556 according to basic statistical areas.
Conclusion: In the urban area, the estimated population with access to PCI services was similar between the two geographic units. In the rural area, however, the access population would be misestimated (here, underestimated by the district-based method).
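The coverage computation can be sketched in Python with a haversine distance as a stand-in for the GIS-estimated crow's-fly distance; the coordinates and populations below are hypothetical, not the Tainan data:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km (a proxy for the straight-line distance)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def covered_population(areas, hospital, radius_km=20.0):
    """Sum the population of every statistical area whose centroid lies
    within `radius_km` of the hospital."""
    return sum(pop for lat, lon, pop in areas
               if haversine_km(lat, lon, *hospital) <= radius_km)

# Hypothetical basic statistical areas: (lat, lon, population)
areas = [(23.00, 120.20, 450), (23.05, 120.25, 430), (23.40, 120.60, 470)]
hospital = (23.00, 120.21)
```

With ~450-person areas instead of whole districts, each small centroid either falls inside or outside the radius, which is what makes the finer geographic unit more accurate.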

Keywords: accessibility, basic statistical area, modifiable areal unit problem (MAUP), percutaneous coronary intervention (PCI)

Procedia PDF Downloads 455
716 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, contribute to thermoregulation, and impart resonance to the voice, among other roles; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, a convenient, robust, and practical tool correlated with the MS volume is needed for clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape.
The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist, using Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. The linear regression showed a strong association and low dispersion between variables, the Bland-Altman analyses showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice, where it may be useful in the diagnosis and treatment determination of MS diseases.
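The region-growing stage (the step that follows SVM seed detection) can be sketched in Python; the actual tool is in Matlab, and this 4-connected, intensity-threshold version on a toy image is illustrative only:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    rows, cols = image.shape
    mask = np.zeros_like(image, dtype=bool)
    base = int(image[seed])
    q = deque([seed])
    mask[seed] = True
    while q:
        i, j = q.popleft()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < rows and 0 <= nj < cols and not mask[ni, nj] \
                    and abs(int(image[ni, nj]) - base) <= tol:
                mask[ni, nj] = True
                q.append((ni, nj))
    return mask

# Toy "slice": a dark region (the sinus) next to brighter tissue
img = np.array([[10, 12, 90],
                [11, 13, 95],
                [80, 85, 92]])
mask = region_grow(img, seed=(0, 0), tol=5)
```

Summing the mask over all slices (times voxel size) is how a per-slice segmentation turns into a volume estimate.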

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 501
715 Climate Variability and Its Impacts on Rice (Oryza sativa) Productivity in Dass Local Government Area of Bauchi State, Nigeria

Authors: Auwal Garba, Rabiu Maijama'a, Abdullahi Muhammad Jalam

Abstract:

Climate variability has affected agricultural production all over the globe, a concern that has motivated important changes in the field of research during the last decade. Climate variability is believed to depress rice production in Nigeria. This study examined climate variability and its impact on rice productivity in Dass Local Government Area, Bauchi State, employing a linear trend model (LTM), analysis of variance (ANOVA), and regression analysis. Annual seasonal data on the climatic variables temperature (minimum and maximum), rainfall, and solar radiation from 1990 to 2015 were used. The results confirmed that 74.4% of the total variation in rice yield in the study area was explained by changes in these independent variables. A rising mean maximum temperature would reduce rice production, while a moderate increase in mean minimum temperature would be advantageous to it, and a persistent rise in the mean maximum temperature will, in the long run, affect rice production more negatively. It is, therefore, important to promote agro-meteorological advisory services, which will be useful for farm planning and yield sustainability. Closer collaboration between meteorologists and agricultural scientists is needed to increase awareness of the existing databases and crop-weather models, among others, with a view to reaping the full benefits of research on specific problems and sustainable yield management. There should also be a special initiative by the ADPs (State Agricultural Development Programmes) to promote best agricultural practices that are resilient to climate variability in rice production and yield sustainability.
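The regression step, yield regressed on the four climatic variables with R² giving the share of variation explained, can be sketched in Python on synthetic data (the actual series are not reproduced here, so the coefficients below are illustrative):

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns coefficients and R^2."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot

# Hypothetical yearly data: columns [t_min, t_max, rainfall, radiation] vs. rice yield
rng = np.random.default_rng(2)
X = rng.normal(size=(26, 4))                 # 26 years, as in 1990-2015
y = 2.0 + 0.3 * X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, 26)
beta, r2 = fit_ols(X, y)
```

The fitted R² plays the role of the 74.4% figure in the abstract: the fraction of yield variation attributable to the climatic regressors.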

Keywords: climate variability, impact, productivity, rice

Procedia PDF Downloads 99
714 Seismic Protection of Automated Stocker System by Customized Viscous Fluid Dampers

Authors: Y. P. Wang, J. K. Chen, C. H. Lee, G. H. Huang, M. C. Wang, S. W. Chen, Y. T. Kuan, H. C. Lin, C. Y. Huang, W. H. Liang, W. C. Lin, H. C. Yu

Abstract:

The hi-tech industries in the Science Park in southern Taiwan were heavily damaged by a strong earthquake in early 2016. The financial loss in this event was attributed primarily to the automated stocker system handling fully processed products, and recovery of the automated stocker system from the aftermath accounted for a major share of the lead time. Development of effective means to protect stockers against earthquakes has therefore become the highest priority for risk minimization and business continuity. This study proposes to mitigate the seismic response of the stockers by introducing viscous fluid dampers between the ceiling and the top of the stockers: with a passive control force on top, the stocker is expected to vibrate less violently. A linear damper is considered in this application, with an optimal damping coefficient determined from a preliminary parametric study. The damper is small in comparison with those adopted for building or bridge applications. Component tests of the dampers have been carried out to make sure they meet the design requirements, and shake table tests have further been conducted to verify the proposed scheme under realistic earthquake conditions. Encouraging results have been achieved: the seismic responses are reduced by up to 60%, and the FOUPs are prevented from falling off the shelves, as would otherwise occur if left unprotected. The effectiveness of a viscous fluid damper connecting the top of the stocker to the ceiling for seismic control has thus been confirmed. The technique has been adopted by Macronix International Co., LTD for seismic retrofit of existing stockers, and demonstration projects applying the proposed technique are planned for other companies in the display industry as well.

Keywords: hi-tech industries, seismic protection, automated stocker system, viscous fluid damper

Procedia PDF Downloads 349
713 Deriving an Index of Adoption Rate and Assessing Factors Affecting Adoption of an Agroforestry-Based Farming System in Dhanusha District, Nepal

Authors: Arun Dhakal, Geoff Cockfield, Tek Narayan Maraseni

Abstract:

This paper attempts to fill a gap in measuring adoption in agroforestry studies. It explains the derivation of an index of adoption rate in a Nepalese context and examines the factors affecting adoption of an agroforestry-based land management practice (AFLMP) in the Dhanusha District of Nepal. Data on the different farm practices were collected during focus group discussions, and data on the bio-physical and socio-economic factors influencing adoption were collected from randomly selected households using a survey questionnaire. A multivariate regression model was used to determine the factors. The variables found to significantly affect adoption of AFLMP were: farm size, availability of irrigation water, education of household heads, agricultural labour force, expenditure on farm input purchases, frequency of visits by extension workers, the household's experience in agroforestry, and distance from home to government forest. The regression model explained about 75% of the variation in the adoption decision. The model rejected 'erosion hazard', 'flood hazard' and 'gender' as determinants of adoption, although in the case of a single agroforestry practice these were major variables and played a positive role. Of the eight variables, farm size played the most powerful role in explaining the variation in adoption, followed by availability of irrigation water and education of household heads. The results of this study suggest that policies promoting the provision of irrigation water and extension services, and motivating the attainment of higher education, would probably provide the incentive to adopt agroforestry elsewhere in the terai of Nepal.
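The multivariate regression step described above can be sketched in a few lines. The predictors and data here are synthetic stand-ins (the study's actual survey data and index construction are not reproduced); the point is only the mechanics of fitting the model and reading off the share of variation explained:

```python
import numpy as np

# Illustrative fit of a multivariate linear model for an adoption index.
# Predictors and coefficients are assumptions for demonstration only.
rng = np.random.default_rng(0)
n = 100
farm_size = rng.uniform(0.5, 5.0, n)    # hypothetical farm size, ha
irrigation = rng.integers(0, 2, n)      # hypothetical: irrigation available?
education = rng.uniform(0, 12, n)       # hypothetical: years of schooling

X = np.column_stack([np.ones(n), farm_size, irrigation, education])
y = 0.1 * farm_size + 0.2 * irrigation + 0.03 * education \
    + rng.normal(0, 0.05, n)            # synthetic adoption index

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS coefficients
resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()        # share of variation explained
print(round(r2, 2))
```

In the study itself the model explained about 75% of the variation; here the R-squared is whatever the synthetic noise level produces.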

Keywords: agroforestry, adoption index, determinants of adoption, step-wise linear regression, Nepal

Procedia PDF Downloads 498
712 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement

Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale

Abstract:

In this paper, the falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercal back-calculation computer programme, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from shear stresses and strains (relative horizontal displacements) at the interfaces of the pavement layers, calculated with the BISAR 3.0 pavement computer programme. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From non-destructive FWD testing, the bonding condition of the pavement layers was quantified from the soundly based principles of Goodman's constitutive model, producing the shear reaction modulus (Ks), which gives an indication of the bonding state of the pavement layers. Furthermore, the Tack coat Failure Ratio (TFR), which has long been used in pavement evaluation in the USA, was also applied in the study to validate the results. According to research [39], the interface condition between two asphalt layers is characterized by the TFR, defined as the ratio of the stiffness of the top asphalt layer to the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. The TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that a TFR greater than or equal to 1 indicates a state of full bond, while a TFR of 0 means full slip. The calculations gave a TFR value of 1.81, which confirmed that the pavement under study was in a state of full bond, since the value was greater than 1. It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
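The TFR check described above reduces to a ratio of back-calculated layer stiffnesses and a threshold test. A minimal sketch, with illustrative moduli chosen to reproduce the TFR of 1.81 reported in the abstract (the actual back-calculated values are not given):

```python
# Hedged sketch of the Tack coat Failure Ratio (TFR) criterion:
# TFR = E1 / E2, the stiffness ratio of the two asphalt layers.
def tack_coat_failure_ratio(e1, e2):
    """Ratio of top asphalt layer stiffness to second asphalt layer stiffness."""
    return e1 / e2

def bond_state(tfr):
    """Apply the criterion: TFR >= 1 -> full bond, TFR == 0 -> full slip."""
    if tfr >= 1.0:
        return "full bond"
    if tfr == 0.0:
        return "full slip"
    return "partial bond"

# Illustrative back-calculated moduli in MPa (assumed values).
tfr = tack_coat_failure_ratio(3620.0, 2000.0)
print(round(tfr, 2), bond_state(tfr))  # 1.81 full bond
```

With TFR = 1.81 > 1, the criterion classifies the interface as fully bonded, matching the study's conclusion.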

Keywords: falling weight deflectometer (FWD), backcaluclation, semi-rigid base pavement, shear reaction modulus

Procedia PDF Downloads 512
711 Conception of Increasing the Efficiency of Excavation Shoring by Prestressing Diaphragm Walls

Authors: Mateusz Frydrych

Abstract:

The construction of diaphragm walls as excavation shoring, as well as part of deep foundations, is widely used in geotechnical engineering. Today's design challenges lie in the optimal dimensioning of the cross-section, which is constrained by technological considerations. Also relevant is the optimization and sustainable use of construction materials, including the reduction of the carbon footprint, which is currently a pressing challenge for the construction industry. The author presents a concept for increasing the efficiency of diaphragm wall excavation shoring by using structural prestressing. The author proposes to install prestressing tendons along a non-linear profile within the reinforcement cage. As a result, the bending moment is reduced, which translates into a reduction in the amount of steel needed in the section, a reduction in displacements, and a reduction in cracking of the wall, including the achievement of better tightness. This approach is rarely seen and has not yet been described scientifically in the literature. The author has developed a dynamic numerical model that allows the dimensioning of the cross-section of a prestressed diaphragm wall, as well as the study of wall displacements and cross-sectional forces in any defined computational situation. Numerical software from the Sofistik development environment was used for the study, and the models were validated in the Plaxis software. This is an interesting idea that allows the execution of construction works to be optimized and the required resources to be reduced by using fewer materials and saving time. The author presents the possibilities of a prestressed diaphragm wall using, among others, the example of a diaphragm wall working as a cantilever over the height of two underground floors without additional strutting or stabilization by ground anchors.
This makes the execution of the work more efficient for the contractor and, as a result, cheaper for the investor.
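The moment reduction the concept relies on can be illustrated with the standard counter-moment of an eccentric prestressing force, M_net = M_ext - P * e. This is a generic textbook relation sketched under assumed values, not the author's model:

```python
# Minimal sketch: an eccentric prestressing force P acting at
# eccentricity e produces a counter-moment P * e that offsets the
# external bending moment. All numbers below are assumed.
def net_moment(m_ext_knm, prestress_kn, eccentricity_m):
    """Bending moment remaining after the prestress counter-moment, kN*m."""
    return m_ext_knm - prestress_kn * eccentricity_m

# Assumed: 800 kN*m external moment, 2000 kN tendon force, 0.25 m eccentricity.
print(net_moment(800.0, 2000.0, 0.25))  # 300.0 kN*m
```

A smaller design moment is what permits less reinforcing steel in the section and smaller displacements, as claimed above; the non-linear tendon profile lets the counter-moment follow the moment envelope along the wall height.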

Keywords: prestressed diaphragm wall, Plaxis, Sofistik, innovation, FEM, optimisation

Procedia PDF Downloads 72