Search results for: dimensional accuracy
1065 Usability Evaluation of Rice Doctor as a Diagnostic Tool for Agricultural Extension Workers in Selected Areas in the Philippines
Authors: Jerome Cayton Barradas, Rowely Parico, Lauro Atienza, Poornima Shankar
Abstract:
Effective agricultural extension is essential in facilitating improvements in various agricultural areas. One way of doing this is through information and communication technologies (ICTs) like Rice Doctor (RD), an app-based diagnostic tool that provides accurate and timely diagnosis and management recommendations for more than 80 crop problems. This study aims to evaluate RD's usability by determining the effectiveness, efficiency, and user satisfaction of RD in making an accurate and timely diagnosis. It also aims to identify other factors that affect RD usability. This was done by comparing RD with two other diagnostic methods: visual identification-based diagnosis and reference-guided diagnosis. The study was implemented in three rice-producing areas and involved 96 extension workers. Respondents accomplished a self-administered survey and participated in group discussions. The data collected were then subjected to qualitative and quantitative analysis. Most of the respondents were satisfied with RD and believed that references are needed to assure the accuracy of diagnosis. The majority found it efficient and easy to use. Some found it confusing and complicated, but this was due to their unfamiliarity with RD. Most users were also able to achieve an accurate diagnosis, demonstrating the tool's effectiveness. Lastly, although users have reservations, they are satisfied and open to using RD. The study also revealed the importance of visual identification skills in using RD and the need for capacity development and improved access to RD devices. From these results, the following are recommended to improve RD usability: review and upgrade the diagnostic keys, further expand RD content, initiate capacity development for AEWs, and prepare and implement an RD communication plan.
Keywords: agricultural extension, crop protection, information and communication technologies, rice doctor
Procedia PDF Downloads 258
1064 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word counts.
An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method's utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least at a relatively general level.
Keywords: fMRI, machine learning, meta-analysis, text analysis
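The voxel-as-vector technique described in this abstract can be sketched in a few lines. The toy four-dimensional word vectors below are hypothetical stand-ins for a semantic space learned from study texts (real embedding spaces would have far more dimensions and vocabulary):

```python
import numpy as np

def normalize(v):
    """Scale a vector to unit length (zero vectors are left unchanged)."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# Hypothetical word embeddings standing in for a learned semantic space.
word_vectors = {
    "vision": np.array([0.9, 0.1, 0.0, 0.0]),
    "visual": np.array([0.8, 0.2, 0.0, 0.1]),
    "reward": np.array([0.0, 0.9, 0.3, 0.0]),
    "social": np.array([0.1, 0.2, 0.9, 0.2]),
}

def voxel_vector(words_in_activating_studies):
    """A voxel is the normalized sum of the word vectors from all
    studies reporting activation in that voxel."""
    total = np.sum([word_vectors[w] for w in words_in_activating_studies], axis=0)
    return normalize(total)

def similarity(voxel_vec, terms):
    """Cosine similarity between a voxel vector and the (averaged)
    term vector(s) of interest."""
    term_vec = normalize(np.mean([word_vectors[t] for t in terms], axis=0))
    return float(np.dot(normalize(voxel_vec), term_vec))

# A voxel activated mostly by vision-related studies sits near "vision"
# and far from "social" in the semantic space.
v = voxel_vector(["vision", "visual", "visual"])
sim_vision = similarity(v, ["vision"])
sim_social = similarity(v, ["social"])
```

Running the same similarity over every voxel in a mask yields the continuous voxel-term association map the abstract describes.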
Procedia PDF Downloads 450
1063 Sensory Ethnography and Interaction Design in Immersive Higher Education
Authors: Anna-Kaisa Sjolund
Abstract:
The doctoral thesis examines interaction design and sensory ethnography as tools to create immersive education environments. In recent years, there has been increasing interest and discussion among researchers and educators on immersive education, such as augmented reality tools and virtual glasses, and on the possibilities of utilizing them in education at all levels. Using virtual devices as learning environments, it is possible to create multisensory learning environments. Sensory ethnography in this study refers to the way the senses are considered in their impact on information dynamics in immersive learning environments. The past decade has seen the rapid development of virtual world research and virtual ethnography. Christine Hine's Virtual Ethnography offers an anthropological explanation of online behavior and communication change. Since her groundbreaking work, however, users' communication styles have changed, and new ways of doing ethnographic research have emerged; virtual reality, with all its new potential and its engagement of all the senses, has come to the fore. Film and image have played an important role in cultural research for centuries; only the focus has changed across times and fields of research. According to Karin Becker, the role of the image in our society is information flow, and she identified two meanings of what research on visual culture is; images and pictures are the artifacts of visual culture. Images can be viewed as a symbolic language that allows digital storytelling. By combining the sense of sight with the other senses, such as hearing, touch, taste, smell, and balance, a virtual learning environment offers students a way to more easily absorb large amounts of information. It also offers teachers different ways to produce study material. This article approaches the core question by using sensory ethnography as a research tool.
Sensory ethnography is used to describe information dynamics in an immersive environment through interaction design. An immersive education environment is understood as a three-dimensional, interactive learning environment in which the audiovisual aspects are central but all senses can be taken into consideration. When designing learning environments, or any digital service, interaction design is always needed. The question of what interaction design is remains justified, because there is no simple or consistent idea of what interaction design is, how it can be used as a research method, or whether it is only a description of practical actions. When discussing immersive learning environments or their construction, consideration should therefore be given to interaction design and sensory ethnography.
Keywords: immersive education, sensory ethnography, interaction design, information dynamics
Procedia PDF Downloads 138
1062 Computation of Residual Stresses in Human Face Due to Growth
Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan
Abstract:
Growth and remodeling of biological structures have gained considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields, such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth and remodeling are among the main sources. Extracting body organs from medical imaging does not produce any information regarding the residual stresses existing in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations, and accounting for them can improve the accuracy of mechanical analysis. In this paper, we implement a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration, which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which can overcome the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress no longer counteracting gravity. Considering these stresses has important applications in maxillofacial surgery: it helps surgeons predict the changes after surgical operations and their consequences.
Keywords: growth, soft tissue, residual stress, finite element method
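The fixed-point idea for recovering a stress-free reference configuration can be illustrated on a toy one-dimensional problem. The forward model below is a hypothetical stand-in for the finite element solve, not the paper's facial-tissue model: the imaged (loaded) geometry is corrected iteratively by the mismatch between the simulated loaded shape and the observation.

```python
def forward(X):
    """Hypothetical forward model: maps a stress-free reference
    configuration X to the loaded (observed) configuration, here a
    toy 1-D nonlinear 'sag under gravity'."""
    return X + 0.05 * X**2

def stress_free_reference(x_observed, n_iter=50):
    """Fixed-point iteration: update the reference guess by the
    difference between the simulated loaded shape and the observation,
    X <- X - (forward(X) - x_observed)."""
    X = x_observed                      # initialize with the imaged geometry
    for _ in range(n_iter):
        X = X - (forward(X) - x_observed)
    return X

x_obs = forward(1.0)                    # synthetic 'medical image' of loaded tissue
X_ref = stress_free_reference(x_obs)    # recovered stress-free reference
```

Because the iteration map is a contraction near the solution, loading the recovered reference configuration reproduces the imaged geometry, which is the property the framework relies on.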
Procedia PDF Downloads 356
1061 An Investigation of the High-Frequency Isolation Performance of Quasi-Zero-Stiffness Vibration Isolators
Authors: Chen Zhang, Yongpeng Gu, Xiaotian Li
Abstract:
Quasi-zero-stiffness (QZS) vibration isolation technology, which enables ultra-low-frequency vibration isolation, has garnered significant attention in both academia and industry. In modern industries such as shipbuilding and aerospace, rotating machinery generates vibrations over a wide frequency range, imposing stringent requirements on vibration isolation technologies: they must not only achieve ultra-low starting isolation frequencies but also provide effective isolation across mid- to high-frequency ranges. However, existing research on QZS vibration isolators primarily focuses on frequency ranges below 50 Hz. Moreover, studies have shown that in the mid- to high-frequency ranges, QZS isolators tend to generate resonance peaks that adversely affect their isolation performance. This limitation significantly restricts the practical applicability of QZS isolation technology. To address this issue, the present study investigates the high-frequency isolation performance of two typical QZS isolators: the three-spring QZS isolator (a mechanism type) and the bowl-shaped QZS isolator (a structure type). First, the parameter conditions required to achieve quasi-zero-stiffness characteristics for the two isolators are derived based on static mechanical analysis. The theoretical transmissibility characteristics are then calculated using the harmonic balance method. Three-dimensional finite element models of the two QZS isolators are developed using ABAQUS simulation software, and transmissibility curves are computed for the 0-500 Hz frequency range. The results indicate that the three-spring QZS mechanism exhibits multiple higher-order resonance peaks in the mid- to high-frequency ranges due to the higher-order modes of the springs. Springs with fewer coils and larger diameters can shift these higher-order modes to higher frequencies but cannot entirely eliminate them.
In contrast, the bowl-shaped QZS isolator, through shape optimization using a spline-based representation, effectively mitigates the generation of higher-order resonance modes, resulting in superior isolation performance in the mid- to high-frequency ranges. This study provides essential theoretical insights for optimizing the vibration isolation performance of QZS technologies in complex, wide-frequency vibration environments, offering significant practical value for their application.
Keywords: quasi-zero-stiffness, wide-frequency vibration, vibration isolator, transmissibility
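For orientation, the transmissibility of a plain linear single-degree-of-freedom isolator (a textbook baseline, not the QZS harmonic-balance result of this study) shows the general shape such curves take: unity at zero frequency, a resonance peak, and an isolation region at high frequency. The damping ratio below is an assumed value:

```python
import math

def transmissibility(freq_ratio, zeta=0.05):
    """Absolute displacement transmissibility of a linear 1-DOF isolator,
    T = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
    where r is excitation frequency over natural frequency."""
    r2 = freq_ratio**2
    num = 1 + (2 * zeta * freq_ratio)**2
    den = (1 - r2)**2 + (2 * zeta * freq_ratio)**2
    return math.sqrt(num / den)

static = transmissibility(0.0)    # unity: no amplification at rest
peak = transmissibility(1.0)      # resonance: large amplification
high = transmissibility(10.0)     # isolation region: T well below 1
```

The QZS design pushes the effective natural frequency toward zero, widening the isolation region; the higher-order spring modes discussed above reintroduce extra peaks on top of this baseline shape.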
Procedia PDF Downloads 12
1060 Automation of Pneumatic Seed Planter for System of Rice Intensification
Authors: Tukur Daiyabu Abdulkadir, Wan Ishak Wan Ismail, Muhammad Saufi Mohd Kassim
Abstract:
Seed singulation and accuracy in seed spacing are the major challenges associated with the adoption of mechanical seeders for the system of rice intensification (SRI). In this research, the metering system of a pneumatic planter was modified and automated for increased precision to meet the demands of SRI. The chain-and-sprocket mechanism of a conventional vacuum planter was replaced with an electromechanical system made up of a set of servo motors, a limit switch, a microcontroller, and a wheel divided into 10 equal angles. The circumference of the planter wheel was determined, based on which the seed spacing was computed and mapped to the angles of the metering wheel. A program was then written and uploaded to an Arduino microcontroller, which automatically turns the seed plates for seeding upon covering the required distance. The servo motor was calibrated with the aid of LabVIEW. The machine was then calibrated using a grease belt, varying the servo speed through voltage variation between 37 rpm and 47 rpm until an optimum value of 40 rpm was obtained at a forward speed of 5 kilometers per hour. A pressure of 1.5 kPa was found to be optimum, under which no skip or double was recorded. Precision in spacing (coefficient of variation), miss index, multiple index, doubles, and skips were investigated. No skip or double was recorded at either the laboratory or field level. The operational parameters under consideration were evaluated both in the laboratory and in the field. Even though there was little variation between the laboratory and field values of precision in spacing, multiple index, and miss index, the difference is not significant, as both laboratory and field values fall within the acceptable range.
Keywords: automation, calibration, pneumatic seed planter, system of rice intensification
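The relationship between forward speed, metering wheel speed, and seed spacing implied by the reported figures can be checked with a short calculation. The 10-cell wheel, 40 rpm, and 5 km/h are from the abstract; the resulting spacing is derived here, not reported in the study:

```python
def implied_spacing_m(speed_kmh, wheel_rpm, cells_per_rev):
    """Seed spacing that results when a metering wheel with
    cells_per_rev seed cells turns at wheel_rpm while the planter
    moves forward at speed_kmh."""
    speed_m_per_min = speed_kmh * 1000 / 60   # forward travel per minute
    seeds_per_min = wheel_rpm * cells_per_rev # seeds dropped per minute
    return speed_m_per_min / seeds_per_min    # meters of travel per seed

spacing = implied_spacing_m(speed_kmh=5, wheel_rpm=40, cells_per_rev=10)
```

At these settings the implied spacing is about 0.21 m, which is in the neighborhood of typical SRI hill spacings, consistent with the calibration target.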
Procedia PDF Downloads 645
1059 Natural Gas Flow Optimization Using Pressure Profiling and Isolation Techniques
Authors: Syed Tahir Shah, Fazal Muhammad, Syed Kashif Shah, Maleeha Gul
Abstract:
Natural gas has become a relatively clean, high-quality source of energy, recovered from deep wells by expensive drilling activities. The recovered substance is purified by processing in multiple stages to remove unwanted contaminants like dust, dirt, crude oil, and other particles. Gas utilities are mostly concerned with the essential objectives of the quantity and quality of natural gas delivery, the financial outcome, and a safe volumetric inventory of natural gas in the transmission pipeline. Gas quantity and quality are primarily related to standards and advanced metering procedures in processing units and transmission systems, while the financial outcome is defined by the purchase and sale of gas as well as the operational cost of the transmission pipeline. SNGPL (Sui Northern Gas Pipelines Limited) Pakistan operates a natural gas transmission pipeline network of over 9125 km with a wide range of diameters. This research addresses several issues in accuracy and metering procedures, using multiple advanced gadgets to measure gas flow attributes in the transmission system. It also studies the effects of good pressure management in the transmission gas pipeline network, with a view to boosting the gas volume stored in the existing network and ultimately curbing gas losses, i.e., unaccounted-for gas (UFG), for financial benefit. Furthermore, based on the results and their observation, it is recommended to enhance the maximum allowable operating pressure (MAOP) of the system from the current roughly 900 psig to 1235 psig, such that the capacity of the network can be fully utilized. Overall, the results depict that the current model is very efficient and provides excellent results in the minimum possible time.
Keywords: natural gas, pipeline network, UFG, transmission pack, AGA
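The linepack benefit of the proposed MAOP uprating can be estimated with a back-of-the-envelope calculation, treating temperature and the compressibility factor Z as unchanged between the two pressures (a simplifying assumption not made in the study itself):

```python
P_STD_PSIA = 14.7   # assumed standard/atmospheric pressure, psia

def linepack_ratio(p_old_psig, p_new_psig):
    """Ratio of standard-volume gas stored in the same pipeline at two
    operating pressures: for fixed geometry, temperature, and Z, the
    stored standard volume scales with absolute pressure."""
    return (p_new_psig + P_STD_PSIA) / (p_old_psig + P_STD_PSIA)

# The uprating considered in the study: ~900 psig to 1235 psig.
gain = linepack_ratio(900, 1235)
```

Under these assumptions the uprating would increase stored linepack by roughly 37%, which illustrates why pressure management is central to both inventory and UFG objectives; a real estimate would re-evaluate Z at each pressure.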
Procedia PDF Downloads 95
1058 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue
Authors: Rachel Y. Zhang, Christopher K. Anderson
Abstract:
A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including k-nearest-neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performances of the above approaches to forecasting hotel demand are illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for predicting a hotel's revenue given the basic characteristics of the property, or can be applied in performance evaluation for an existing hotel. The findings will unveil the features that play key roles in a hotel's revenue performance, which would have considerable potential usefulness in both revenue prediction and evaluation.
Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine
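As a minimal sketch of the k-nearest-neighbors approach listed above, the following forecasts demand from hypothetical price/review-score features. The data and feature choice are illustrative, not the proprietary Las Vegas sample:

```python
import numpy as np

def knn_forecast(X_train, y_train, x_query, k=3):
    """k-nearest-neighbors demand forecast: average the demand observed
    under the k most similar historical feature vectors."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(y_train[nearest]))

# Hypothetical history: [price, review score] -> rooms sold.
X_hist = np.array([[100, 4.0], [110, 4.1], [120, 3.9],
                   [200, 4.5], [210, 4.4], [220, 4.6]], dtype=float)
y_hist = np.array([95, 90, 85, 40, 38, 35], dtype=float)

low_price = knn_forecast(X_hist, y_hist, np.array([105.0, 4.0]))
high_price = knn_forecast(X_hist, y_hist, np.array([205.0, 4.5]))
```

Even this toy model reproduces the downward-sloping demand curve; in practice the features would be scaled and k chosen by out-of-sample validation, which is what the benchmarking in the study assesses.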
Procedia PDF Downloads 134
1057 Understanding Chromosome Movement in Starfish Oocytes
Authors: Bryony Davies
Abstract:
Many cell and tissue culture practices ignore the effects of gravity on cell biology, and little is known about how cell components may move in response to gravitational forces. Starfish oocytes provide an excellent model for interrogating the movement of cell components due to their unusually large size, ease of handling, and high transparency. Chromosomes in starfish oocytes can be visualised by microinjection of a histone-H2B-mCherry plasmid into the oocytes. The movement of the chromosomes can then be tracked by live-cell fluorescence microscopy. The results from experiments using these methods suggest that there is a replicable downward movement of centrally located chromosomes at a median velocity of 0.39 μm/min, while chromosomes nearer the nuclear boundary showed more restricted movement. Chromosome density and shape could also be altered by microinjection of restriction enzymes, primarily AluI, before imaging. This was found to alter the speed of chromosome movement, with chromosomes from AluI-injected nuclei showing a median downward velocity of 0.60 μm/min. Overall, these results suggest that there is a non-negligible movement of chromosomes in response to gravitational forces and that this movement can be altered by enzyme activity. Future directions based on these results could interrogate whether this observed downward movement extends to other cell components and to other cell types. Additionally, it may be important to understand whether the gravitational orientation and vertical positioning of cell components alter cell behaviour. The findings here may have implications for current cell culture practices, which do not replicate the cell orientations or external forces experienced in vivo. It is possible that a failure to account for gravitational forces in 2D cell culture alters experimental results and the accuracy of conclusions drawn from them.
Understanding possible behavioural changes in cells due to the effects of gravity would therefore be beneficial.
Keywords: starfish, oocytes, live-cell imaging, microinjection, chromosome dynamics
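The median-velocity measure reported above can be computed from tracked positions as follows; the track below is hypothetical, chosen only to illustrate the calculation, not taken from the study's imaging data:

```python
import numpy as np

# Hypothetical track: vertical position (um) of one chromosome sampled
# every 2 minutes; increasing values mean downward movement.
depth_um = np.array([0.0, 0.8, 1.5, 2.4, 3.1, 3.9])
dt_min = 2.0

# Per-interval velocity, then the median, which is robust to the
# occasional jump or tracking glitch.
velocities = np.diff(depth_um) / dt_min   # um/min for each interval
median_v = float(np.median(velocities))
```

Applying the same reduction to every tracked chromosome, and splitting tracks by starting position (central versus near the nuclear boundary), yields the population statistics the abstract reports.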
Procedia PDF Downloads 104
1056 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
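One plausible reading of the PoCE KPI (the paper's exact definition may differ) is the share of features whose attributed contribution agrees in sign with the model's known coefficient. A minimal sketch under that assumption:

```python
def poce(true_coefs, attributions):
    """Percentage of Correct Explanations (illustrative definition):
    share of features for which the explanation's attribution agrees
    in sign with the model's known coefficient."""
    agree = sum(1 for c, a in zip(true_coefs, attributions)
                if (c > 0) == (a > 0))
    return 100.0 * agree / len(true_coefs)

# Known (multilevel) model coefficients vs hypothetical SHAP-style
# attributions; the third feature's attributed sign is wrong.
coefs = [1.2, -0.7, 0.3, -2.1]
attr = [0.9, -0.5, -0.1, -1.8]
score = poce(coefs, attr)
```

A traffic-light scheme such as the one the framework couples to its KPIs could then map the score to green/amber/red bands against agreed thresholds.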
Procedia PDF Downloads 95
1055 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work aims to resolve this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, the MIT-BIH AF Database, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and four specific features were then calculated. Two pattern recognition methods, Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms do, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
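A minimal sketch of such a pipeline: features of a 1-min RR window fed to a nearest-centroid classifier, the prototype-based scheme that LVQ iteratively refines. The feature set and synthetic rhythms below are illustrative, not the study's four features or the clinical databases:

```python
import numpy as np

def rr_features(rr):
    """Simple descriptors of a 1-min RR-interval window (seconds):
    mean, standard deviation, RMSSD, and the proportion of successive
    differences exceeding 50 ms (pNN50)."""
    diffs = np.diff(rr)
    rmssd = np.sqrt(np.mean(diffs**2))
    pnn50 = np.mean(np.abs(diffs) > 0.05)
    return np.array([np.mean(rr), np.std(rr), rmssd, pnn50])

def classify(x, centroids):
    """Assign the window to the class of the nearest prototype; LVQ
    starts from such prototypes and then refines their positions."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[lab]) for lab in labels]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(0)
nsr = rng.normal(0.80, 0.02, 50)   # regular sinus-like RR intervals
af = rng.normal(0.70, 0.15, 50)    # highly irregular AF-like intervals
centroids = {"NSR": rr_features(nsr), "AF": rr_features(af)}

test_window = rng.normal(0.80, 0.02, 50)
label = classify(rr_features(test_window), centroids)
```

The irregularity features (RMSSD, pNN50) separate the two rhythm types by a wide margin, which is why even very cheap arithmetic suffices for telecare-grade detection.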
Procedia PDF Downloads 269
1054 The Effects of Acute Physical Activity on Measures of Inhibition in Pre-School Children
Authors: Antonia Stergiou
Abstract:
Background: Due to the developmental trajectory of executive function in the preschool years, the majority of existing studies investigating the association between acute physical activity and cognitive control have focused on adolescent and adult populations. Aim: The aim of this study was to investigate the possible effects of physical activity on the inhibitory control of pre-school children. Methods: This is a prospectively designed study conducted in a primary school in Bristol in June 2015. The total number of subjects was n=61, and 20 trials of a modified Eriksen flanker task were completed before and after a 30-minute session of moderate exercise (including 5 minutes each of warm-up and cool-down). The pre- and post-test assessments included both congruent and incongruent trials; the congruent trials were considered the control condition and the incongruent trials the condition measuring inhibitory control (experimental condition). At the end of the assessment, the participants were instructed to choose the face that described their current feelings from three options (happy, neutral, sad). Results: There was a trend toward increased accuracy following moderate exercise, but it did not reach statistical significance (p > .05). However, there was a statistically significant improvement in reaction time following the same type of exercise (p = .005). The face-board assessment revealed positive emotions after 30 minutes of moderate exercise. Conclusions: The current study supports findings from previous studies on the benefits of physical activity for children's inhibitory control and provides evidence of those benefits at even younger ages. Further research should consider each child individually. Implementation of these findings could result in an improved school curriculum with additional time spent on physical education.
Keywords: cognitive control, inhibition, physical activity, pre-school children
Procedia PDF Downloads 257
1053 Classification of Emotions in Emergency Call Center Conversations
Authors: Magdalena Igras, Joanna Grzybowska, Mariusz Ziółko
Abstract:
A study of the emotions expressed in emergency phone calls is presented, covering both a statistical analysis of emotion configurations and an attempt to classify emotions automatically. An emergency call is a situation usually accompanied by intense, authentic emotions, which influence (and may inhibit) the communication between caller and responder. In order to support responders in their responsible and psychologically exhausting work, we studied when and in which combinations emotions appeared in calls. A corpus of 45 hours of conversations (about 3300 calls) from an emergency call center was collected. Each recording was manually tagged with labels of emotion valence (positive, negative, or neutral), type (sadness, tiredness, anxiety, surprise, stress, anger, fury, calm, relief, compassion, satisfaction, amusement, joy), and arousal (weak, typical, varying, high) on the basis of the perceptual judgment of two annotators. As we concluded, basic emotions tend to appear in specific configurations depending on the overall situational context and the attitude of the speaker. After performing statistical analysis, we distinguished four main types of emotional behavior among callers: worry/helplessness (sadness, tiredness, compassion), alarm (anxiety, intense stress), mistake or neutral request for information (calm, surprise, sometimes with amusement), and pretension/insisting (anger, fury). The frequencies of these profiles were 51%, 21%, 18%, and 8% of recordings, respectively. A model presenting the complex emotional profiles on a two-dimensional (tension-insecurity) plane was introduced. In the acoustic analysis stage, a set of prosodic parameters as well as Mel-Frequency Cepstral Coefficients (MFCC) were used. Using these parameters, complex emotional states were modeled with machine learning techniques including Gaussian mixture models, decision trees, and discriminant analysis.
Results of classification with several methods will be presented and compared with the state-of-the-art results obtained for the classification of basic emotions. Future work will include optimization of the algorithm to perform in real time in order to track changes of emotions during a conversation.
Keywords: acoustic analysis, complex emotions, emotion recognition, machine learning
Procedia PDF Downloads 399
1052 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project, and it focuses on the identification of the unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose to identify the model parameters by minimizing a cost function measuring the difference between the experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, provides the unknowns of the AWJM model and the optimal values that can be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and the prediction of the surface, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect proper identification of the unknowns.
This approach also gives us the ability to extend the research to more complex cases, considering different types of model and measurement errors as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
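The cost-minimization idea can be illustrated on a toy identification problem. The Gaussian trench shape, the analytic gradient (standing in for the adjoint/TAPENADE-computed gradient), and plain gradient descent are all simplifying assumptions, not the AWJM PDE model itself:

```python
import math

# Synthetic trench profile: depth(x) = a * exp(-b * x^2), a hypothetical
# stand-in for an AWJM trench cross-section with unknowns (a, b).
A_TRUE, B_TRUE = 1.0, 2.0
xs = [i / 10 - 1.0 for i in range(21)]            # x in [-1, 1]
data = [A_TRUE * math.exp(-B_TRUE * x * x) for x in xs]

def cost_and_grad(a, b):
    """Least-squares misfit between model and 'measurements', with its
    analytic gradient (playing the role of the adjoint gradient)."""
    J, ga, gb = 0.0, 0.0, 0.0
    for x, d in zip(xs, data):
        e = math.exp(-b * x * x)
        r = a * e - d                              # pointwise residual
        J += r * r
        ga += 2 * r * e                            # dJ/da
        gb += 2 * r * a * (-x * x) * e             # dJ/db
    return J, ga, gb

a, b, lr = 0.5, 1.0, 0.02                          # initial guess, step size
for _ in range(5000):
    J, ga, gb = cost_and_grad(a, b)
    a -= lr * ga
    b -= lr * gb
```

With noisy data this plain descent would wander, which is where the Tikhonov-style regularization terms mentioned above come in: they add a penalty to J that stabilizes the recovered parameters.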
Procedia PDF Downloads 317
1051 Structural Behavior of Precast Foamed Concrete Sandwich Panel Subjected to Vertical In-Plane Shear Loading
Authors: Y. H. Mugahed Amran, Raizal S. M. Rashid, Farzad Hejazi, Nor Azizi Safiee, A. A. Abang Ali
Abstract:
Experimental and analytical studies were carried out to examine the structural behavior of the precast foamed concrete sandwich panel (PFCSP) under vertical in-plane shear load. Six full-scale PFCSP specimens were fabricated with varying heights to study an important parameter, the slenderness ratio (H/t). The production technique of the PFCSP and the test setup procedure are described. The results obtained from the experimental tests were analysed in terms of in-plane shear strength capacity, load-deflection profile, load-strain relationship, slenderness ratio, shear cracking patterns, and mode of failure. An analytical study using finite element analysis was carried out, and the ultimate in-plane shear strengths were calculated theoretically with the adopted ACI 318 equation for reinforced concrete walls, with the aim of predicting the in-plane shear strength of the PFCSP. A decrease in the slenderness ratio from 24 to 14 produced an increase of 26.51% and 21.91% in the ultimate in-plane shear strength capacity obtained experimentally and in the FEA models, respectively. The experimental test results, FEA model data, and theoretical calculation values were compared and showed significant agreement with a high degree of accuracy. Therefore, on the basis of the results obtained, the PFCSP wall has potential use as an alternative to the conventional load-bearing wall system.
Keywords: deflection curves, foamed concrete (FC), load-strain relationships, precast foamed concrete sandwich panel (PFCSP), slenderness ratio, vertical in-plane shear strength capacity
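The ACI 318-style wall shear expression referred to above is commonly written as Vn = Acv(αc λ √f'c + ρt fy) in psi units; a numerical check with illustrative inputs follows (the values are assumptions for demonstration, not the paper's specimen properties, and applying the equation to foamed concrete is itself the "adopted" step the study evaluates):

```python
import math

def wall_shear_psi(acv_in2, fc_psi, rho_t, fy_psi, alpha_c=2.0, lam=1.0):
    """Nominal in-plane wall shear strength in the ACI 318 form
    Vn = Acv * (alpha_c * lambda * sqrt(f'c) + rho_t * fy), psi units.
    alpha_c = 2.0 corresponds to slender walls (hw/lw >= 2)."""
    return acv_in2 * (alpha_c * lam * math.sqrt(fc_psi) + rho_t * fy_psi)

# Illustrative inputs: 300 in^2 shear area, Grade 60 reinforcement at
# rho_t = 0.25%, and two hypothetical concrete strengths.
v_low = wall_shear_psi(acv_in2=300, fc_psi=2000, rho_t=0.0025, fy_psi=60000)
v_high = wall_shear_psi(acv_in2=300, fc_psi=4000, rho_t=0.0025, fy_psi=60000)
```

As expected, the predicted strength grows with concrete strength, though only through the square-root concrete term; the reinforcement term is unaffected.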
Procedia PDF Downloads 220
1050 Towards a Sustainable High Population Density Urban Intertextuality – Program Re-Configuration Integrated Urban Design Study in Hangzhou, China
Abstract:
By the end of 2014, China had an urban population of 749 million, corresponding to an urbanization rate of 54.77%. A dense and vertical urban structure has become a common choice for China and most of the densely populated Asian countries pursuing sustainable development. This paper focuses on the most conspicuous period of urban change in China, from 2000 to 2010, during which China's population shifted fastest from rural regions to cities. On one hand, the 200 million nationwide "new citizens", along with the 456 million "old citizens", explored the new-century city for a new urban lifestyle and a livable built environment; on the other hand, large-scale rapid urban construction remained confined to the methods of traditional two-dimensional architectural thinking. Human-oriented design and systems thinking have been missing in this intricate postmodern urban condition. This phenomenon, especially the gap and spark between the solid, huge urban physical system and the rich, subtle everyday urban life, is studied in depth: how did a 20th-century high-rise residential building "spontaneously" turn into an old but expensive multi-functional high-rise complex in the 21st-century city center; how did new 21st-century and old late-20th-century public buildings with the same function integrate their different architectural forms into the new or old city center? Finally, the paper studies cases in Hangzhou: 1) Function evolution – the downtown high-rise residential buildings "International Garden" and "Zhongshan Garden" (1999). 2) Form comparison – Hangzhou Theater (1998) vs Hangzhou Grand Theatre (2004), Hangzhou City Railway Station (1999) vs Hangzhou East Railway Station (2013).
The research aims at exploring the essence of the city through the building form dispel and urban program re-configuration approach, gaining a better understanding of human behavior through compact urban design efforts to improve urban intertextuality, and searching for a sustainable development path at the crucial time of urban population explosion in China.
Keywords: architecture form dispel, compact urban design, urban intertextuality, urban program re-configuration
Procedia PDF Downloads 500
1049 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing
Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti
Abstract:
Next-generation sequencing, especially whole genome shotgun sequencing, is becoming a common approach to gain insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives us information about the species composition of an environmental sample but also opens the possibility to detect antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomics samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast, short read aligner programs (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle: each read is assigned to a taxon covering the most significantly hit taxa. This approach helps in balancing between sensitivity and running time. The program was tested both on experimental and synthetic data. The results indicate that our method performs as well as the state-of-the-art BLAST-based ones; furthermore, in some cases, it even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only in small abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104). The majority of the reads (96.50%) were classified as Bacillus anthracis; a small portion, 1.2%, was classified as other species from the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis
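The LCA binning step described in this abstract can be illustrated with a short sketch. The toy taxonomy, read hits, and function names below are assumptions for illustration, not the authors' actual pipeline (which operates on a full reference taxonomy and aligner output):

```python
# Sketch of lowest-common-ancestor (LCA) taxonomic binning: a read aligned
# (e.g., by Bowtie2/BWA) to several taxa is assigned to the deepest taxon
# shared by all of its significant hits.

# Toy taxonomy: child -> parent (the root's parent is None). Illustrative only.
PARENT = {
    "B. anthracis": "Bacillus",
    "B. cereus": "Bacillus",
    "Bacillus": "Bacteria",
    "E. coli": "Bacteria",
    "Bacteria": None,
}

def lineage(taxon):
    """Return the path from a taxon up to the root."""
    path = []
    while taxon is not None:
        path.append(taxon)
        taxon = PARENT[taxon]
    return path

def lca(taxa):
    """Deepest taxon common to every lineage in `taxa`."""
    paths = [lineage(t) for t in taxa]
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    # Walking up from any hit, the first common taxon is the LCA.
    for t in paths[0]:
        if t in common:
            return t

# A read hitting one species stays species-level; a read hitting two Bacillus
# species is pushed up to the genus; a cross-genus read goes to the root.
print(lca(["B. anthracis"]))               # B. anthracis
print(lca(["B. anthracis", "B. cereus"]))  # Bacillus
print(lca(["B. anthracis", "E. coli"]))    # Bacteria
```

This is why ambiguous reads lose resolution gracefully instead of being assigned arbitrarily, which is the balance between sensitivity and specificity the abstract mentions.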
Procedia PDF Downloads 137
1048 Optimum Method to Reduce the Natural Frequency for Steel Cantilever Beam
Authors: Eqqab Maree, Habil Jurgen Bast, Zana K. Shakir
Abstract:
Passive damping, once properly characterized and incorporated into the structure design, is an autonomous mechanism. Passive damping can be achieved by applying layers of a polymeric material, called viscoelastic material (VEM) layers, to the base structure. This type of configuration is known as free or unconstrained layer damping treatment. A shear or constrained damping treatment uses the idea of adding a constraining layer, typically a metal, on top of the polymeric layer. Constrained treatment is a more efficient form of damping than the unconstrained damping treatment. In constrained damping treatment a sandwich is formed with the viscoelastic layer as the core. When the two outer layers experience bending, as they would if the structure were oscillating, they shear the viscoelastic layer and energy is dissipated in the form of heat. This form of energy dissipation allows the structural oscillations to attenuate much faster. The purpose of this study is to predict damping effects using two methods of passive viscoelastic constrained layer damping. The first method is Euler-Bernoulli beam theory, which is commonly used for predicting the vibratory response of beams. The second method is finite element analysis, carried out in ANSYS 14 using two-dimensional solid structural elements, specifically the eight-noded SOLID183 element; its outputs are the damped natural frequency values and mode shapes for the first five modes. This method of passive damping treatment is widely used for structural applications in many industries, such as aerospace and automotive. In this paper, a steel cantilever sandwich beam with a viscoelastic core of type 3M-468 is analyzed using these methods of passive viscoelastic constrained layer damping.
The results also show that the percentage reduction in modal frequency between the undamped and damped 8 mm thick steel sandwich cantilever beam is very high for each mode, due to the effect of the viscoelastic layer on the damped beams. Finally, this type of damped sandwich steel cantilever beam with a viscoelastic core of type 3M-468 is well suited to the automotive industry and many other mechanical applications, because of its high capability to reduce the modal vibration of structures.
Keywords: steel cantilever, sandwich beam, viscoelastic materials core type (3M468), ANSYS14, Euler-Bernoulli beam theory
Procedia PDF Downloads 320
1047 RNA-Seq Analysis of the Wild Barley (H. spontaneum) Leaf Transcriptome under Salt Stress
Authors: Ahmed Bahieldin, Ahmed Atef, Jamal S. M. Sabir, Nour O. Gadalla, Sherif Edris, Ahmed M. Alzohairy, Nezar A. Radhwan, Mohammed N. Baeshen, Ahmed M. Ramadan, Hala F. Eissa, Sabah M. Hassan, Nabih A. Baeshen, Osama Abuzinadah, Magdy A. Al-Kordy, Fotouh M. El-Domyati, Robert K. Jansen
Abstract:
Wild salt-tolerant barley (Hordeum spontaneum) is the ancestor of cultivated barley (Hordeum vulgare or H. vulgare). Although the cultivated barley genome is well studied, little is known about the genome structure and function of its wild ancestor. In the present study, RNA-Seq analysis was performed on young leaves of wild barley treated with salt (500 mM NaCl) at four different time intervals. Transcriptome sequencing yielded 103 to 115 million reads for all replicates of each treatment, corresponding to over 10 billion nucleotides per sample. Of the total reads, between 74.8 and 80.3% could be mapped, and 77.4 to 81.7% of the transcripts were found in the H. vulgare unigene database (unigene-mapped). The unmapped wild barley reads for all treatments and replicates were assembled de novo, and the resulting contigs were used as a new reference genome. This resulted in 94.3 to 95.3% of the unmapped reads mapping to the new reference. The number of differentially expressed transcripts was 9277, 3861 of which were unigene-mapped. The annotated unigene- and de novo-mapped transcripts (5100) were utilized to generate expression clusters across the time course of salt stress treatment. Two-dimensional hierarchical clustering classified the differential expression profiles into nine expression clusters, four of which were selected for further analysis. Differentially expressed transcripts were assigned to the main functional categories. The most important groups were ‘response to external stimulus’ and ‘electron-carrier activity’. Highly expressed transcripts are involved in several biological processes, including electron transport and exchanger mechanisms, flavonoid biosynthesis, reactive oxygen species (ROS) scavenging, ethylene production, signaling networks and protein refolding.
The comparisons demonstrated that mRNA-Seq is an efficient method for the analysis of differentially expressed genes and biological processes under salt stress.
Keywords: electron transport, flavonoid biosynthesis, reactive oxygen species, RNA-Seq
Procedia PDF Downloads 393
1046 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily
Authors: Siming Xie
Abstract:
In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, and it is robustly observed across many contexts and dimensions. The starting point of my research is the observation that the "type" of an agent is not a single exogenous variable. Agents, despite their differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interactions of homophily effects in a model where agents have two-dimensional characteristics (i.e., race and personal hobbies such as basketball, which one either likes or dislikes), with biases in meeting opportunities and in favor of same-type friendships. A novel feature of my model is a matching process with biased meeting probabilities on different dimensions, which helps in understanding the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is a welfare-based matching process for agents with multi-dimensional characteristics. In particular, this research shows that biases in meeting opportunities on one dimension lead to the emergence of homophily on the other dimension. The objective of this research is to determine the pattern of homophily in network formation, which will shed light on our understanding of segregation and its remedies. By constructing a two-dimension matching process, this study explores a method to describe agents' homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by the relative group size, and that society suffers more from social segregation when the two racial groups are of similar size.
The research also has policy implications: cultivating the same characteristics among agents helps diminish social segregation, but only if the minority group is small enough. This research includes both theoretical models and empirical analysis. Building on the friendship formation model, the author first uses MATLAB to perform iterative calculations, then derives the corresponding mathematical proofs of the previous results, and finally shows that the model is consistent with empirical evidence on high school friendships. The anonymized data come from the National Longitudinal Study of Adolescent Health (Add Health).
Keywords: homophily, multidimension, social networks, friendships
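The mechanism the abstract describes, a meeting bias on one dimension inducing homophily on another, can be sketched as a small simulation. The population size, the group-hobby correlation, and the bias level below are illustrative assumptions, not the paper's calibration:

```python
import random

# Minimal sketch of a two-dimension matching process: agents carry a
# (group, hobby) type, meetings are biased toward same-group partners, and
# we measure the share of same-hobby links that results.

def simulate(n_agents=400, n_meetings=20000, same_group_bias=0.8, seed=0):
    rnd = random.Random(seed)
    agents = []
    for _ in range(n_agents):
        group = rnd.randint(0, 1)
        # Hobby is mildly correlated with group (70% match), so a meeting
        # bias on the group dimension can induce hobby homophily.
        hobby = group if rnd.random() < 0.7 else 1 - group
        agents.append((group, hobby))
    same_hobby = 0
    for _ in range(n_meetings):
        a = rnd.choice(agents)
        # Biased meeting: partner drawn from own group with the given bias.
        if rnd.random() < same_group_bias:
            pool = [b for b in agents if b[0] == a[0]]
        else:
            pool = [b for b in agents if b[0] != a[0]]
        b = rnd.choice(pool)
        same_hobby += (a[1] == b[1])
    return same_hobby / n_meetings

# Biased meetings raise the same-hobby link share above the ~50% baseline,
# even though the bias itself never looks at hobbies.
print(simulate(same_group_bias=0.8))
print(simulate(same_group_bias=0.5))
```

The point of the sketch is that hobby homophily here is emergent: it appears only because the hobby dimension is correlated with the group dimension on which meetings are biased.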
Procedia PDF Downloads 172
1045 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method
Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong
Abstract:
Earthquakes are one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people. Therefore, it is very important to study the seismic safety of dams. Due to complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensity, which provides a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of vulnerability analysis. Based on the central composite test design method, material and seismic intensity samples are established. The response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to range from 0.1 to 1.2 g, with an interval of 0.1 g and a total of 12 intensity levels. For each seismic intensity level, the arch crown displacement corresponding to 100 sets of different material samples can be calculated by algebraic operations on the response surface model, which avoids 1200 nonlinear dynamic calculations of the arch dam; thus, the efficiency of vulnerability analysis is greatly improved.
Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure
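The response-surface shortcut (fit a polynomial surrogate to a few expensive FE runs, then evaluate thousands of material samples algebraically) can be sketched as follows. The toy displacement function standing in for the FE model, the parameter ranges, and the failure threshold are all illustrative assumptions, not the study's dam model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the expensive finite-element model: crown displacement
# as a function of a material parameter E and an intensity a (in g).
def fe_model(E, a):
    return 50.0 * a / E + 2.0 * a**2

def quad_features(E, a):
    # Quadratic response surface basis in (E, a).
    return np.column_stack([np.ones_like(E), E, a, E**2, a**2, E * a])

# A small "central composite"-style grid of training runs (few FE calls).
E_tr, a_tr = np.meshgrid(np.linspace(20, 40, 5), np.linspace(0.1, 1.2, 5))
E_tr, a_tr = E_tr.ravel(), a_tr.ravel()
beta, *_ = np.linalg.lstsq(quad_features(E_tr, a_tr), fe_model(E_tr, a_tr),
                           rcond=None)

# Fragility curve: at each intensity level, push 100 material samples
# through the cheap surrogate instead of 100 nonlinear FE analyses.
threshold = 2.5  # illustrative displacement limit
intensities = np.arange(0.1, 1.25, 0.1)
fragility = []
for a in intensities:
    E_s = rng.uniform(20, 40, 100)
    d = quad_features(E_s, np.full(100, a)) @ beta
    fragility.append(np.mean(d > threshold))
print(np.round(fragility, 2))
```

The 12 intensity levels and 100 samples per level mirror the counts in the abstract; each of the 1200 evaluations is a single dot product rather than a nonlinear dynamic analysis.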
Procedia PDF Downloads 241
1044 Optimal Seismic Design of Reinforced Concrete Shear Wall-Frame Structure
Authors: H. Nikzad, S. Yoshitomi
Abstract:
In this paper, the optimal seismic design of reinforced concrete shear wall-frame building structures is carried out using structural optimization. The optimal section sizes were generated through structural optimization based on linear static analysis conforming to the American Concrete Institute building design code (ACI 318-14). An analytical procedure was followed to validate the accuracy of the proposed method by comparing the stresses on structural members in the output files of MATLAB and ETABS. To account for the difference in stresses in structural elements between ETABS and MATLAB, and to avoid over-stressed members in ETABS, a stress constraint ratio of MATLAB to ETABS was introduced for the most critical load combinations and structural members. Moreover, the seismic design of the structure follows the International Building Code (IBC 2012), the American Concrete Institute Building Code (ACI 318-14) and the American Society of Civil Engineers (ASCE 7-10) standards. Typical reinforcement requirements for the structural wall, beam and column are discussed and presented using the ETABS structural analysis software. The placement and detailing of reinforcement in structural members are also explained and discussed. The outcomes of this study show that the modification of section sizes plays a vital role in finding an optimal combination of practical section sizes. In contrast, the optimization problem with size constraints has a higher cost than that without size constraints. Moreover, the comparison of the optimization results with those of the ETABS program proved satisfactory and governed by the ACI 318-14 building design code criteria.
Keywords: structural optimization, seismic design, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures
Procedia PDF Downloads 173
1043 Detection of Abnormal Process Behavior in Copper Solvent Extraction by Principal Component Analysis
Authors: Kirill Filianin, Satu-Pia Reinikainen, Tuomo Sainio
Abstract:
Frequent measurements of product stream quality create a data overload that becomes more and more difficult to handle. In the current study, plant history data with multiple variables were successfully treated by principal component analysis to detect abnormal process behavior, particularly in copper solvent extraction. The multivariate model is based on the concentration levels of the main process metals recorded by the industrial on-stream x-ray fluorescence analyzer. After mean-centering and normalization of the concentration data set, a two-dimensional multivariate model was constructed under the principal component analysis algorithm. Normal operating conditions were defined through control limits assigned to squared score values on the x-axis and to residual values on the y-axis. 80 percent of the data set was taken as the training set, and the multivariate model was tested with the remaining 20 percent of the data. Model testing showed successful application of the control limits to detect abnormal behavior of the copper solvent extraction process as early warnings. Compared to the conventional technique of analyzing one variable at a time, the proposed model allows a process failure to be detected on-line using information from all process variables simultaneously. Complex industrial equipment combined with advanced mathematical tools may be used for on-line monitoring of both process stream composition and final product quality. Defining the normal operating conditions of the process supports reliable decision-making in a process control room. Thus, industrial x-ray fluorescence analyzers equipped with an integrated data processing toolbox allow more flexibility in copper plant operation. It is recommended to apply the additional multivariate process control and monitoring procedures separately for the major components and for the impurities.
Principal component analysis may be utilized not only for controlling the content of major elements in process streams, but also for continuous monitoring of the plant feed. The proposed approach has potential in on-line instrumentation, providing a fast, robust and cheap application with automation capabilities.
Keywords: abnormal process behavior, failure detection, principal component analysis, solvent extraction
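The modeling steps described above (mean-centering and scaling, PCA, control limits on scores and residuals, 80/20 split) can be sketched on synthetic data. The variable count, limit quantile, and fault magnitude are illustrative assumptions, not the plant's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "on-stream analyzer" data: 3 correlated metal concentrations
# driven by one latent process variable t plus measurement noise.
n = 500
t = rng.normal(size=n)
X = np.column_stack([t + 0.1 * rng.normal(size=n),
                     2 * t + 0.1 * rng.normal(size=n),
                     -t + 0.1 * rng.normal(size=n)])

# 80/20 split; center and scale with training statistics only.
X_tr, X_te = X[:400], X[400:]
mu, sd = X_tr.mean(0), X_tr.std(0)
Z_tr = (X_tr - mu) / sd

# 1-component PCA via SVD of the scaled training data.
U, S, Vt = np.linalg.svd(Z_tr, full_matrices=False)
P = Vt[:1].T                                   # loading vector

def t2_q(Z):
    """Squared score (T2) and squared residual (Q) statistics."""
    scores = Z @ P
    resid = Z - scores @ P.T
    return (scores**2).sum(1), (resid**2).sum(1)

# Control limits from the normal-operation training data.
t2_tr, q_tr = t2_q(Z_tr)
t2_lim, q_lim = np.quantile(t2_tr, 0.99), np.quantile(q_tr, 0.99)

# Inject an abnormal event that breaks the normal correlation structure.
Z_te = (X_te - mu) / sd
Z_te[-5:] += np.array([3.0, -3.0, 0.0])
t2_te, q_te = t2_q(Z_te)
alarms = (t2_te > t2_lim) | (q_te > q_lim)
print(alarms[-5:])
```

The injected fault leaves the score nearly unchanged but inflates the residual, so it trips the Q limit, the kind of early warning a single-variable chart would miss because each variable stays within its own range.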
Procedia PDF Downloads 310
1042 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184 degrees, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT), nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160 degrees), as commonly used at the MD Anderson Cancer Center, Houston, for intensity modulated radiation therapy (IMRT), and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the OAR volumes receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) compared to all other studied techniques. In comparison, maximal doses for 3DCRT were found to be higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher than VMAT for the esophagus, normal brain and thyroid, respectively. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS.
The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% of points passed), and the QA results were greater than 98%. The calculations of maximal doses and OAR volumes suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
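The gamma-index pass criterion used for the QA comparison can be sketched in one dimension. The dose profiles, grid, and criteria below are illustrative; a clinical tool such as the ArcCHECK software evaluates measured 2-D/3-D dose grids:

```python
import numpy as np

# Sketch of a 1-D gamma analysis with dose-difference (dd, fraction of the
# maximum dose) and distance-to-agreement (dta_mm) criteria, e.g., 3%/3 mm.

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    pos = np.arange(len(dose_ref)) * spacing_mm
    dmax = dose_ref.max()
    passed = 0
    for i in range(len(dose_eval)):
        # Gamma at point i: minimum combined dose-difference /
        # distance-to-agreement over all reference points; <= 1 passes.
        dose_term = (dose_eval[i] - dose_ref) / (dd * dmax)
        dist_term = (pos[i] - pos) / dta_mm
        gamma_i = np.sqrt(dose_term**2 + dist_term**2).min()
        passed += gamma_i <= 1.0
    return 100.0 * passed / len(dose_eval)

x = np.linspace(-20.0, 20.0, 81)             # 0.5 mm grid, positions in mm
ref = np.exp(-x**2 / 150.0)                  # synthetic reference profile
meas = 1.01 * np.exp(-(x - 0.4)**2 / 150.0)  # 1% scaled, 0.4 mm shifted
print(gamma_pass_rate(ref, meas, 0.5))
```

A measurement within 1% in dose and well under 3 mm in position passes everywhere, while a profile shifted beyond the DTA criterion fails across its gradient regions, which is why the pass rate is a combined dose-and-geometry check rather than a point-by-point dose comparison.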
Procedia PDF Downloads 508
1041 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization
Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay
Abstract:
In recent years, there has been a growing focus on advanced manufacturing processes, and one such emerging process is wire electric discharge machining (WEDM). WEDM is a precision machining process specifically designed for cutting electrically conductive materials with exceptional accuracy. It achieves material removal from the workpiece metal through spark erosion facilitated by electricity. Initially developed as a method for precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts and methods in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mold dies, machining of insulating materials, and manufacturing of micro products. WEDM has found particularly extensive usage in the high-precision machining of complex workpieces that possess varying hardness and intricate shapes. During the cutting process, a wire with a diameter of about 0.18 mm is employed. The evaluation of EDM performance typically revolves around two critical factors: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of machining parameters on the quality characteristics of EDM, an analysis of variance (ANOVA) was conducted. This statistical analysis aimed to determine the significance of the various machining parameters and their relative contributions in controlling the response of the EDM process. By undertaking this analysis, optimal levels of the machining parameters were identified to achieve desirable material removal rates and surface roughness.
Keywords: WEDM, MRR, optimization, surface roughness
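The ANOVA step can be sketched for a single machining parameter. The MRR values and factor levels below are illustrative, not the study's measurements:

```python
import numpy as np

# One-way ANOVA on MRR measurements grouped by one machining parameter
# (e.g., pulse-on time at three levels). Data are illustrative only.
groups = [
    [2.1, 2.3, 2.2, 2.4],   # level 1
    [3.0, 3.2, 3.1, 2.9],   # level 2
    [4.0, 4.1, 3.9, 4.2],   # level 3
]

all_obs = np.concatenate(groups)
grand = all_obs.mean()

# Between-group (factor) and within-group (error) sums of squares.
ss_between = sum(len(g) * (np.mean(g) - grand)**2 for g in groups)
ss_within = sum(((np.array(g) - np.mean(g))**2).sum() for g in groups)
df_b = len(groups) - 1
df_w = len(all_obs) - len(groups)

# F statistic: ratio of mean squares; large F => the factor is significant.
F = (ss_between / df_b) / (ss_within / df_w)

# Percentage contribution of the factor, as reported in Taguchi-style tables.
contribution = 100 * ss_between / (ss_between + ss_within)
print(round(F, 1), round(contribution, 1))
```

In a full study the same decomposition is repeated for each parameter (and for SR as well as MRR), and the relative contributions rank which parameters dominate the response.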
Procedia PDF Downloads 76
1040 Research on the Renewal and Utilization of Space under the Bridge in Chongqing Based on Spatial Potential Evaluation
Authors: Xvelian Qin
Abstract:
Urban "organic renewal" based on the development of existing resources in high-density urban areas has become the mainstream of urban development in the new era. As an important stock resource of public space in high-density urban areas, space under bridges can have its value remodeled, which is an effective way to alleviate the shortage of public space resources. However, due to the lack of an evaluation step in the process of under-bridge space renewal, a large number of under-bridge space resources have been left idle, facing the problems of low space conversion efficiency, lack of accuracy in development decision-making, and low adaptability of functional positioning to citizens' needs. Therefore, it is of great practical significance to construct an evaluation system for under-bridge space renewal potential and to explore renewal modes. In this paper, some of the under-bridge spaces in the main urban area of Chongqing are selected as the research object. Through questionnaire interviews with users of successfully built under-bridge spaces, twenty-two potential evaluation indexes are selected across three factor types (objective demand, construction feasibility and construction suitability) and six levels: land resources, infrastructure, accessibility, safety, space quality and ecological environment. The analytic hierarchy process and an expert scoring method are used to determine the index weights, to construct the potential evaluation system for under-bridge space in the high-density urban areas of Chongqing, and to explore suitable directions for its renewal and utilization, so as to provide a feasible theoretical basis and scientific decision support for the future use of under-bridge space.
Keywords: high-density urban area, potential evaluation, space under bridge, renewal and utilization
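The analytic hierarchy process step for index weighting can be sketched as follows. The 3x3 pairwise comparison matrix is an illustrative assumption standing in for the study's full 22-index hierarchy:

```python
import numpy as np

# Sketch of AHP weighting: derive criterion weights from a pairwise
# comparison matrix via its principal eigenvector, then check consistency.
# Illustrative matrix, e.g., accessibility vs. safety vs. space quality:
# entry A[i, j] says how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                             # normalized criterion weights

# Consistency ratio CR = CI / RI; RI = 0.58 is the random index for n = 3.
# CR < 0.1 means the expert judgments are acceptably consistent.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58
print(np.round(w, 3), round(CR, 3))
```

In the study's setting the same computation would be repeated per level of the hierarchy, with expert scoring supplying the comparison matrices and the resulting weights combined down to the twenty-two indexes.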
Procedia PDF Downloads 71
1039 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane
Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo
Abstract:
Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical production. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. First, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models would inherently be unable to obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate predictions, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining
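The DBSCAN outlier-removal step that produces the "error-free dataset" can be sketched with a minimal implementation. The data, `eps`, and `min_pts` values are illustrative assumptions, not the study's experimental dataset:

```python
import numpy as np

# Minimal DBSCAN: points in a dense region form clusters; sparse points
# keep the label -1 (noise) and are dropped before model fitting.

def dbscan(X, eps=0.5, min_pts=4):
    n = len(X)
    labels = np.full(n, -1)            # -1 = noise until proven otherwise
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = [np.where(d[i] <= eps)[0] for i in range(n)]
    visited = np.zeros(n, bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        # Expand a new cluster outward from core point i.
        stack, labels[i], visited[i] = [i], cluster, True
        while stack:
            j = stack.pop()
            for k in neighbors[j]:
                if labels[k] == -1:
                    labels[k] = cluster
                if not visited[k]:
                    visited[k] = True
                    if len(neighbors[k]) >= min_pts:
                        stack.append(k)    # core point: keep expanding
        cluster += 1
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (50, 2)),            # dense, valid runs
               np.array([[5.0, 5.0], [-4.0, 6.0]])])   # gross outliers
labels = dbscan(X)
clean = X[labels != -1]        # "error-free dataset" passed on to RF/DNN
print(len(X) - len(clean))
```

Because DBSCAN needs no preset cluster count and flags low-density points explicitly, it suits this preprocessing role: the two injected outliers are removed while the dense experimental cluster survives intact.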
Procedia PDF Downloads 88
1038 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. Neural tissue transmits information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of 4 different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS) and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with the data coming from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the Na+ ion load of a single action potential, the metabolic energy utilization and their thermodynamic aspects are presented for the squid giant axon and a mammalian motoneuron model. The energy demand of neurons is supplied in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
Procedia PDF Downloads 395
1037 An Investigation into Slow ESL Reading Speed in Pakistani Students
Authors: Hina Javed
Abstract:
This study investigated the different strategies used by Pakistani students learning English as a second language at the secondary school level. The basic premise of the study is that ESL students face tremendous difficulty while reading a text in English. It also purports to dig into the different causes of their slow reading, which may include word reading accuracy, mental translation, lexical density, cultural gaps, complex syntactic constructions, and back skipping. Sixty Grade 7 students from two secondary mainstream schools in Lahore were selected for the study, thirty boys and thirty girls. They were administered reading-related and reading-speed pre- and post-tests. The purpose of the tests was to gauge their performance on different reading tasks, so as to see what strategies, if any, they used, and also to ascertain the causes hampering their performance on those tests. In the pretests, they were given simple texts with considerable lexical density and a moderately complex sentential layout. In the post-tests, the reading tasks contained comic strips, texts with visuals, texts with controlled vocabulary, and an evenly distributed range of simple, compound, and complex sentences. Both tests were timed. The results gleaned from the data corroborated the researchers' basic hunch, in that students performed significantly better on the post-tests. The findings suggest that the morphological structure of words and lexical density are the main sources of reading comprehension difficulty for poor ESL readers. It is also confirmed that texts accompanied by pictorial visuals greatly facilitate students' reading speed and comprehension. There is no substantial evidence that ESL readers adopt any specific strategy while reading in English.
Keywords: slow ESL reading speed, mental translation, complex syntactic constructions, back skipping
Procedia PDF Downloads 73
1036 Biomechanics of Ceramic on Ceramic vs. Ceramic on XLPE Total Hip Arthroplasties During Gait
Authors: Athanasios Triantafyllou, Georgios Papagiannis, Vassilios Nikolaou, Panayiotis J. Papagelopoulos, George C. Babis
Abstract:
In vitro measurements are widely used to predict THA wear rates, implementing gait kinematic and kinetic parameters. Clinical tests of materials and designs are crucial to prove the accuracy of and validate such measurements. The purpose of this study is to examine the effect of THA gait kinematics and kinetics on wear during gait, the essential functional activity of humans, by comparing in vivo gait data to in vitro results. Our study hypothesis is that both implants will present the same hip joint kinematics and kinetics during gait. 127 unilateral primary cementless total hip arthroplasties were included in the research. Independent t-tests were used to identify statistically significant differences in the kinetic and kinematic data extracted from 3D gait analysis. No statistically significant differences were observed in the mean peak abduction, flexion and extension moments between the two groups (P.abduction = 0.125, P.flexion = 0.218, P.extension = 0.082). The kinematic measurements likewise show no statistically significant differences (Prom flexion-extension = 0.687, Prom abduction-adduction = 0.679). THA kinematics and kinetics during gait are important biomechanical parameters directly associated with implant wear. In vitro studies report less wear in CoC than in CoXLPE when tested with the same gait cycle kinematic protocol. Our findings confirm that both implants behave identically in terms of kinematics in the clinical environment, thus strengthening the in vitro results showing a CoC advantage. Correlated with all other significant factors that affect THA wear, this could address the wear of CoC and CoXLPE in a complete prism.
Keywords: total hip arthroplasty biomechanics, THA gait analysis, ceramic on ceramic kinematics, ceramic on XLPE kinetics, total hip replacement wear
Procedia PDF Downloads 156