Search results for: dynamic compression tests


5953 Evaluating Impact of Teacher Professional Development Program on Students’ Learning

Authors: S. C. Lin, W. W. Cheng, M. S. Wu

Abstract:

This study investigated the connection between a teacher professional development program and students' learning. It took the Readers' Theater Teaching Program (RTTP) as an example to inquire how participants applied the new knowledge and skills learned from the RTTP to their teaching practice and how that, in turn, influenced student learning. The goals of the RTTP included: 1) to enhance teachers' RT content knowledge; 2) to implement RT instruction in teachers' classrooms in response to their professional development; and 3) to improve students' reading fluency in the participating teachers' classrooms. This study was a two-year project. The researchers applied mixed methods, combining qualitative inquiry with a one-group pretest-posttest experimental design. In the first year, the study focused on designing and implementing the RTTP and evaluating participants' satisfaction with the program, what they learned, and how they applied it to design their English reading curriculum. In the second year, the study adopted a quasi-experimental design and evaluated how participants' RT instruction influenced their students' learning, including English knowledge, skills, and attitudes. The participants comprised two junior high school English teachers and their students. Data were collected from a number of different sources, including teaching observations, semi-structured interviews, teaching diaries, teachers' professional development portfolios, pre/post RT content knowledge tests, a teacher survey, and students' reading fluency tests. Both qualitative and quantitative analyses were used. Qualitative data analysis included three stages: organizing the data, coding the data, and analyzing and interpreting the data. Quantitative data analysis consisted of descriptive analysis. The results indicated that the average percentage correct on the pre-test of RT content knowledge was 40.75%, with the two teachers ranging from 35% to 46% in specific RT content. Post-test RT content scores ranged from 70% to 82% correct, with an average score of 76.50%, giving the teachers an average gain of 35.75% in overall content knowledge as measured by these pre/post exams. Teachers' pre-test scores were lowest in script writing and highest in performing; script writing was also the content area that showed the highest gains in content knowledge. Moreover, participants held a positive attitude toward the RTTP and reported that the professional learning community approach applied in the RTTP was beneficial to their professional development. Participants also applied the new skills and knowledge they learned from the RTTP to their practice. The evidence from this study indicated that RT English instruction significantly influenced students' reading fluency and classroom climate: all of the experimental group students made substantial progress in reading fluency after RT instruction. The study also identified several obstacles, and suggestions were made accordingly.

Keywords: teacher professional development, program evaluation, readers' theater, English reading instruction, English reading fluency

Procedia PDF Downloads 373
5952 The Importance of Previous Examination Results in Future Differential Diagnostic Procedures, Especially in the Era of Covid-19

Authors: Angelis P. Barlampas

Abstract:

Purpose or Learning Objective: It is well known that previous examinations play a major role in future diagnosis, avoiding unnecessary new exams that cost time and money for both the patient and the health system. A case is presented in which the patient's past results, combined with the fewest necessary new tests, yield an easy final diagnosis. Methods or Background: A middle-aged man visited the emergency department complaining of poorly controlled, persistent fever over the last few days. Laboratory tests showed an elevated white blood cell count with a neutrophil shift and abnormal CRP. The patient had been admitted to hospital a month earlier for persisting lung symptoms after a recent Covid-19 infection. Results or Findings: Computed tomography showed a solid mass with spiculated margins in the right lower lobe. After intravenous iodine contrast administration, there was mild peripheral enhancement and an eccentric non-enhancing area. Lung cancer was suspected. Comparison with the patient's most recent computed tomography revealed no mass in the area of interest, only signs of recent post-Covid-19 lung parenchyma abnormalities. A new mass that appears within a month's time span cannot be a cancer but rather a benign lesion; it was obvious that an abscess was the most suitable explanation. The patient was admitted to hospital and antibiotic therapy was given, with very good results. After a few days, the patient was afebrile and in good condition. Conclusion: In this case, a PET scan or a biopsy was avoided thanks to the patient's medical history and the availability of previous examinations. It is worth encouraging patients to keep their medical records and organizing the health system more efficiently with current technology for archiving medical examinations.

Keywords: covid-19, chest ct, cancer, abscess, fever

Procedia PDF Downloads 46
5951 Effects of Moisture on Fatigue Behavior of Asphalt Concrete Mixtures Using Four-Point Bending Test

Authors: Mohit Chauhan, Atul Narayan

Abstract:

Moisture damage is the continuous deterioration of asphalt concrete mixtures through the loss of the adhesive bond between the asphalt binder and the aggregates, or the loss of cohesive bonds within the asphalt binder, in the presence of moisture. Moisture is known to either cause or exacerbate distresses in asphalt concrete pavements. Since moisture often remains for a relatively long duration at the bottom of the asphalt concrete layer, traffic loading in this saturated condition causes excess stresses and strains within the mixture. This accelerates the degradation of adhesion and cohesion within the mixture and is likely to contribute to the development of fatigue cracking in asphalt concrete pavements. In view of this, it is important to investigate the effect of moisture on the fatigue behavior of asphalt concrete mixtures. In this study, changes in fatigue characteristics after moisture conditioning were evaluated by conducting four-point beam fatigue tests on dry and moisture-conditioned specimens. For this purpose, mixtures with two different types of binders were prepared and saturated with moisture using a 700 mm Hg vacuum. Beam specimens were, in this way, taken to a saturation level of 65-75 percent. After preconditioning the specimens at this degree of saturation and 60°C for a period of 24 hours, they were subjected to four-point beam fatigue tests in strain-controlled mode with a strain amplitude of 400 microstrain. The results were then compared with fatigue test results obtained from beam specimens that were not subjected to moisture conditioning. The test results show that conditioning significantly reduces both the fatigue life and the initial flexural stiffness of the specimens. Moisture conditioning was also found to increase the rate of reduction of flexural stiffness. Moreover, it was observed that the fatigue life ratio (FLR), the ratio of the fatigue life of the moisture-conditioned sample to that of the dry sample, is significantly lower than the flexural stiffness ratio (FSR). The study indicates that the four-point bending test is an appropriate tool, with FLR and FSR as potential parameters, for moisture-sensitivity evaluation.
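
As an illustration of the two indices, a minimal sketch of how FLR and FSR could be computed from conditioned and dry specimen results; the numbers are placeholders, not data from the study:

```python
# Hypothetical illustration of the fatigue life ratio (FLR) and flexural
# stiffness ratio (FSR); the values below are placeholders, not study data.
fatigue_life_dry = 250_000   # cycles to failure, dry specimen
fatigue_life_wet = 90_000    # cycles to failure, moisture-conditioned specimen
stiffness_dry = 6_500.0      # initial flexural stiffness (MPa), dry
stiffness_wet = 4_200.0      # initial flexural stiffness (MPa), conditioned

flr = fatigue_life_wet / fatigue_life_dry   # fatigue life ratio
fsr = stiffness_wet / stiffness_dry         # flexural stiffness ratio
print(f"FLR = {flr:.2f}, FSR = {fsr:.2f}")  # FLR < FSR suggests fatigue life is the more moisture-sensitive measure
```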

Keywords: asphalt concrete, fatigue cracking, moisture damage, preconditioning

Procedia PDF Downloads 126
5950 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses; in such a context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when inputted into the KNN classification algorithm. The method of Kassidas, MacGregor, and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity, and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy on the testing portion using the KNN classification technique.
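
As a rough illustration of the alignment step only (not the authors' implementation; the local distance and variable names are assumptions), a minimal dynamic time warping sketch that aligns one multivariate batch trajectory to a reference trajectory:

```python
import numpy as np

def dtw_align(batch, reference):
    """Align a multivariate batch trajectory (n x p) to a reference (m x p)
    with classic dynamic time warping; returns total cost and warping path."""
    n, m = len(batch), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(batch[i - 1] - reference[j - 1])  # local Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Example: two batches of different durations, four monitored variables each
rng = np.random.default_rng(0)
batch = rng.normal(size=(300, 4))
reference = rng.normal(size=(250, 4))
total_cost, path = dtw_align(batch, reference)
synchronized = np.array([batch[i] for i, _ in path])  # batch samples re-ordered along the warping path
```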

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 151
5949 Socio-Demographic Factors and Testing Practices Are Associated with Spatial Patterns of Clostridium difficile Infection in the Australian Capital Territory, 2004-2014

Authors: Aparna Lal, Ashwin Swaminathan, Teisa Holani

Abstract:

Background: Clostridium difficile infections (CDIs) have been on the rise globally. In Australia, rates of CDI in all States and Territories have increased significantly since mid-2011. Identifying risk factors for CDI in the community can help inform targeted interventions to reduce infection. Methods: We examine the role of neighbourhood socio-economic status, demography, testing practices and the number of residential aged care facilities on spatial patterns in CDI incidence in the Australian Capital Territory. Data on all tests conducted for CDI were obtained from ACT Pathology by postcode for the period 1st January 2004 through 31 December 2014. Distribution of age groups and the neighbourhood Index of Relative Socio-economic Advantage Disadvantage (IRSAD) were obtained from the Australian Bureau of Statistics 2011 National Census data. A Bayesian spatial conditional autoregressive model was fitted at the postcode level to quantify the relationship between CDI and socio-demographic factors. To identify CDI hotspots, exceedance probabilities were set at a threshold of twice the estimated relative risk. Results: CDI showed a positive spatial association with the number of tests (RR=1.01, 95% CI 1.00, 1.02) and the resident population over 65 years (RR=1.00, 95% CI 1.00, 1.01). The standardized index of relative socio-economic advantage disadvantage (IRSAD) was significantly negatively associated with CDI (RR=0.74, 95% CI 0.56, 0.94). We identified three postcodes with high probability (0.8-1.0) of excess risk. Conclusions: Here, we demonstrate geographic variations in CDI in the ACT with a positive association of CDI with socioeconomic disadvantage and identify areas with a high probability of elevated risk compared with surrounding communities. These findings highlight community-based risk factors for CDI.
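
For example, the exceedance-probability step could be sketched as follows; this is a hedged illustration with simulated posterior draws, not the authors' model code, though the threshold of twice the relative risk follows the abstract:

```python
import numpy as np

# Hypothetical posterior draws of the relative risk (RR) for each postcode,
# e.g. taken from the MCMC output of a Bayesian spatial CAR model.
rng = np.random.default_rng(1)
n_postcodes, n_draws = 25, 4000
posterior_rr = rng.lognormal(mean=0.0, sigma=0.4, size=(n_draws, n_postcodes))

threshold = 2.0                                        # twice the estimated relative risk
exceedance = (posterior_rr > threshold).mean(axis=0)   # P(RR > 2 | data) for each postcode

hotspots = np.where(exceedance >= 0.8)[0]              # postcodes with high probability (0.8-1.0) of excess risk
print(dict(zip(hotspots.tolist(), exceedance[hotspots].round(2))))
```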

Keywords: spatial, socio-demographic, infection, Clostridium difficile

Procedia PDF Downloads 300
5948 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions

Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali

Abstract:

Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat-elevating function of an electric wheelchair allows the user (e.g., with lower limb amputation) to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow sight or joystick operation errors. Thus, this paper presents the design, analysis, and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamic (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles, taking as inputs the distance from an ultrasonic sensor and the user-specified direction from the joystick operation. The proposed fuzzy driving control system governs the direction and velocity of the wheelchair. The wheelchair model has been examined and verified in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented in the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the collision-free behavior of the electric wheelchair.
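
A minimal sketch of the kind of fuzzy rule evaluation described; the membership functions, rule base, and velocity scaling below are assumptions for illustration, not the authors' controller:

```python
import numpy as np

def falling(x, a, b):
    """1 below a, falling linearly to 0 at b."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def rising(x, a, b):
    """0 below a, rising linearly to 1 at b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_speed(distance_cm, joystick_speed):
    """Scale the joystick speed command according to the obstacle distance."""
    # Fuzzify the ultrasonic distance reading
    near = falling(distance_cm, 30, 60)
    medium = tri(distance_cm, 30, 90, 150)
    far = rising(distance_cm, 120, 200)
    # Rule base: near -> stop, medium -> slow, far -> keep the user's speed
    weights = np.array([near, medium, far])
    scale_factors = np.array([0.0, 0.4, 1.0])
    scale = (weights * scale_factors).sum() / (weights.sum() + 1e-9)  # weighted-average defuzzification
    return joystick_speed * scale

print(fuzzy_speed(distance_cm=45.0, joystick_speed=1.2))  # obstacle close -> strongly reduced speed
```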

Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor

Procedia PDF Downloads 118
5947 Authentic and Transformational Leadership Model of the Directors of Tambon Health Promoting Hospitals Affecting the Effectiveness of Southern Tambon Health Promoting Hospitals: The Interaction and Invariance Tests of the Gender Factor

Authors: Suphap Sikkhaphan, Muwanga Zake, Johnnie Wycliffe Frank

Abstract:

The purposes of the study included a) investigating the authentic and transformational leadership model of the directors of tambon health promoting hospitals, b) evaluating the relation between the authentic and transformational leadership of the directors of tambon health promoting hospitals and the effectiveness of their hospitals, and c) assessing the invariance of the authentic and transformational leadership model of the directors of tambon health promoting hospitals. All 400 southern tambon health promoting hospital directors were enrolled in the study. Half were males (200) and half were females (200), sampled via a stratified method. The research tool was a questionnaire containing 4 different sections; Cronbach's alpha coefficient was .98. Descriptive statistics were used for demographic data, and inferential statistics were used for the relation and invariance tests of the authentic and transformational leadership of the directors of tambon health promoting hospitals. The findings revealed that, overall, the authentic and transformational leadership model of the directors of tambon health promoting hospitals was related to the effectiveness of the hospitals. Only the factor of "strong community support" was statistically significantly related to authentic leadership (p < .05). However, four latent variables were statistically related to transformational leadership: competency and work climate, management system, network cooperation, and strong community support (p = .01). Regarding the relation between the authentic and transformational leadership of the directors and the effectiveness of their hospitals, the four causal variables of authentic leadership were not related to those latent variables. In contrast, all four latent variables of transformational leadership were statistically significantly related to the effectiveness of tambon health promoting hospitals (p = .001). Furthermore, only the management system variable was significantly related to the causal variables of authentic leadership (p < .05). Regarding the invariance test, no statistically significant difference was found in the authentic and transformational leadership model of the directors between the male and female genders (p > .05).

Keywords: authentic leadership, transformational leadership, tambon health promoting hospital

Procedia PDF Downloads 419
5946 Experimental and Analytical Study to Investigate the Effect of Tension Reinforcement on Behavior of Reinforced Concrete Short Beams

Authors: Hakan Ozturk, Aydin Demir, Kemal Edip, Marta Stojmanovska, Julijana Bojadjieva

Abstract:

There are many factors that affect the behavior of reinforced concrete beams. These can be listed as concrete compressive strength and reinforcement yield strength; the amount of tension, compression, and confinement bars; and strain hardening of the reinforcement. In this study, the support condition of the short beams is statically indeterminate to the first degree. Experimental and numerical analyses are carried out for reinforced concrete (RC) short beams. The cross-section dimensions are 250 mm in width and 500 mm in height, and the length of the RC short beams is 2250 mm; these values are constant in all beams. After accurately verifying the finite element model, a numerical parametric study is performed with varying diameters of tension reinforcement, and the effect of the change in diameter on the behavior of RC short beams is investigated. As a result of the study, ductility ratios and failure modes are determined, and load-displacement graphs are obtained in order to understand the behavior of the short beams. It is deduced that the diameter of the tension reinforcement plays a very important role in the behavior of RC short beams in terms of ductility and brittleness.
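
As an illustration of how a ductility ratio might be extracted from a computed load-displacement curve, a short sketch follows; the yield and ultimate definitions below are one common convention assumed for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def ductility_ratio(load, disp, drop=0.85):
    """Ductility ratio = ultimate displacement / yield displacement.
    Yield proxy: displacement of the secant through 75% of peak load, extended to peak;
    ultimate: displacement where the post-peak load falls to `drop` * peak."""
    load, disp = np.asarray(load, float), np.asarray(disp, float)
    i_peak = int(np.argmax(load))
    # secant (equivalent elasto-plastic) yield displacement from the ascending branch
    d_yield = np.interp(0.75 * load[i_peak], load[:i_peak + 1], disp[:i_peak + 1]) / 0.75
    # displacement where the descending branch drops to drop * peak load
    post_load, post_disp = load[i_peak:], disp[i_peak:]
    below = np.where(post_load <= drop * load[i_peak])[0]
    d_ult = post_disp[below[0]] if below.size else post_disp[-1]
    return d_ult / d_yield
```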

Keywords: short beam, reinforced concrete, finite element analysis, longitudinal reinforcement

Procedia PDF Downloads 190
5945 The Impact of the Covid-19 Crisis on the Information Behavior in the B2B Buying Process

Authors: Stehr Melanie

Abstract:

The availability of apposite information is essential for the decision-making process of organizational buyers. Due to the constraints of the Covid-19 crisis, information channels that emphasize face-to-face contact (e.g., sales visits, trade shows) have been unavailable, and usage of digitally-driven information channels (e.g., videoconferencing, platforms) has skyrocketed. This paper explores the question of in which areas the pandemic-induced shift in the use of information channels could be sustainable and in which areas it is a temporary phenomenon. While information and buying behavior in B2C purchases have been regularly studied in the last decade, the last fundamental model of organizational buying behavior in B2B was introduced by Johnston and Lewin (1996), before the advent of the internet. Subsequently, research efforts in B2B marketing shifted from organizational buyers and their decision and information behavior to the business relationships between sellers and buyers. This study builds on the extensive literature on situational factors influencing organizational buying and information behavior and uses the economics of information theory as a theoretical framework. The research focuses on the German woodworking industry, which before the Covid-19 crisis was characterized by a rather low level of digitization of information channels. By focusing on an industry with traditional communication structures, a shift in information behavior induced by an exogenous shock is considered a ripe research setting. The study is exploratory in nature. The primary data source is 40 in-depth interviews based on the repertory grid method. Thus, 120 typical buying situations in the woodworking industry and the information and channels relevant to them are identified. The results are combined into clusters, each of which shows similar information behavior in the procurement process. In the next step, the clusters are analyzed in terms of pre- and post-Covid-19 crisis behavior, identifying stable and dynamic aspects of information behavior. Initial results show that, for example, clusters representing search goods with low risk and complexity suggest a sustainable rise in the use of digitally-driven information channels. However, in clusters containing trust goods with high significance and novelty, an increased return to face-to-face information channels can be expected after the Covid-19 crisis. The results are interesting from both a scientific and a practical point of view. This study is one of the first to apply the economics of information theory to organizational buyers and their decision and information behavior in the digital information age. Especially the focus on the dynamic aspects of information behavior after an exogenous shock may contribute new impulses to theoretical debates related to the economics of information theory. For practitioners - especially suppliers' marketing managers and intermediaries such as publishers or trade show organizers in the woodworking industry - the study shows wide-ranging starting points for a future-oriented segmentation of their marketing programs by highlighting the dynamic and stable preferences of the elaborated clusters in the choice of their information channels.

Keywords: B2B buying process, crisis, economics of information theory, information channel

Procedia PDF Downloads 171
5944 Exploring the Interplay of Attention, Awareness, and Control: A Comprehensive Investigation

Authors: Venkateswar Pujari

Abstract:

This study investigates the complex interplay between control, awareness, and attention in human cognitive processes. Attention, awareness, and control are fundamental elements of cognitive functioning that play a significant role in shaping perception, decision-making, and behavior. Understanding how they interact can help us better understand how our minds work and may advance cognitive science and its therapeutic applications. The study uses an empirical methodology to examine the relationships between attention, awareness, and control by integrating different experimental paradigms and neuropsychological tests. To ensure the generalizability of findings, a wide sample of participants is chosen, including people with various cognitive profiles and ages. The study is structured into four primary parts, each of which focuses on one component of how attention, awareness, and control interact: 1. Evaluation of Attentional Capacity and Selectivity: In this stage, participants complete established attention tests, including the Stroop task and visual search tasks. 2. Evaluation of Awareness Levels: In the second stage, participants' degrees of conscious and unconscious awareness are assessed using perceptual awareness tasks such as masked priming and binocular rivalry tasks. 3. Investigation of Cognitive Control Mechanisms: In the third phase, response inhibition, cognitive flexibility, and working memory capacity are investigated using exercises like the Wisconsin Card Sorting Test and the Go/No-Go paradigm. 4. Integration and Analysis of Results: Data from all phases are integrated and analyzed in the final phase. To investigate potential links and predictive correlations between attention, awareness, and control, correlational and regression analyses are carried out. The study's conclusions shed light on the intricate relationships that exist between control, awareness, and attention in cognitive function. The findings may have consequences for cognitive psychology, neuroscience, and clinical psychology by providing new understanding of cognitive dysfunctions linked to deficiencies in attention, awareness, and control systems.

Keywords: attention, awareness, control, cognitive functioning, neuropsychological assessment

Procedia PDF Downloads 74
5943 Complexity in a Leslie-Gower Delayed Prey-Predator Model

Authors: Anuraj Singh

Abstract:

Complex dynamics are explored in a prey-predator system with multiple delays. The predator dynamics are governed by a Leslie-Gower scheme. The existence of periodic solutions via Hopf bifurcation with respect to the delay parameters is established. To substantiate the analytical findings, numerical simulations are performed. The system shows rich dynamic behavior, including chaos and limit cycles.
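
A minimal numerical sketch of a delayed Leslie-Gower prey-predator system is given below; the particular functional form, parameter values, and single delay are illustrative assumptions, not the model analyzed in the paper:

```python
import numpy as np

# dx/dt = x(t) [r1 - b x(t) - a1 y(t)]
# dy/dt = y(t) [r2 - a2 y(t) / x(t - tau)]   (Leslie-Gower predator with a delayed prey term)
r1, b, a1, r2, a2, tau = 1.0, 0.06, 0.9, 0.8, 1.2, 2.5
dt, t_end = 0.001, 200.0
n, lag = int(t_end / dt), int(tau / dt)

x, y = np.empty(n), np.empty(n)
x[:lag + 1] = 4.0      # constant history on [-tau, 0]
y[:lag + 1] = 2.0
for k in range(lag, n - 1):            # explicit Euler with a delayed state
    x_lag = x[k - lag]
    dx = x[k] * (r1 - b * x[k] - a1 * y[k])
    dy = y[k] * (r2 - a2 * y[k] / x_lag)
    x[k + 1] = x[k] + dt * dx
    y[k + 1] = y[k] + dt * dy
# x and y now hold the simulated trajectories; plotting y against x for a range of tau
# reveals the onset of sustained oscillations, consistent with a Hopf bifurcation in the delay.
```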

Keywords: chaos, Hopf bifurcation, stability, time delay

Procedia PDF Downloads 309
5942 Influence of Flexible Plate's Contour on Dynamic Behavior of High Speed Flexible Coupling of Combat Aircraft

Authors: Dineshsingh Thakur, S. Nagesh, J. Basha

Abstract:

A lightweight high speed flexible coupling (HSFC) is used to connect the engine gearbox (EGB) with an accessory gearbox (AGB) of a combat aircraft. The HSFC transmits power at high speeds ranging from 10,000 to 18,000 rpm from the EGB to the AGB. The HSFC also accommodates larger misalignments resulting from thermal expansion of the aircraft engine and the mounting arrangement. The HSFC has a series of metallic, contoured, annular, thin cross-sectioned flexible plates to accommodate these misalignments through elastic flexure of the material. As the HSFC operates at high speed, the flexural and axial resonance frequencies must be kept away from the operating speed, and proper prediction is required to prevent failure in the transmission line of a single-engine fighter aircraft. To study the influence of the flexible plate's contour on the lateral critical speed (LCS) of the HSFC, a mathematical model of the HSFC as an eleven-rotor system is developed. The flexible plate being the bending member of the system, its bending stiffness, which results from the contour, governs the LCS. Using the transfer matrix method, the influence of various flexible plate contours on the critical speed is analyzed. In the analysis, the flexibility of the support bearings is also considered in the critical speed prediction. Based on the study, a model is built with the optimum contour of the flexible plate for validation by experimental modal analysis. A good correlation between the theoretical prediction and the model behavior is observed. From the study, it is found that the flexible plate's contour plays a vital role in modifying the system's dynamic behavior, and the present model can be extended for the development of similar flexible couplings owing to its computational simplicity and reliability.
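
A hedged sketch of the transfer matrix approach to a lateral critical speed follows; it uses a generic pinned-pinned lumped-mass rotor with placeholder geometry and properties, not the authors' eleven-rotor HSFC model:

```python
import numpy as np

def field(l, EI):
    """Transfer matrix of a massless elastic shaft segment; state = [y, theta, M, V]."""
    return np.array([[1, l, l**2 / (2 * EI), l**3 / (6 * EI)],
                     [0, 1, l / EI,          l**2 / (2 * EI)],
                     [0, 0, 1,               l],
                     [0, 0, 0,               1]])

def point(m, w):
    """Transfer matrix of a lumped mass m whirling synchronously at speed w (rad/s)."""
    P = np.eye(4)
    P[3, 0] = m * w**2   # shear jump due to the inertia force
    return P

def residual(w, masses, lengths, EI):
    """Pinned-pinned boundary determinant; its zero crossings locate critical speeds."""
    T = np.eye(4)
    for m, l in zip(masses, lengths[:-1]):
        T = point(m, w) @ field(l, EI) @ T
    T = field(lengths[-1], EI) @ T
    # At both ends y = 0 and M = 0; the unknowns at the left end are theta and V.
    return T[0, 1] * T[2, 3] - T[0, 3] * T[2, 1]

masses = [2.0, 2.0, 2.0]               # kg, placeholder lumped masses
lengths = [0.1, 0.1, 0.1, 0.1]         # m, segments between supports and masses
EI = 2.0e4                             # N*m^2, placeholder bending stiffness
speeds = np.linspace(10, 10000, 20000) # rad/s sweep
res = np.array([residual(w, masses, lengths, EI) for w in speeds])
crossings = speeds[:-1][np.sign(res[:-1]) != np.sign(res[1:])]
print("estimated lateral critical speeds (rad/s):", crossings[:3])
```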

Keywords: flexible rotor, critical speed, experimental modal analysis, high speed flexible coupling (HSFC), misalignment

Procedia PDF Downloads 195
5941 The Effect of Electromagnetic Stirring during Solidification of Nickel Based Alloys

Authors: Ricardo Paiva, Rui Soares, Felix Harnau, Bruno Fragoso

Abstract:

Nickel-based alloys are materials well suited for service in extreme environments subjected to pressure and heat. Some industrial applications for nickel-based alloys are aerospace and jet engines, oil and gas extraction, pollution control and waste processing, and the automotive and marine industries. It is generally recognized that grain refinement is an effective methodology to improve the quality of cast parts. Conventional grain refinement techniques involve the addition of inoculation substances, the control of solidification conditions, or thermomechanical treatment with recrystallization. However, such methods often lead to non-uniform grain size distribution and the formation of hard phases, which are detrimental to both wear performance and biocompatibility. Stirring of the melt by electromagnetic fields has been widely and successfully used in continuous casting for grain refinement, solute redistribution, and surface quality improvement. Despite the advantages, little attention has yet been paid to the use of this approach in functional castings such as investment casting. Furthermore, the effect of electromagnetic stirring (EMS) fields on nickel-based alloys is not known. In line with the gaps and needs of the state of the art, the present research work aims to promote new advances in controlling the grain size and morphology of investment-cast nickel-based alloys. For such a purpose, a set of experimental tests was conducted. A high-frequency induction furnace with vacuum and controlled atmosphere was used to cast the Inconel 718 alloy in ceramic shells. A coil surrounded the casting chamber in order to induce electromagnetic stirring during solidification. Aiming to assess the effect of electromagnetic stirring on Ni alloys, the samples were subjected to microstructural analysis and mechanical tests. The results show that electromagnetic stirring can be an effective methodology to modify the grain size and mechanical properties of investment-cast parts.

Keywords: investment casting, grain refinement, electromagnetic stirring, nickel alloys

Procedia PDF Downloads 118
5940 Construction of a Dynamic Migration Model of Extracellular Fluid in Brain for Future Integrated Control of Brain State

Authors: Tomohiko Utsuki, Kyoka Sato

Abstract:

In emergency medicine, it is recognized that brain resuscitation is very important for the reduction of mortality rate and neurological sequelae. In particular, control of brain temperature (BT), intracranial pressure (ICP), and cerebral blood flow (CBF) is most required for stabilizing the brain's physiological state in the treatment of, for example, brain injury, stroke, and encephalopathy. However, manual control of BT, ICP, and CBF frequently requires decisions and operations by medical staff, relevant to medication and the setting of therapeutic apparatus. Thus, integrating and automating this control is very effective not only for improving therapeutic effect but also for reducing staff burden and medical cost. For realizing such integration and automation, a mathematical model of the brain's physiological state is necessary as the controlled object in simulations, because performance testing of a prototype control system on patients is not ethically allowed. A model of cerebral blood circulation, the most basic part of the brain's physiological state, has already been constructed. A migration model of extracellular fluid in the brain has also been constructed; however, the condition that the total volume of the intracranial cavity is almost unchanging, due to the hardness of the cranial bone, was not considered in that model. Therefore, in this research, a dynamic migration model of extracellular fluid in the brain was constructed taking into account the constancy of the intracranial cavity's total volume. This model can be connected to the cerebral blood circulation model. The constructed model consists of fourteen compartments, twelve of which correspond to the perfused areas of the bilateral anterior, middle, and posterior cerebral arteries; the others correspond to the cerebral ventricles and the subarachnoid space. The model enables calculation of the migration of tissue fluid from capillaries to gray matter and white matter, the flow of tissue fluid between compartments, the production and absorption of cerebrospinal fluid at the choroid plexus and arachnoid granulations, and the production of metabolic water. Further, the volume, colloid concentration, and tissue pressure of/in each compartment can be calculated by solving 40-dimensional non-linear simultaneous differential equations. In this research, the obtained model was analyzed for validation under four conditions: a normal adult, an adult with higher cerebral capillary pressure, an adult with lower cerebral capillary pressure, and an adult with lower colloid concentration in the cerebral capillaries. In the results, the calculated fluid flow, tissue volume, colloid concentration, and tissue pressure all converged to suitable values for the set condition within 60 minutes at a maximum. Also, because these results did not conflict with prior knowledge, it is certain that the model can adequately represent the physiological state of the brain, at least under such limited conditions. One of the next challenges is to integrate this model with the already constructed cerebral blood circulation model. This modification will enable more precise simulation of CBF and ICP by calculating the effect of blood pressure changes on extracellular fluid migration and the effect of ICP changes on CBF.
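
A generic sketch of how such a compartmental fluid-migration model can be integrated numerically; this is a toy two-compartment, pressure-driven exchange with assumed parameters, not the authors' fourteen-compartment, 40-equation model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: tissue-fluid volumes V1, V2 of two compartments exchanging fluid
# through conductances, driven by pressure differences. Pressure is assumed to
# rise with volume (elastance E) because the total intracranial volume is
# nearly constant; all parameters are placeholders for illustration only.
E1, E2 = 0.8, 1.2          # mmHg/mL, compartment elastances
G12 = 0.05                 # mL/(min*mmHg), inter-compartment conductance
Q_in, G_out = 0.35, 0.01   # capillary filtration inflow (mL/min) and CSF absorption conductance
P_sink = 8.0               # mmHg, venous/absorption sink pressure

def rhs(t, V):
    V1, V2 = V
    P1, P2 = E1 * V1, E2 * V2            # compartment pressures
    q12 = G12 * (P1 - P2)                # flow from compartment 1 to compartment 2
    q_out = G_out * (P2 - P_sink)        # absorption out of compartment 2
    return [Q_in - q12, q12 - q_out]

sol = solve_ivp(rhs, (0.0, 60.0), y0=[12.0, 10.0], max_step=0.1)  # 60-minute run
print(sol.y[:, -1])   # compartment volumes relax toward a steady state
```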

Keywords: dynamic model, cerebral extracellular migration, brain resuscitation, automatic control

Procedia PDF Downloads 136
5939 The Influence of Bentonite on the Rheology of Geothermal Grouts

Authors: A. N. Ghafar, O. A. Chaudhari, W. Oettel, P. Fontana

Abstract:

This study is a part of the EU project GEOCOND - Advanced materials and processes to improve performance and cost-efficiency of shallow geothermal systems and underground thermal storage. In heat exchange boreholes, to improve the heat transfer between the pipes and the surrounding ground, the space between the pipes and the borehole wall is normally filled with geothermal grout. Traditionally, bentonite has been a crucial component in most commercially available geothermal grouts to assure the required stability and impermeability. The investigations conducted in the early stage of this project, during benchmarking tests on some commercial grouts, showed considerable sensitivity of the rheological properties of the tested grouts to the mixing parameters, i.e., mixing time and velocity. Further studies on this matter showed that bentonite, one of the important constituents in most grout mixes, was probably responsible for such behavior. Apparently, a proper amount of shear must be applied during the mixing process to sufficiently activate the bentonite. The higher the applied shear, the more the bentonite is activated, resulting in a change in the grout rheology. This explains why, occasionally in field applications, the flow properties of commercially available geothermal grouts mixed under different conditions (mixer type, mixing time, mixing velocity) are completely different than expected. A series of tests was conducted on grout mixes, with and without bentonite, using different mixing protocols. The aim was to eliminate or reduce the sensitivity of the rheological properties of the geothermal grouts to the mixing parameters by replacing bentonite with polymeric (non-clay) stabilizers. The results showed that by replacing bentonite with a proper polymeric stabilizer, the sensitivity of the grout mix to mixing time and velocity was to a great extent diminished. This can be considered an alternative for developers and producers of geothermal grouts to provide enhanced materials with less uncertainty in the results obtained in field applications.

Keywords: flow properties, geothermal grout, mixing time, mixing velocity, rheological properties

Procedia PDF Downloads 110
5938 Iranian Sexual Health Needs from the Viewpoint of Policy Makers: A Qualitative Study

Authors: Mahnaz Motamedi, Mohammad Shahbazi, Shahrzad Rahimi-Naghani, Mehrdad Salehi

Abstract:

Introduction: Identifying sexual health needs, developing appropriate plans, and delivering services to meet those needs is an essential component of health programs for women, men, and children all over the world, especially in poor countries. Main Subject: The aim of this study was to describe sexual health needs from the viewpoint of health policymakers in Iran. Methods: A qualitative study using thematic content analysis was designed and conducted. Data gathering was conducted through semi-structured, in-depth interviews with 25 key informants within the healthcare system. Key informants were selected through both purposive and snowball sampling. MAXQDA software (version 10) was used to facilitate transcription, classification of codes, and conversion of data into meaningful units through the processes of reduction and compression. Results: The analysis of the narratives and information categorized sexual health needs into five categories: culturalization of sexual health discourse, sexual health care services, sexual health educational needs, sexual health research needs, and organizational needs. Conclusion: Identifying and explaining sexual health needs is an important factor in determining the priority of sexual health programs and in identifying barriers to meeting these needs. This can help other policymakers and health planners to develop appropriate programs to promote sexual and reproductive health.

Keywords: sexual health, sexual health needs, policy makers, health system, qualitative study

Procedia PDF Downloads 202
5937 Designing Functional Knitted and Woven Upholstery Textiles with SCOBY Film

Authors: Manar Y. Abd El-Aziz, Alyaa E. Morgham, Amira A. El-Fallal, Heba Tolla E. Abo El Naga

Abstract:

Different textile materials are usually used in upholstery. However, upholstery parts may become unhealthy when dust accumulates and bacteria grow on the surface, which negatively affects the user's health. Leather and artificial leather have also been used in upholstery, but leather has a high cost and artificial leather poses a potential chemical risk to users. Researchers have advanced vegan leather made from bacterial cellulose produced by a symbiotic culture of bacteria and yeast (SCOBY). The SCOBY forms a gelatinous cellulose biofilm floating at the air-liquid interface of the culture container. This leather, however, still needs some enhancement of its mechanical properties. This study aimed to prepare SCOBY, produce bamboo rib-knitted fabrics with two different stitch densities and a cotton woven fabric, and then laminate these fabrics with the prepared SCOBY film to enhance the mechanical properties of the SCOBY leather and, at the same time, add an anti-microbial function to the prepared fabrics. Laboratory tests were conducted on the produced samples, including tests for functional properties (anti-microbial activity, thermal conductivity, and light transparency), physical properties (thickness and mass per unit area), and mechanical properties (elongation, tensile strength, Young's modulus, and peel force). The results showed that the fabric type significantly affected the SCOBY properties. According to the test results, the bamboo knitted fabric with the higher stitch density laminated with SCOBY was chosen, for its tensile strength and elongation, as the upholstery of a bed model with anti-microbial properties and comfort in the headrest design. Also, a single layer of SCOBY was chosen, for its light transparency and lower thermal conductivity, for a lighting unit built into the bed headboard.

Keywords: anti-microbial, bamboo, rib, SCOBY, upholstery

Procedia PDF Downloads 48
5936 Compatibility of Sulphate Resisting Cement with Super and Hyper-Plasticizer

Authors: Alper Cumhur, Hasan Baylavlı, Eren Gödek

Abstract:

The use of superplasticizing chemical admixtures in concrete production is widespread all over the world and has become almost inevitable. Super-plasticizers (SPA) extend the setting time of concrete by adsorbing onto cement particles and enable the concrete to preserve its fresh-state workability. Hyper-plasticizers (HPA), as a special type of superplasticizer, enable the production of qualified concretes by effectively increasing the workability of the concrete. However, the compatibility of cement with super- and hyper-plasticizers is quite important for achieving efficient workability in order to produce qualified concretes. In 2011, the EN 197-1 standard was revised and the cement classifications were updated. In this study, the compatibility of a hyper-plasticizer and CEM I SR0 type sulphate resisting cement (SRC), first classified in EN 197-1, is investigated. Within the scope of the experimental studies, a reference cement mortar was designed with a water/cement ratio of 0.50 conforming to EN 196-1. The fresh unit density of the mortar was measured, and the spread diameters (at 0, 60, and 120 min after mix preparation) and setting time of the reference mortar were determined with flow table and Vicat tests, respectively. Three mortars are re-prepared using both super- and hyper-plasticizers conforming to ASTM C494 at 0.50, 0.75, and 1.00% of cement weight. The fresh unit densities, spread diameters, and setting times of the super- and hyper-plasticizer-added mortars (SPM, HPM) will be determined. Theoretical air-entrainment values of both SPMs and HPMs will be calculated by taking the differences between the densities of the plasticizer-added mortars and the reference mortar. The flow table and Vicat tests will be repeated on these mortars and the results compared. In conclusion, the compatibility of SRC with SPA and HPA will be investigated. It is expected that the optimum dosages of SPA and HPA will be determined for providing the required workability and setting conditions of SRC mortars, and the advantages/disadvantages of both SPA and HPA will be discussed.

Keywords: CEM I SR0, hyper-plasticizer, setting time, sulphate resisting cement, super-plasticizer, workability

Procedia PDF Downloads 203
5935 Muslim Women and Gender Justice Facts and Reality: An Indian Scenario

Authors: Asmita A. Vaidya, Shahista S. Inamdar

Abstract:

Society is dynamic, and Indian Muslim women are no exception to these processes of social change and development. Islam elevated their status from that of chattels/commodities to individual human beings with separate legal personality, equal to that of men. In India, however, even two women are not equal in availing their matrimonial rights and remedies, as separate personal laws apply to them; thus, gender justice remains a fragile myth.

Keywords: Muslim women, gender justice, polygamy, Islamic jurisprudence, equality

Procedia PDF Downloads 494
5934 Molecular Dynamics Simulation of CO2 Absorption into Mixed Aqueous Solutions of MDEA/PZ

Authors: N. Harun, E. E. Masiren, W. H. W. Ibrahim, F. Adam

Abstract:

The amine absorption process is an approach for mitigation of CO2 from the flue gas produced by power plants. This process is the most common system used in the chemical and oil industries for gas purification to remove acid gases. One of the challenges of this process is the high energy requirement for solvent regeneration to release the CO2. In the past few years, mixed alkanolamines have received increasing attention. In most cases, the mixtures contain N-methyldiethanolamine (MDEA) as the base amine with the addition of one or two more reactive amines such as piperazine (PZ). The reason for the application of such amine blends is to take advantage of the high reaction rate of CO2 with the activator combined with the advantage of the low heat of regeneration of MDEA. Several experimental and simulation studies have been undertaken to understand this process using the blended MDEA/PZ solvent. Despite those studies, the mechanism of CO2 absorption into aqueous MDEA is not well understood, and the available knowledge in the open literature is limited. The aim of this study is to investigate the intermolecular interactions of the MDEA/PZ blend using molecular dynamics (MD) simulation. The MD simulation was run at 313 K and 1 atm using an NVE ensemble for 200 ps and an NVT ensemble for 1 ns. The results were interpreted in terms of radial distribution function (RDF) analysis for two systems of interest, i.e., binary and ternary. The binary system explains the interaction between the amine and water molecules, while the ternary system is used to determine the interaction between the amine and CO2 molecules. For the binary system, it was observed that the -OH group of MDEA is more attracted to water molecules than the -NH group of MDEA. The -OH group of MDEA can form hydrogen bonds with water, which assists the solubility of MDEA in water. The probability of intermolecular interaction of the -OH and -NH groups of MDEA with CO2 in blended MDEA/PZ is higher than with MDEA alone. These findings show that the PZ molecule acts as an activator to promote the intermolecular interaction between MDEA and CO2. Thus, blending MDEA with PZ is expected to increase the absorption rate of CO2 and reduce the heat of regeneration requirement.
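
For reference, a minimal sketch of how a radial distribution function can be estimated from an MD snapshot; this is a NumPy illustration with a random configuration, not the force field or analysis tool used in the study:

```python
import numpy as np

def rdf(positions, box, n_bins=100, r_max=None):
    """Radial distribution function g(r) for particles in a cubic periodic box."""
    n = len(positions)
    r_max = r_max or box / 2.0
    # Minimum-image pairwise distances
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    density = n / box ** 3
    ideal_pairs = 0.5 * n * density * shell_vol   # expected pair count per shell for an ideal gas
    return r, hist / ideal_pairs

# Example: 500 particles placed at random in a 30 Angstrom box (g(r) -> 1 at large r)
rng = np.random.default_rng(0)
r, g = rdf(rng.uniform(0.0, 30.0, size=(500, 3)), box=30.0)
```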

Keywords: amine absorption process, blend MDEA/PZ, CO2 capture, molecular dynamic simulation, radial distribution function

Procedia PDF Downloads 272
5933 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The use of confidence intervals in economics and econometrics is widespread. To be able to investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The critical values table was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strong sides of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At the moment, the extension to the 2-dimensional case has been completed, which allows testing jointly up to 5 parameters. Therefore, the derived technique is equivalent to classic tests in standard situations but gives more efficient alternatives for nonstandard problems and large amounts of data.
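
A rough sketch of the kind of density-difference measure described is given below; the exact statistic and critical-value procedure of the paper are not reproduced, and the parameters are assumed purely for illustration of the general idea:

```python
import numpy as np
from scipy.stats import norm

def abs_density_difference(mu1, sd1, mu2, sd2, grid_points=20001, span=8.0):
    """Integrated absolute difference between two normal densities."""
    lo = min(mu1 - span * sd1, mu2 - span * sd2)
    hi = max(mu1 + span * sd1, mu2 + span * sd2)
    x = np.linspace(lo, hi, grid_points)
    return np.trapz(np.abs(norm.pdf(x, mu1, sd1) - norm.pdf(x, mu2, sd2)), x)

# Simulated critical value under H0 (identical mean and variance), alpha = 5%
rng = np.random.default_rng(0)
n, reps = 100, 2000
stats = []
for _ in range(reps):
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    stats.append(abs_density_difference(a.mean(), a.std(ddof=1), b.mean(), b.std(ddof=1)))
critical = np.quantile(stats, 0.95)
# Joint equality of mean and variance is rejected when the observed measure exceeds `critical`.
```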

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 161
5932 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application

Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior

Abstract:

Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, convincing, etc. These conditions allow bringing the interaction between humans closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, the use of neural networks and a comparison of models, such as recurrent neural networks and deep neural networks, are proposed in order to classify emotions from speech signals and verify the quality of recognition. The aim is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the domestic environment. Tests were performed using only the Mel-frequency cepstral coefficients (MFCCs), as well as tests with several additional features: delta-MFCCs, spectral contrast, and the Mel spectrogram. To carry out the training, validation, and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers from 14 different nationalities speaking the English language. The data in the chosen database are videos that, for use in the neural networks, were converted into audio. As a result, a classification rate of 51.969% correct answers was found when using the deep neural network, while the recurrent neural network achieved a classification accuracy of 44.09%. The results are more accurate when only the Mel-frequency cepstral coefficients are used for classification with the deep neural network; in only one case is greater accuracy observed for the recurrent neural network, which occurs when various features are used together with a batch size of 73 and 100 training epochs.
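
A hedged sketch of the MFCC-plus-deep-network pipeline described is given below; the library calls and network size are illustrative, and the actual feature set, architecture, and eNTERFACE'05 preprocessing used by the authors are not reproduced here:

```python
import numpy as np
import librosa
from tensorflow import keras

def mfcc_features(wav_path, n_mfcc=13):
    """Mean MFCC vector for one utterance (audio extracted from the video files)."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)                                  # collapse the time axis

def build_dnn(n_features=13, n_classes=6):
    """Small fully connected classifier over the pooled MFCC features."""
    model = keras.Sequential([
        keras.layers.Input(shape=(n_features,)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# X: (n_utterances, 13) MFCC means, y: integer labels for the six emotions
# model = build_dnn()
# model.fit(X_train, y_train, batch_size=73, epochs=100, validation_data=(X_val, y_val))
```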

Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks

Procedia PDF Downloads 139
5931 Study Skills Empowering Strategies to Enhance Second Year Diploma Accountancy Students’ Academic Performance

Authors: Mohamed Karodia

Abstract:

Accountancy as a subject is one of the sciences that has for many years been perceived as difficult to study and teach. Yet it continuously attracts scholars graduating from school and entering higher education institutions as a subject of choice and career. The teaching and learning of this subject have not been easy and have evolved and progressed over the past few decades; however, students still find it difficult to study, and this has resulted in poor student achievement. In search of solutions, this study considered the effect and efficacy that study skills have on the performance of accountancy students, in particular students studying for the Second Year Diploma in Accountancy at the University of Johannesburg. These students appear to lack appropriate study skills, and as a result this impacts their performance in the courses they are studying. This study also focuses on strategies to enhance Second Year Diploma Accountancy students' academic performance. A literature review was conducted to investigate what scholarly literature suggests about the study skills needed, in general and in particular for accountancy, to be successful. In order to determine what study skills Second Year Accountancy students apply when they learn, and why they fail the accountancy examinations and formal class tests, the study adopted the quantitative research method. A questionnaire addressing various aspects of study skills, studying accountancy, and studying in general was provided to 800 students studying the Second Year Diploma in Accountancy at the University of Johannesburg's Soweto Campus. The quantitative data collected were analyzed using descriptive statistics in the form of proportions, frequencies, means, and standard deviations, t-tests to compare differences between two groups, as well as correlations between variables. Based on the findings of this study, it is recommended that students be provided with courses in time management, procrastination, reading, note-taking and writing, test preparation techniques, as well as study attitude. Lecturers should spend more time teaching students how to study in general, as well as accountancy specifically, preferably at the first-year level before proceeding to the second year. It is also recommended that the University implement a study skills course to assist the students with studying.

Keywords: accountancy, skills, strategies, study

Procedia PDF Downloads 117
5930 Lactate in Critically Ill Patients: An Outcome Marker over Time

Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah

Abstract:

Introduction: Static derangements in lactate homeostasis during the ICU stay have become established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, i.e., kinetic alterations reflecting anaerobic metabolism, make lactate a potential parameter to evaluate disease severity and intervention adequacy. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: To compare the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with the more commonly used static values. Patients and Methods: This study included 40 critically ill patients above 18 years old of both sexes with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which included all causes of low O2 transport. Six lactate indices relating specifically to the first 24 hours of ICU admission were considered: three static indices and three dynamic indices. Results: There were no statistically significant differences between the two groups regarding age, most laboratory results including ABG, and the need for mechanical ventilation. Admission lactate was significantly higher in the low oxygen transport group than in the septic group [37.5±11.4 versus 30.6±7.8, P-value 0.034]. Maximum lactate was significantly higher in the low oxygen transport group than in the septic group (P-value 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (P-value < 0.001). The percentage change of lactate was higher in the septic group (47.8±11.3) than in the low oxygen transport group (26.1±12.6), with a highly significant P-value (< 0.001). Lastly, time-weighted lactate was higher in the low oxygen transport group (1.72±0.81) than in the septic group (1.05±0.8), with a significant P-value (0.012). There were statistically significant differences regarding lactate indices between survivors and non-survivors, whether in the septic or the low oxygen transport group. Conclusion: In critically ill patients, time-weighted lactate and percent lactate change in the first 24 hours can be independent predictive factors of ICU mortality. Also, a rising compared with a falling blood lactate concentration over the first 24 hours is associated with a significant increase in the risk of mortality.
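
For clarity, a small sketch of how the dynamic indices mentioned (percentage change and time-weighted lactate over the first 24 hours) could be computed from serial measurements; the formulas below are common definitions assumed for illustration, not necessarily the exact ones used in this study:

```python
import numpy as np

def lactate_indices(times_h, lactate):
    """Static and dynamic lactate indices over the first 24 h of ICU admission."""
    t, lac = np.asarray(times_h, float), np.asarray(lactate, float)
    admission, maximum = lac[0], lac.max()
    absolute_change = lac[0] - lac[-1]
    percent_change = 100.0 * (lac[0] - lac[-1]) / lac[0]
    time_weighted = np.trapz(lac, t) / (t[-1] - t[0])   # area under the curve / duration
    return {"admission": admission, "max": maximum, "absolute_change": absolute_change,
            "percent_change": percent_change, "time_weighted": time_weighted}

# Example: lactate (mmol/L) sampled at 0, 6, 12, 18 and 24 hours
print(lactate_indices([0, 6, 12, 18, 24], [4.1, 3.4, 2.8, 2.2, 1.9]))
```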

Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism

Procedia PDF Downloads 225
5929 A Simple Adaptive Atomic Decomposition Voice Activity Detector Implemented by Matching Pursuit

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decompositions of speech for high Gaussian noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for the signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no significant statistical differences. The algorithm achieves 70% accuracy at 0 dB SNR, 90% accuracy at 5 dB SNR, and 98% accuracy at 20 dB SNR, using 30 dB SNR as a reference for voice activity.
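
As an illustration of the decomposition step, a minimal matching pursuit sketch over a Gabor dictionary follows; the window shape, frequency spacing, and the energy-based voice decision are assumptions, not the authors' detector:

```python
import numpy as np

def gabor_dictionary(n, freqs, sr):
    """Unit-norm Gaussian-windowed sinusoids (Gabor atoms) spanning one frame."""
    t = np.arange(n)
    window = np.exp(-0.5 * ((t - n / 2) / (n / 6)) ** 2)
    atoms = np.array([window * np.cos(2 * np.pi * f * t / sr) for f in freqs])
    return atoms / np.linalg.norm(atoms, axis=1, keepdims=True)

def matching_pursuit(frame, atoms, n_iter=20):
    """Greedy selection of the atoms with the largest inner products."""
    residual = frame.astype(float).copy()
    recon = np.zeros_like(residual)
    for _ in range(n_iter):
        coeffs = atoms @ residual                  # inner products with all atoms
        k = int(np.argmax(np.abs(coeffs)))
        recon += coeffs[k] * atoms[k]              # keep the most active atom
        residual -= coeffs[k] * atoms[k]           # and remove it from the residual
    return recon, residual

# Toy voice-activity decision: speech-like frames keep most of their energy in the reconstruction
sr, n = 8000, 256
freqs = np.geomspace(100, 3500, 32)                # logarithmically spaced center frequencies
atoms = gabor_dictionary(n, freqs, sr)
frame = np.sin(2 * np.pi * 440 * np.arange(n) / sr) + 0.5 * np.random.randn(n)
recon, residual = matching_pursuit(frame, atoms)
is_voice = np.sum(recon ** 2) / np.sum(frame ** 2) > 0.5   # assumed decision threshold
```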

Keywords: atomic decomposition, gabor, gammatone, matching pursuit, voice activity detection

Procedia PDF Downloads 274
5928 Interpretable Deep Learning Models for Medical Condition Identification

Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji

Abstract:

Accurate prediction of a medical condition with direct clinical evidence is a long-sought goal in the medical management and health insurance field. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. The model uses a hierarchical attention structure that matches the medical history data structure naturally and reflects the member's encounter (date of service) sequence. The attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, and (3) attention on the medical codes within an encounter and type. The model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years of medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3 and calculates the contribution of individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Here are examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts that member A has a high risk of CKD3 based on the following well-contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple low 'Glomerular filtration rate' (kidney function) tests. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings on medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model did not identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its rawest format, without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model that uses a 3-level attention structure, sources both EMR and claim data covering all 4 types of medical data, runs on the entire Medicare population of a big insurance company, and, more importantly, directly generates model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
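
To make the three-level pooling concrete, here is a minimal numpy sketch of hierarchical attention over codes, encounters, and code types. The embedding size, scoring function, toy data shapes, and variable names are assumptions for illustration and do not reproduce the authors' architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(embeddings, w):
    """Score each row with a learned vector w, softmax the scores, and return
    the attention-weighted sum plus the weights (usable as evidence)."""
    scores = embeddings @ w
    weights = softmax(scores)
    return weights @ embeddings, weights

rng = np.random.default_rng(0)
d = 16                                        # embedding size (illustrative)
w_code, w_enc, w_type = rng.normal(size=(3, d))

# Toy member history: 4 code types -> encounters -> code embeddings
member = [
    [rng.normal(size=(5, d)) for _ in range(3)],   # diagnosis codes, 3 encounters
    [rng.normal(size=(2, d)) for _ in range(4)],   # procedure codes, 4 encounters
    [rng.normal(size=(6, d)) for _ in range(3)],   # lab test results
    [rng.normal(size=(3, d)) for _ in range(2)],   # prescription drugs
]

type_vectors = []
for encounters in member:
    # level 3: attend over the codes within each encounter
    enc_vectors = np.array([attention_pool(codes, w_code)[0] for codes in encounters])
    # level 2: attend over the encounter sequence within a code type
    type_vec, enc_weights = attention_pool(enc_vectors, w_enc)
    type_vectors.append(type_vec)

# level 1: attend over the four code types, then a linear read-out for risk
member_vec, type_weights = attention_pool(np.array(type_vectors), w_type)
risk_logit = member_vec @ rng.normal(size=d)
print(type_weights)    # attention weights per code type, read as model evidence
```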

Keywords: deep learning, interpretability, attention, big data, medical conditions

Procedia PDF Downloads 78
5927 Hardware Implementation of Local Binary Pattern Based Two-Bit Transform Motion Estimation

Authors: Seda Yavuz, Anıl Çelebi, Aysun Taşyapı Çelebi, Oğuzhan Urhan

Abstract:

Nowadays, the demand for devices capable of real-time video transmission is ever-increasing, and high-resolution video has made efficient video compression techniques an essential component of capturing and transmitting video data. Motion estimation plays a critical role in encoding raw video, and various motion estimation methods have been introduced to compress video efficiently. Motion estimation methods based on low bit-depth representations simplify the computation of the matching criterion and thus yield a small hardware footprint. In this paper, a hardware implementation of a two-bit-transform-based low-complexity motion estimation method using a local binary pattern approach is proposed. Image frames are represented at two-bit depth instead of full depth by using the local binary pattern as the binarization approach, and the binarization part of the hardware architecture is explained in detail. Experimental results compare the proposed hardware architecture with the architectures of well-known low-complexity motion estimation methods in terms of important aspects such as resource utilization, energy, and power consumption.
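
For illustration, a minimal Python sketch of a two-bit binarization and a low-complexity matching criterion follows. The exact bit definitions here (local-mean comparison plus a simplified local-binary-pattern cue) and the window size are assumptions; the paper's binarization rules may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_bit_transform_lbp(frame, window=9):
    """Illustrative two-bit representation of a frame:
    bit 0 compares each pixel against its local mean; bit 1 is a simplified
    local-binary-pattern test against the 4 direct neighbours."""
    f = frame.astype(float)
    local_mean = uniform_filter(f, size=window)
    bit0 = (f > local_mean).astype(np.uint8)

    # Simplified LBP cue: pixel brighter than the majority of its 4 neighbours
    neighbours = np.stack([np.roll(f, 1, 0), np.roll(f, -1, 0),
                           np.roll(f, 1, 1), np.roll(f, -1, 1)])
    bit1 = ((f > neighbours).sum(axis=0) >= 3).astype(np.uint8)
    return bit0, bit1

def nnmp(block_a, block_b):
    """Number of non-matching points between two two-bit blocks (XOR + OR),
    a low-complexity matching criterion used in place of SAD."""
    (a0, a1), (b0, b1) = block_a, block_b
    return int(np.count_nonzero((a0 ^ b0) | (a1 ^ b1)))
```

In a full-search motion estimation loop, nnmp would be evaluated for every candidate displacement of a block within the search window, and the displacement with the smallest count would be selected as the motion vector.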

Keywords: binarization, hardware architecture, local binary pattern, motion estimation, two-bit transform

Procedia PDF Downloads 284
5926 The Processing of Context-Dependent and Context-Independent Scalar Implicatures

Authors: Liu Jia’nan

Abstract:

Default accounts hold that there is a kind of scalar implicature that can be processed without context and that enjoys a psychological privilege over scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as indispensable, because all scalar implicatures have to meet the requirement of relevance in discourse. However, in Katsos' experiments, although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and ad hoc scales (context-dependent) at almost the same rate, they still judged violations in utterances with lexical scales to be much more severe than those with ad hoc scales. Neither default accounts nor Relevance Theory can fully explain this result. Two questions therefore arise: (1) Is it possible that this strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among the target objects in the under-informative utterance? In our Experiment 1, question (1) is addressed by a replication of Katsos' experiment. Test materials are presented in PowerPoint in the form of pictures, and each procedure is carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The pictorial test materials are converted into literal words in DMDX, and the target sentences are presented word-by-word to participants in the soundproof room of our lab. Reading times of the target parts, i.e. the words carrying the scalar implicatures, are recorded. We presume that in the lexical-scale group, a standardized pragmatic mental context helps generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be needed for the extra semantic processing. In the ad hoc-scale group, however, the scalar implicature can hardly be generated without the support of a fixed mental context for the scale. Whether the new input is informative or not therefore does not matter, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, it may shed light on the interplay of default accounts and context factors in scalar implicature processing. Based on our experiments, we may be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicatures, the semantic interpretation and the pragmatic interpretation may interact dynamically in the mind. For the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead a default or standardized paradigm to override the role of context. The objects in an ad hoc scale, however, are not usually treated as scale members in mental context, so the lexical-semantic association of the objects may prevent the pragmatic reading from generating a scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.

Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing

Procedia PDF Downloads 299
5925 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Paralympic Team

Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez

Abstract:

The training of cyclists with a disability finds an indispensable ally in technological development, which generates advances every day that contribute to quality of life and allow athletes to maximize their capacities. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study focuses on the description of a dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and the sprocket of the track bicycle. The parametric scheme of the track bike represents a model with 6 degrees of freedom, given by the X-Y displacement of each reference point as a function of the curve profile angle β, the velodrome cant α and the crank rotation angle φ. The force exerted on the crank of the bicycle varies with the curve profile angle β, the velodrome cant α and the crank rotation angle φ. The behavior is analyzed with the Matlab R2015a software. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement is determined, the dynamic modeling of the transtibial prosthesis follows; it is a model with 6 degrees of freedom, with X-Y displacement as a function of the rotation angles of the hip π, knee γ and ankle λ. Subsequently, the kinematic behavior of the prosthesis was analyzed with SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip π, knee γ and ankle λ angles of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and to the socket. The reaction force in the parts of the prosthesis varies with the hip π, knee γ and ankle λ angles of the prosthesis. It can therefore be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competition are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
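
As a small illustration of the planar force summation mentioned above, the following Python sketch balances the per-crank force reported in the abstract against a reaction at the ankle. The geometry and the assumption that the pedal force acts along the crank's tangential direction are illustrative only and do not reproduce the authors' Matlab model.

```python
import numpy as np

# Force split reported in the abstract: the total pedalling force is shared
# between both cranks, so each crank carries roughly half of it.
total_crank_force = 1607.1                    # N, average force on the cranks
force_per_crank = total_crank_force / 2.0     # ~803.6 N per crank

def ankle_reaction(crank_angle_rad, pedal_force=force_per_crank):
    """Toy planar force balance at the prosthetic ankle for one crank angle.
    The pedal force is assumed tangential to the crank; static equilibrium
    (sum of forces on X and Y equal to zero) gives the ankle reaction."""
    fx = pedal_force * np.cos(crank_angle_rad)
    fy = pedal_force * np.sin(crank_angle_rad)
    return -fx, -fy                            # equal and opposite reaction

for phi_deg in (0, 45, 90, 135, 180):
    rx, ry = ankle_reaction(np.radians(phi_deg))
    print(f"crank angle {phi_deg:3d} deg -> reaction X {rx:8.1f} N, Y {ry:8.1f} N")
```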

Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis

Procedia PDF Downloads 312
5924 Enhancement of Recycled Concrete Aggregates Properties by Mechanical Treatment and Verification in Concrete Mixes with Replacement up to 100%

Authors: Iveta Nováková, Martin-Andrè S. Husby, Boy-Arne Buyle

Abstract:

The building industry makes one of the most significant contributions to global warming, through the production of building materials, transportation, building activities, and the demolition of structures when they reach the end of their life. Implementing circular material flows and a circular economy can significantly reduce greenhouse gas emissions and, at the same time, reduce the need for natural resources. The use of recycled concrete aggregates (RCA) is one possibility for reducing the depletion of raw materials for concrete production. Concrete is the most used building material worldwide, and aggregates constitute 70% of its volume. RCA can replace a certain amount of natural aggregates (NA), and the concrete will still perform as required. The aim of this paper is to evaluate the properties of RCA with and without mechanical treatment. Analysis of the RCA itself is followed by compressive strength tests of concrete containing various amounts of treated and non-treated RCA. The results showed an improvement in the compressive strength of the mix with mechanically treated RCA compared to standard RCA; the strength of concrete in which mechanically treated RCA replaced 50% of the coarse aggregates was even 4% higher than that of the reference mix. Based on the obtained results, it can be concluded that integration of RCA into industrial concrete production is feasible, at a replacement ratio of 50% for mechanically treated RCA and 30% for untreated RCA, without negatively affecting the compressive strength.

Keywords: recycled concrete aggregates, mechanical treatment, aggregate properties, compression strength

Procedia PDF Downloads 218