Search results for: machine performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14491

8131 Selective Laser Melting (SLM) Process and Its Influence on the Machinability of TA6V Alloy

Authors: Rafał Kamiński, Joel Rech, Philippe Bertrand, Christophe Desrayaud

Abstract:

Titanium alloys are among the most important materials in the aircraft industry, due to their low density, high strength, and corrosion resistance. However, these alloys are considered difficult to machine because they have poor thermal properties and high reactivity with cutting tools. The Selective Laser Melting (SLM) process has become increasingly popular in industry, since it enables the design of new complex components that cannot be manufactured by standard processes. However, the high temperature reached during the melting phase, as well as the many rapid heating and cooling cycles caused by the movement of the laser, induce complex microstructures. These microstructures differ from the conventional equiaxed ones obtained by casting and forging. Parts obtained by SLM have to be machined in order to calibrate the dimensions and the surface roughness of functional surfaces. The ball milling technique is widely applied to finish complex shapes. However, the machinability of titanium is strongly influenced by the microstructure. The objective of this work is therefore to investigate the influence of the SLM process, i.e. the microstructure, on the machinability of titanium, compared to conventional forming processes. The machinability is analyzed by measuring surface roughness, cutting forces, and cutting tool wear for a range of cutting conditions (depth of cut ap, feed per tooth fz, spindle speed N) in accordance with industrial practices.

Keywords: ball milling, microstructure, surface roughness, titanium

Procedia PDF Downloads 280
8130 A Fuzzy-Rough Feature Selection Based on Binary Shuffled Frog Leaping Algorithm

Authors: Javad Rahimipour Anaraki, Saeed Samet, Mahdi Eftekhari, Chang Wook Ahn

Abstract:

Feature selection and attribute reduction are crucial problems and widely used techniques in the fields of machine learning, data mining, and pattern recognition, employed to overcome the well-known curse of dimensionality. This paper presents a feature selection method that efficiently carries out attribute reduction, selecting the most informative features of a dataset. It consists of two components: 1) a measure for feature subset evaluation, and 2) a search strategy. For the evaluation measure, we have employed the fuzzy-rough dependency degree (FRDD) of the lower approximation-based fuzzy-rough feature selection (L-FRFS) due to its effectiveness in feature selection. As the search strategy, a modified version of the binary shuffled frog leaping algorithm (B-SFLA) is proposed. The proposed feature selection method is obtained by hybridizing the B-SFLA with the FRDD. Nine classifiers have been employed to compare the proposed approach with several existing methods over twenty-two datasets, including nine high-dimensional and large ones, from the UCI repository. The experimental results demonstrate that the B-SFLA approach significantly outperforms other metaheuristic methods in terms of the number of selected features and the classification accuracy.
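The search component of such a method — ranking frogs, dealing them into memeplexes, and moving each memeplex's worst frog toward its best, with a random reset when no improvement is found — can be sketched as follows. This is a simplified illustration in Python, not the authors' implementation: the fuzzy-rough dependency degree is replaced by a toy fitness function, and the binarization rule (copying the best frog's bits with probability 0.5) is one common choice among several.

```python
import random

def binary_sfla(n_features, fitness, n_frogs=20, n_memeplexes=4,
                n_iters=30, n_local=5, seed=0):
    """Simplified binary shuffled frog leaping search over feature subsets.

    Each frog is a tuple of 0/1 flags (1 = feature selected); `fitness`
    maps such a tuple to a score to be maximized.
    """
    rng = random.Random(seed)
    frogs = [tuple(rng.randint(0, 1) for _ in range(n_features))
             for _ in range(n_frogs)]
    for _ in range(n_iters):
        # Shuffle step: rank all frogs, deal them round-robin into memeplexes.
        frogs.sort(key=fitness, reverse=True)
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            for _ in range(n_local):
                best, worst = mem[0], mem[-1]
                # Move the worst frog toward the memeplex best by copying
                # each of the best frog's bits with probability 0.5.
                new = tuple(b if rng.random() < 0.5 else w
                            for b, w in zip(best, worst))
                if fitness(new) > fitness(worst):
                    mem[-1] = new
                else:
                    # Censoring: replace the stuck frog with a random one.
                    mem[-1] = tuple(rng.randint(0, 1)
                                    for _ in range(n_features))
                mem.sort(key=fitness, reverse=True)
        frogs = [f for mem in memeplexes for f in mem]
    return max(frogs, key=fitness)

# Toy fitness standing in for the fuzzy-rough dependency degree: reward
# subsets that include informative features 0 and 2, penalize subset size.
def toy_fitness(subset):
    return 2 * subset[0] + 2 * subset[2] - 0.1 * sum(subset)

best = binary_sfla(6, toy_fitness)
```

Because the best frog of each memeplex is never the one replaced, the incumbent best subset survives every iteration, which is what lets the search converge on small, informative subsets.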

Keywords: binary shuffled frog leaping algorithm, feature selection, fuzzy-rough set, minimal reduct

Procedia PDF Downloads 201
8129 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have shown promise in achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied to regional hydrological prediction at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, northern Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, identifying optimal regional values across catchments. The results show that predictions are highly accurate, with Nash-Sutcliffe efficiency (NSE) and Kling-Gupta efficiency (KGE) values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to prediction performance, suggesting that input sequence length has a crucial impact on model accuracy. Moreover, catchment-scale analysis reveals distinct optimal sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter to each catchment's characteristics, in line with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in streamflow prediction: initially, it was set to 365 days to capture a full annual water cycle, and later, limited systematic tuning using grid search suggested a value of 270 days. However, despite the significance of this hyperparameter in hydrological predictions, most studies have overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
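The tuning loop described here — scoring candidate input sequence lengths on held-out data with the NSE metric — can be sketched as below. This is an illustrative Python sketch, not the study's MTS-LSTM code: a linear least-squares forecaster stands in for the LSTM so the example runs without deep learning dependencies, and the synthetic "streamflow" series is hypothetical.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 minus error variance over obs variance."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def make_windows(series, seq_len):
    """Turn a 1-D series into (lagged-window, next-value) pairs."""
    X = np.stack([series[i:i + seq_len]
                  for i in range(len(series) - seq_len)])
    return X, series[seq_len:]

def tune_seq_len(series, candidates, split=0.7):
    """Grid-search the input sequence length, scoring held-out NSE."""
    n_train = int(len(series) * split)
    scores = {}
    for seq_len in candidates:
        X, y = make_windows(series, seq_len)
        k = n_train - seq_len  # windows that fit entirely in the train span
        coef, *_ = np.linalg.lstsq(X[:k], y[:k], rcond=None)
        scores[seq_len] = nse(X[k:] @ coef, y[k:])
    return max(scores, key=scores.get), scores

# Synthetic "hourly streamflow" with a 24-step daily cycle plus noise.
t = np.arange(2000)
flow = (np.sin(2 * np.pi * t / 24)
        + 0.05 * np.random.default_rng(0).normal(size=t.size))
best_len, scores = tune_seq_len(flow, candidates=[3, 12, 24, 48])
```

Swapping the linear forecaster for an LSTM changes only the model-fitting line; the outer loop over candidate sequence lengths, and the held-out NSE scoring, stay the same.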

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 40
8128 Cognitive Effects of Repetitive Transcranial Magnetic Stimulation in Patients with Parkinson's Disease

Authors: Ana Munguia, Gerardo Ortiz, Guadalupe Gonzalez, Fiacro Jimenez

Abstract:

Parkinson's disease (PD) is a neurodegenerative disorder that causes motor and cognitive symptoms. The first-choice treatment for these patients is pharmacological, but it generates several side effects. Because of this, new treatments such as repetitive transcranial magnetic stimulation (rTMS) were introduced to improve the patients' quality of life. Several studies suggest significant changes in motor symptoms; however, there is great diversity in the number of pulses, amplitude, frequency, and stimulation targets, which results in inconsistent data. In addition, these studies do not analyze the neuropsychological effects of the treatment. The main purpose of this study is to evaluate the impact of rTMS on the cognitive performance of 6 patients with H&Y stages III and IV (45-65 years, 3 men and 3 women). An initial neuropsychological and neurological evaluation was performed. Patients were randomized into two groups: in the first phase, one group received rTMS over the supplementary motor area and the other over the dorsolateral prefrontal cortex contralateral to the most affected hemibody; in the second phase, each group received stimulation over the area that had not been stimulated previously. Reassessments were carried out at the beginning and at the end of each phase, and a follow-up was conducted 6 months after the conclusion of the stimulation. These preliminary results show no statistically significant difference in the patients' neuropsychological test scores before and after receiving rTMS, which suggests that the patients' cognitive performance is not adversely affected. There are even tendencies toward an improvement in executive functioning after the treatment, which, together with the motor improvement, showed positive effects on the patients' activities of daily living.
In a later, more detailed analysis, the effects will be evaluated for each patient separately in relation to their functionality in daily life.

Keywords: Parkinson's disease, rTMS, cognitive, treatment

Procedia PDF Downloads 131
8127 Taguchi-Based Six Sigma Approach to Optimize Surface Roughness for Milling Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to improve the surface roughness of a part produced on a CNC milling machine. It presents a case study in which the surface roughness of milled aluminum had to be improved to reduce or eliminate defects and to raise the process capability indices Cp and Cpk of a CNC milling process. The Six Sigma DMAIC (define, measure, analyze, improve, and control) approach was applied in this study to improve the process, reduce defects, and ultimately reduce costs. A Taguchi-based Six Sigma approach was applied to identify the optimized processing parameters that achieve the target surface roughness specified by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of feed rate, depth of cut, spindle speed, and surface roughness. The noise factor is the difference between the old cutting tool and the new cutting tool. A confirmation run with the optimal parameters verified that the new parameter settings are correct; the new settings also improved the process capability index. This study shows that the Taguchi-based Six Sigma approach can be used efficiently to phase out defects and improve the process capability index of a CNC milling process.
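The capability indices Cp and Cpk mentioned above, together with Taguchi's smaller-the-better signal-to-noise ratio (the natural choice for a surface roughness response), can be computed as follows. The roughness readings and spec limits below are hypothetical, for illustration only:

```python
import math
import statistics

def process_capability(samples, lsl, usl):
    """Cp and Cpk from measured values and lower/upper spec limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

def sn_smaller_the_better(samples):
    """Taguchi S/N ratio for a smaller-the-better response such as
    surface roughness: -10 * log10(mean of squared values)."""
    return -10 * math.log10(sum(x * x for x in samples) / len(samples))

# Hypothetical Ra readings (um) from one run, with a 0-1.6 um spec window.
ra = [0.82, 0.88, 0.85, 0.90, 0.86]
cp, cpk = process_capability(ra, lsl=0.0, usl=1.6)
sn = sn_smaller_the_better(ra)
```

Cpk is always at most Cp; the gap between them measures how far the process mean sits from the center of the spec window, which is why improving centering (not just spread) raises Cpk.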

Keywords: CNC machining, six sigma, surface roughness, Taguchi methodology

Procedia PDF Downloads 231
8126 Alternative of Lead-Based Ionization Radiation Shielding Property: Epoxy-Based Composite Design

Authors: Md. Belal Uudin Rabbi, Sakib Al Montasir, Saifur Rahman, Niger Nahid, Esmail Hossain Emon

Abstract:

Radiation shielding protects against the detrimental effects of ionizing radiation by inserting a layer of absorbing material between a radioactive source and the area to be protected. It is a primary concern in several industrial fields that use potent (high-activity) radioisotopes, such as food preservation, cancer treatment, and particle accelerator facilities, and it is essential for users of radiation-emitting equipment to reduce or mitigate radiation damage. Polymer composites (especially epoxy-based ones) with high-atomic-number fillers can replace toxic lead in ionizing radiation shielding applications because of their excellent mechanical properties, superior solvent and chemical resistance, good dimensional stability, good adhesion, and lower toxicity. Being lightweight and offering neutron shielding ability of almost the same order as concrete, epoxy-based shielding is a promising alternative. Adding micro- and nanoparticles to the epoxy resin increases the matrix's radiation shielding capability, and considerable attention has recently been paid to polymeric composites as radiation shielding materials. This research will examine the radiation shielding performance of epoxy-based nano-WO3 reinforced composites; the samples will be prepared using the direct pouring method.

Keywords: radiation shielding materials, ionizing radiation, epoxy resin, tungsten oxide, polymer composites

Procedia PDF Downloads 92
8125 Consumption of Animal and Vegetable Protein on Muscle Power in Road Cyclists from 18 to 20 Years in Bogota, Colombia

Authors: Oscar Rubiano, Oscar Ortiz, Natalia Morales, Lida Alfonso, Johana Alvarado, Adriana Gutierrez, Daniel Botero

Abstract:

Athletes who commonly use protein supplements are those who practice strength and power sports, whose goal is to achieve a large muscle mass. However, supplementation has also been explored in endurance sports such as cycling, where, despite the high power required, prominent muscle development can impede competitive performance because body mass is a determinant of the athlete's performance. Prior work with protein supplements has established a protein-muscle mass relationship and, to a lesser extent, a relationship between protein type and muscle power. Thus, as a first approximation, we explore the behavior of lower-limb muscle power after the intake of two protein supplements from different sources. The aim of the study was to describe the behavior of lower-limb muscle power after the consumption of animal protein (AP) and vegetable protein (VP) in four road cyclists aged 18 to 20 from the Bogota cycling league. The methodological design of this study is quantitative, with non-probabilistic sampling, based on a pre-experimental model. Jumping power was evaluated before and after the intervention by means of the squat jump (SJ), countermovement jump (CMJ), and Abalakov (AB) tests. The cyclists consumed a drink with whey protein or a soy isolate after training four times a week for three months. The amount of protein for each cyclist was calculated according to body weight (0.5 g/kg of muscle mass). The results show that subjects who consumed VP improved muscle power and landing force, whereas power and landing force decreased for subjects who consumed AP. For the group that consumed VP, the increases were 164.26 watts, 135.70 watts, and 33.96 watts for the AB, SJ, and CMJ jumps, respectively, while for AP the differences of the medians were negative: -32.29 watts, -82.79 watts, and -143.86 watts for the AB, SJ, and CMJ jumps, respectively.
The differences of the medians in the AB jump were positive for both VP (121.61 newtons) and AP (454.34 newtons), with the larger difference for AP. For the SJ jump, the difference for AP was 371.52 newtons, while for VP it was negative (-448.56 newtons), so the difference was again greater for AP. In the CMJ jump, the differences of the medians were negative for both AP (-7.05) and VP (-958.2), so the difference was greater for AP. This study concludes that whey protein supplementation showed no improvement in lower-limb muscle power in the cyclists studied, which could suggest that whey protein has no beneficial effect on performance in terms of power; nor did it show an impact on body composition. In contrast, supplementation with soy isolate showed positive effects on muscle power and body composition.

Keywords: animal protein (AP), muscle power, supplements, vegetable protein (VP)

Procedia PDF Downloads 166
8124 Supported Gold Nanocatalysts for CO Oxidation in Mainstream Cigarette Smoke

Authors: Krasimir Ivanov, Dimitar Dimitrov, Tatyana Tabakova, Stefka Kirkova, Anna Stoilova, Violina Angelova

Abstract:

It has been suggested that nicotine, CO, and tar in mainstream smoke are the most important substances and have been judged the most harmful compounds responsible for the health hazards of smoking. As nicotine is extremely important for the smoking qualities of cigarettes, and the tar yield in tobacco smoke has been significantly reduced by filters of various contents and designs, the main efforts of cigarette researchers and manufacturers are directed at opportunities for reducing the CO content. A highly active ceria-supported gold catalyst was prepared by the deposition-precipitation method, and the possibilities for CO oxidation in a synthetic gaseous mixture were evaluated using continuous-flow equipment with a fixed-bed glass reactor at atmospheric pressure. The efficiency of the catalyst for CO oxidation in real cigarette smoke was examined with a single-port, puff-by-puff smoking machine. A quality assessment of smoking using a cigarette holder containing the catalyst was carried out. It was established that the catalytic activity toward CO oxidation in cigarette smoke rapidly decreases, from 70% for the first cigarette to nearly zero for the twentieth. The present study shows that two critical factors prevent the successful use of catalysts to reduce the CO content of mainstream cigarette smoke: (i) the significant influence of the adsorption and oxidation processes on the main characteristics of the tobacco products, and (ii) rapid deactivation of the catalyst as its grains become covered with condensate.

Keywords: cigarette smoke, CO oxidation, gold catalyst, mainstream

Procedia PDF Downloads 205
8123 Photoelectrochemical Water Splitting from Earth-Abundant CuO Thin Film Photocathode: Enhancing Performance and Photo-Stability through Deposition of Overlayers

Authors: Wilman Septina, Rajiv R. Prabhakar, Thomas Moehl, David Tilley

Abstract:

Cupric oxide (CuO) is a promising absorber material for the fabrication of scalable, low-cost solar energy conversion devices, due to the high abundance and low toxicity of copper. It is a p-type semiconductor with a band gap of around 1.5 eV, absorbing a significant portion of the solar spectrum. One of the main challenges in using CuO as a solar absorber in an aqueous system is its tendency toward photocorrosion, generating Cu2O and metallic Cu. Although there have been several reports of CuO as a photocathode for hydrogen production, it is unclear how much of the observed current actually corresponds to H2 evolution, as the inevitability of photocorrosion is usually not addressed. In this research, we investigated the effect of depositing overlayers onto CuO thin films to enhance their photostability as well as their performance in water splitting applications. The CuO thin film was fabricated by galvanic electrodeposition of metallic copper onto gold-coated FTO substrates, followed by annealing in air at 600 °C. Photoelectrochemical measurement of the bare CuO film in 1 M phosphate buffer (pH 6.9) under simulated AM 1.5 sunlight showed a current density of ca. 1.5 mA cm-2 (at 0.4 VRHE), but the film photocorroded to Cu metal upon prolonged illumination. This photocorrosion could be suppressed by depositing a 50 nm-thick TiO2 layer by atomic layer deposition. In addition, we found that inserting an n-type CdS layer, deposited by chemical bath deposition, between the CuO and TiO2 layers significantly enhanced the photocurrent compared to stacks without the CdS layer. A photocurrent of over 2 mA cm-2 (at 0 VRHE) was observed using the photocathode stack FTO/Au/CuO/CdS/TiO2/Pt. Structural, electrochemical, and photostability characterizations of the photocathode, as well as results on various overlayers, will be presented.

Keywords: CuO, hydrogen, photoelectrochemical, photostability, water splitting

Procedia PDF Downloads 203
8122 A Comparison between Shear Bond Strength of VMK Master Porcelain with Three Base-Metal Alloys (Ni-Cr-T3, Verabond, Super Cast) and One Noble Alloy (X-33) in Metal-Ceramic Restorations

Authors: Ammar Neshati, Elham Hamidi Shishavan

Abstract:

Statement of Problem: The increase in the use of metal-ceramic restorations and the high prevalence of porcelain chipping call for an alloy that is more compatible with porcelain and forms a stronger bond with it. This study compares the shear bond strength of three base-metal alloys and one noble alloy with the common VMK Master porcelain. Materials and Method: Three groups of base-metal alloys (Ni-Cr-T3, Super Cast, Verabond) and one group of a noble alloy (X-33) were selected, with 15 specimens in each group. All the groups went through the casting process, converting wax patterns into metal disks. Then, VMK Master porcelain was fired onto each group. All the specimens were placed in a universal testing machine (UTM), and a shear force was applied until fracture occurred; the fracture force was recorded by the machine. The data were entered into SPSS version 16, and a one-way ANOVA was run to compare shear strength between the groups; the groups were also compared pairwise using the Tukey test. Results: The findings of this study revealed that the shear bond strength of the Ni-Cr-T3 alloy was higher than that of the three other alloys (94 MPa or 330 N). The Super Cast alloy had the second greatest shear bond strength (80.87 MPa or 283.87 N). Verabond (69.66 MPa or 245 N) and X-33 (66.53 MPa or 234 N) shared third place. Conclusion: Ni-Cr-T3 with VMK Master porcelain has the greatest shear bond strength. Therefore, the use of this low-cost alloy is recommended in metal-ceramic restorations.
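The one-way ANOVA the authors ran in SPSS reduces to comparing between-group and within-group variance. A minimal Python sketch of the F statistic, using hypothetical shear bond strength values in MPa rather than the study's raw data:

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA over k independent groups:
    between-group mean square divided by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical shear bond strengths (MPa), three specimens per alloy.
ni_cr_t3 = [94, 95, 93]
super_cast = [81, 80, 82]
verabond = [70, 69, 71]
x33 = [67, 66, 68]
f_stat = one_way_anova_f(ni_cr_t3, super_cast, verabond, x33)
```

A large F indicates that at least one group mean differs; a post-hoc test such as Tukey's HSD (available in SciPy as `scipy.stats.tukey_hsd`) is then needed to identify which pairs differ, which is the role the Tukey test plays in the study.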

Keywords: shear bond, base-metal alloy, noble alloy, porcelain

Procedia PDF Downloads 473
8121 A Conceptual Framework of the Individual and Organizational Antecedents to Knowledge Sharing

Authors: Muhammad Abdul Basit Memon

Abstract:

The importance of organizational knowledge sharing and knowledge management has been documented in numerous research studies, since knowledge sharing has been recognized as a founding pillar of superior organizational performance and a source of competitive advantage. Accordingly, most successful organizations treat knowledge management and knowledge sharing as matters of high strategic importance and spend large amounts on the effective management and sharing of organizational knowledge. However, despite some very serious endeavors, many firms fail to capitalize on the benefits of knowledge sharing because they are unaware of the individual characteristics and the interpersonal, organizational, and contextual factors that influence knowledge sharing; in short, the antecedents of knowledge sharing. The extant literature offers a range of such antecedents across numerous research articles and studies. Some previous studies examined antecedents of knowledge sharing in inter-organizational knowledge transfer; others focused on inter- and intra-organizational knowledge sharing, and still others investigated organizational factors. Some organizational antecedents relate to the characteristics of the knowledge being shared, e.g., the specificity and complexity of the underlying knowledge to be transferred; others relate to specific organizational characteristics, e.g., the age and size of the organization, decentralization, and the absorptive capacity of the firm; and still others relate to the social relations and networks of organizations, such as social ties, trusting relationships, and value systems.
In the same way, some researchers have highlighted a single aspect, such as organizational commitment, transformational leadership, knowledge-centred culture, learning and performance orientation, or social network-based relationships in organizations. The bulk of the existing research on antecedents of knowledge sharing has mainly discussed organizational or environmental factors. The focus later shifted toward the analysis of individual or personal determinants of engagement in knowledge sharing activities, such as personality traits, attitude, and self-efficacy. For example, employees' goal orientation (i.e., learning orientation or performance orientation) is an important individual antecedent of knowledge sharing behaviour. Consistent with the existing literature, therefore, the antecedents of knowledge sharing can be classified as individual and organizational. This paper discusses a conceptual framework of the individual and organizational antecedents of knowledge sharing in the light of the available literature and empirical evidence. This model can help readers become familiar with the subject matter by presenting a holistic view of the antecedents of knowledge sharing as discussed in the literature, and it can also help business managers, especially human resource managers, gain insight into the salient features of organizational knowledge sharing. Moreover, this paper provides a basis for research students and academicians to conduct both qualitative and quantitative research and to design a survey instrument on the individual and organizational antecedents of knowledge sharing.

Keywords: antecedents to knowledge sharing, knowledge management, individual and organizational, organizational knowledge sharing

Procedia PDF Downloads 309
8120 Structure-Activity Relationship of Gold Catalysts on Alumina Supported Cu-Ce Oxides for CO and Volatile Organic Compound Oxidation

Authors: Tatyana T. Tabakova, Elitsa N. Kolentsova, Dimitar Y. Dimitrov, Krasimir I. Ivanov, Yordanka G. Karakirova, Petya Cv. Petrova, Georgi V. Avdeev

Abstract:

The catalytic oxidation of CO and volatile organic compounds (VOCs) is considered one of the most efficient ways to reduce harmful emissions from various chemical industries. The effectiveness of gold-based catalysts for many reactions of environmental significance has been proven over the past three decades. The aim of this work was to combine the favorable features of Au and Cu-Ce mixed oxides in the design of new catalytic materials of improved efficiency and economic viability for the removal of air pollutants from the waste gases of formaldehyde production. Supported oxides of copper and cerium with Cu:Ce molar ratios of 2:1 and 1:5 were prepared by wet impregnation of γ-alumina. Gold (2 wt.%) catalysts were synthesized by a deposition-precipitation method. The catalysts were characterized by texture measurements, powder X-ray diffraction, temperature-programmed reduction, and electron paramagnetic resonance spectroscopy. The catalytic activity in the oxidation of CO, CH3OH, and (CH3)2O was measured using continuous-flow equipment with a fixed-bed reactor. Both Cu-Ce/alumina samples demonstrated similar catalytic behavior. The addition of gold caused a significant enhancement of CO and methanol oxidation activity (100% conversion of CO and CH3OH at about 60 and 140 °C, respectively). The composition of the Cu-Ce mixed oxides affected the performance of the gold-based samples considerably: the gold catalyst on Cu-Ce/γ-Al2O3 1:5 exhibited higher activity for CO and CH3OH oxidation than Au on Cu-Ce/γ-Al2O3 2:1. The better performance of Au/Cu-Ce 1:5 was related to the availability of highly dispersed gold particles and copper oxide clusters in close contact with ceria.

Keywords: CO and VOCs oxidation, copper oxide, ceria, gold catalysts

Procedia PDF Downloads 302
8119 Innovating Electronics Engineering for Smart Materials Marketing

Authors: Muhammad Awais Kiani

Abstract:

The field of electronics engineering plays a vital role in the marketing of smart materials. Smart materials are innovative, adaptive materials that can respond to external stimuli, such as temperature, light, or pressure, in order to enhance performance or functionality. As the demand for smart materials continues to grow, it is crucial to understand how electronics engineering can contribute to their marketing strategies. This abstract presents an overview of the role of electronics engineering in the marketing of smart materials. It explores the various ways in which electronics engineering enables the development and integration of smart features within materials, enhancing their marketability. Firstly, electronics engineering facilitates the design and development of sensing and actuating systems for smart materials. These systems enable the detection and response to external stimuli, providing valuable data and feedback to users. By integrating sensors and actuators into materials, their functionality and performance can be significantly enhanced, making them more appealing to potential customers. Secondly, electronics engineering enables the creation of smart materials with wireless communication capabilities. By incorporating wireless technologies such as Bluetooth or Wi-Fi, smart materials can seamlessly interact with other devices, providing real-time data and enabling remote control and monitoring. This connectivity enhances the marketability of smart materials by offering convenience, efficiency, and improved user experience. Furthermore, electronics engineering plays a crucial role in power management for smart materials. Implementing energy-efficient systems and power harvesting techniques ensures that smart materials can operate autonomously for extended periods. This aspect not only increases their market appeal but also reduces the need for constant maintenance or battery replacements, thus enhancing customer satisfaction. 
Lastly, electronics engineering contributes to the marketing of smart materials through innovative user interfaces and intuitive control mechanisms. By designing user-friendly interfaces and integrating advanced control systems, smart materials become more accessible to a broader range of users. Clear and intuitive controls enhance the user experience and encourage wider adoption of smart materials in various industries. In conclusion, electronics engineering significantly influences the marketing of smart materials by enabling the design of sensing and actuating systems, wireless connectivity, efficient power management, and user-friendly interfaces. The integration of electronics engineering principles enhances the functionality, performance, and marketability of smart materials, making them more adaptable to the growing demand for innovative and connected materials in diverse industries.

Keywords: electronics engineering, smart materials, marketing, power management

Procedia PDF Downloads 48
8118 Human-factor and Ergonomics in Bottling Lines

Authors: Parameshwaran Nair

Abstract:

Filling and packaging lines for bottling beverages into glass, PET, or aluminum containers require specialized expertise and a particular configuration of equipment: filler, warmer, labeller, crater/recrater, shrink packer, carton erector, carton sealer, date coder, palletizer, etc. Over time, the packaging industry has evolved from manually operated single-station machines to highly automated high-speed lines, and human factors and ergonomics have gained significant consideration in the course of this transformation. A prerequisite for such bottling lines, irrespective of container type and size, is suitability for multi-format applications. They should also handle format changeovers with minimal adjustment and offer variable capacities and speeds, providing great flexibility in managing accumulation times as a function of production characteristics. In terms of layout as well, they should allow flexibility for operator movement and access to machine areas for maintenance. Packaging technology during the past few decades has risen to these challenges through a series of major breakthroughs interspersed with periods of refinement and improvement. The milestones are many and varied and are described briefly in this paper. To give a brief understanding of human factors and ergonomics in modern packaging lines, this paper highlights the various technologies, design considerations, and statutory requirements in packaging equipment for the different types of containers used in India.

Keywords: human-factor, ergonomics, bottling lines, automated high-speed lines

Procedia PDF Downloads 415
8117 Major Depressive Disorder: Diagnosis based on Electroencephalogram Analysis

Authors: Wajid Mumtaz, Aamir Saeed Malik, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin

Abstract:

In this paper, a technique based on electroencephalogram (EEG) analysis is presented, aiming to diagnose major depressive disorder (MDD) in a potential population of MDD patients and healthy controls. EEG is recognized as a clinical modality in applications such as seizure diagnosis, anesthesia indexing, and detection of brain death or stroke. However, its usability for psychiatric illnesses such as MDD is less studied. Therefore, in this study, two groups of participants were recruited for diagnosis: 1) MDD patients, and 2) healthy people as controls. EEG data acquired from both groups were analyzed for inter-hemispheric asymmetry and the composite permutation entropy index (CPEI). To automate the process, the quantities derived from EEG were used as inputs to classifiers such as logistic regression (LR) and support vector machines (SVM). The learned classification models were evaluated on a test dataset; their efficiency is reported as accuracy in classifying MDD patients from controls, along with their sensitivities and specificities (LR = 81.7% and SVM = 81.5%). Based on the results, it is concluded that the derived measures are indicators for diagnosing MDD against a population of normal controls. In addition, the results motivate further exploration of other measures for the same purpose.
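The classification step described above can be sketched in a few lines. The following is a minimal illustration, not the study's actual pipeline: it trains a plain gradient-descent logistic regression on synthetic two-dimensional feature vectors standing in for the inter-hemispheric asymmetry and CPEI measures (all feature values are invented for illustration).

```python
import math
import random

def train_logistic(X, y, lr=0.5, epochs=400):
    """Full-batch gradient-descent logistic regression (no libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic "EEG-derived" feature vectors [asymmetry index, CPEI];
# all numbers are invented for illustration, not taken from the study.
random.seed(0)
controls = [[random.gauss(0.0, 0.3), random.gauss(0.5, 0.1)] for _ in range(40)]
patients = [[random.gauss(1.0, 0.3), random.gauss(0.8, 0.1)] for _ in range(40)]
X, y = controls + patients, [0] * 40 + [1] * 40

w, b = train_logistic(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

In practice one would hold out a test set, as the study does; training accuracy is shown here only to keep the sketch short.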

Keywords: major depressive disorder, diagnosis based on EEG, EEG derived features, CPEI, inter-hemispheric asymmetry

Procedia PDF Downloads 533
8116 Instructional Game in Teaching Algebra for High School Students: Basis for Instructional Intervention

Authors: Jhemson C. Elis, Alvin S. Magadia

Abstract:

Our world is full of numbers, shapes, and figures that illustrate the wholeness of a thing. Indeed, this statement signifies that mathematics is everywhere. Mathematics in its broadest sense helps people in their everyday life, which is why education requires students to take it as a subject. The study aims to determine the profile of the respondents in terms of gender and age, the performance of the control and experimental groups in the pretest and posttest, the impact of the instructional game used as an instructional intervention in teaching algebra for high school students, the significant difference between the levels of performance of the two groups of respondents in their pre-test and post-test results, and the instructional intervention that can be proposed. The descriptive method was utilized in this study, since this approach corresponds to the main objective of the research: to determine the effectiveness of the instructional game used as an instructional intervention in teaching algebra for high school students. There were 30 students who served as respondents, split into two equal groups of 15, while the teacher respondents comprised a greater number of females, 7 or 70 percent, against 3 males or 30 percent. The study recommends that mathematics teachers conceptualize instructional games so that students can learn mathematics with fun and enjoyment, and that mathematics education program supervisors provide training for teachers on how to conceptualize mathematics interventions for student learning. Meaningful activities must be provided to sustain the students' interest in learning, and students must be given time to have fun in the classroom through playing while learning, since they consider mathematics difficult. Future researchers should continue conceptualizing mathematics interventions to meet the needs of students, and teachers should incorporate more educational games so that discussions will be successful and joyful.

Keywords: instructional game in algebra, mathematical intervention, joyful, successful

Procedia PDF Downloads 584
8115 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier

Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez

Abstract:

The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works Le Corbusier produced in the last stage of his career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether this use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions, by assessing these projects using BIM models and energy performance simulations in DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.

Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses

Procedia PDF Downloads 46
8114 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train

Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin

Abstract:

The bioretention system is one of the alternatives to conventional stormwater management within the low impact development (LID) strategy for best management practices (BMPs). Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train was developed to overcome the drawbacks of conventional LID-BMPs and aims to enhance the performance of existing practices. In addition, it is used to improve both water quality and water quantity controls, as well as to maintain the natural hydrology of an area despite ongoing massive development. The objective of this paper is to review the effectiveness of conventional LID-BMPs on hydrologic and hydraulic performance through column studies in different configurations. Previous studies on applications of the LID-BMP treatment train developed to overcome the drawbacks of conventional LID-BMPs are reviewed and used as guidelines for implementing this system at Universiti Teknologi Petronas (UTP) and elsewhere. Analyses of hydrologic and hydraulic performance using artificial neural network (ANN) models are also reviewed for use in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series, to be implemented for controlling floods that occur currently and that may occur once the construction of the new buildings at UTP is completed. A summary of the research findings on the performance of the system is provided, including proposed modifications to the designs.

Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis

Procedia PDF Downloads 109
8113 Graphene-reinforced Metal-organic Framework Derived Cobalt Sulfide/Carbon Nanocomposites as Efficient Multifunctional Electrocatalysts

Authors: Yongde Xia, Laicong Deng, Zhuxian Yang

Abstract:

Developing cost-effective electrocatalysts for the oxygen reduction reaction (ORR), oxygen evolution reaction (OER) and hydrogen evolution reaction (HER) is vital in energy conversion and storage applications. Herein, we report a simple method for the synthesis of graphene-reinforced cobalt sulfide/carbon nanocomposites and an evaluation of their electrocatalytic performance in typical electrocatalytic reactions. Nanocomposites of cobalt sulfide embedded in N, S co-doped porous carbon and graphene (CoS@C/Graphene) were generated via simultaneous sulfurization and carbonization of one-pot synthesized graphite oxide-ZIF-67 precursors. The obtained CoS@C/Graphene nanocomposite was characterized by X-ray diffraction, Raman spectroscopy, thermogravimetric analysis-mass spectrometry, scanning electron microscopy, transmission electron microscopy, X-ray photoelectron spectroscopy and gas sorption. It was found that cobalt sulfide nanoparticles were homogeneously dispersed in the in-situ formed N, S co-doped porous carbon/graphene matrix. The CoS@C/10Graphene composite not only shows excellent electrocatalytic activity toward the ORR, with a high onset potential of 0.89 V, a four-electron pathway and superior durability, maintaining 98% of its current after continuously running for around 5 hours, but also exhibits good performance for the OER and HER, due to the improved electrical conductivity, the increased number of catalytically active sites, and the connectivity between the electrocatalytically active cobalt sulfide and the carbon matrix. This work offers a new approach for the development of novel multifunctional nanocomposites for the next generation of energy conversion and storage applications.

Keywords: MOF derivative, graphene, electrocatalyst, oxygen reduction reaction, oxygen evolution reaction, hydrogen evolution reaction

Procedia PDF Downloads 37
8112 Development of Cost Effective Ultra High Performance Concrete by Using Locally Available Materials

Authors: Mohamed Sifan, Brabha Nagaratnam, Julian Thamboo, Keerthan Poologanathan

Abstract:

Ultra high performance concrete (UHPC) is a type of cementitious material known for its exceptional strength, ductility, and durability. However, its production is often associated with high costs due to the significant amount of cementitious materials required and the use of fine powders to achieve the desired strength. The aim of this research is to explore the feasibility of developing cost-effective UHPC mixes using locally available materials. Specifically, the study investigates the use of coarse limestone sand along with other sand types, namely basalt sand, dolomite sand, and river sand, for developing UHPC mixes, and evaluates their performance. The study utilises the particle packing model to develop various UHPC mixes. The particle packing model involves optimising the combination of coarse limestone sand, basalt sand, dolomite sand, and river sand to achieve the desired properties of UHPC. The developed UHPC mixes are then evaluated based on their workability (measured through slump flow and mini slump values), compressive strength (at 7, 28, and 90 days), splitting tensile strength, and microstructural characteristics analysed through scanning electron microscope (SEM) analysis. The results of this study demonstrate that cost-effective UHPC mixes can be developed using locally available materials without the need for silica fume or fly ash. The UHPC mixes achieved impressive compressive strengths of up to 149 MPa at 28 days with a cement content of approximately 750 kg/m³. The mixes also exhibited varying levels of workability, with slump flow values ranging from 550 to 850 mm. Additionally, the inclusion of coarse limestone sand in the mixes effectively reduced the demand for superplasticizer and served as a filler material. By exploring the use of coarse limestone sand and other sand types, this study provides valuable insights into optimising the particle packing model for UHPC production.
The findings highlight the potential to reduce costs associated with UHPC production without compromising its strength and durability. The study collected data on the workability, compressive strength, splitting tensile strength, and microstructural characteristics of the developed UHPC mixes. Workability was measured using slump flow and mini slump tests, while compressive strength and splitting tensile strength were assessed at different curing periods. Microstructural characteristics were analysed through SEM and energy dispersive X-ray spectroscopy (EDS) analysis. The collected data were then analysed and interpreted to evaluate the performance and properties of the UHPC mixes. The research successfully demonstrates the feasibility of developing cost-effective UHPC mixes using locally available materials. The inclusion of coarse limestone sand, in combination with other sand types, shows promising results in achieving high compressive strengths and satisfactory workability. The findings suggest that the use of the particle packing model can optimise the combination of materials and reduce the reliance on expensive additives such as silica fume and fly ash. This research provides valuable insights for researchers and construction practitioners aiming to develop cost-effective UHPC mixes using readily available materials and an optimised particle packing approach.
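As a rough illustration of the particle packing approach, the modified Andreasen and Andersen curve is a commonly used optimization target for UHPC gradation. The abstract does not specify which packing model was used, so the sketch below is illustrative only; the size range and the distribution modulus q are assumed values.

```python
def target_passing(d, d_min, d_max, q=0.23):
    """Modified Andreasen & Andersen target cumulative passing fraction:
    P(D) = (D^q - Dmin^q) / (Dmax^q - Dmin^q)."""
    return (d**q - d_min**q) / (d_max**q - d_min**q)

# Illustrative particle size range in mm (finest binder grain to coarsest sand)
d_min, d_max = 0.0001, 4.0
sizes = [0.0001, 0.01, 0.1, 0.5, 1.0, 2.0, 4.0]
curve = [target_passing(d, d_min, d_max) for d in sizes]
for d, p in zip(sizes, curve):
    print(f"{d:>7.4f} mm -> {100 * p:5.1f}% passing")
```

Mix proportions are then chosen so that the combined gradation of the sands and binders tracks this target curve as closely as possible.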

Keywords: cost-effective, limestone powder, particle packing model, ultra high performance concrete

Procedia PDF Downloads 81
8111 The Effects of Dynamic Training Shoes Exercises on Isokinetic Strength Performance

Authors: Bergun Meric Bingul, Yezdan Cinel, Murat Son, Cigdem Bulgan, Mensure Aydin

Abstract:

The aim of this study was to determine the effects of training with specially designed roller shoes on knee and hip isokinetic performance. Thirty soccer players participated as subjects and were randomly divided into three groups: a group training with the dynamic training shoes, a group training without the dynamic training shoes, and a control group. The training groups performed speed-strength training for 8 weeks (3 days a week, 1 hour a day). Six exercises focused on the knee flexors and extensors as well as the hip adductor and abductor muscles were chosen, each performed in 3 × 30 s sets. The control group did not participate in the training program. Before and after the training programs, the peak torques of the knee flexor and extensor muscles and the hip abductor and adductor muscles were measured with a Biodex III isokinetic dynamometer. Isokinetic strength data were analyzed using the SPSS program; a repeated measures analysis of variance (ANOVA) was used to determine differences among the peak torque values for the three groups. The results indicated that the peak torque values of the group using the dynamic training shoes were higher. Hip adductor and abductor peak torques of the group using the dynamic training shoes were also better than those of the other groups. In conclusion, ground friction forces play an important role in increasing strength. With these roller shoes, the soccer players were able to move easily because the friction forces were reduced, creating a greater range of motion; the exercises were thus performed faster than before, and strength movements remained active through all angles. This resulted in a better use of force.

Keywords: isokinetic, soccer, dynamic training shoes, training

Procedia PDF Downloads 254
8110 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnosing the disease and defining adequate treatment to increase the survival probability of the patient. Computer-aided detection systems (CADs), along with modern data techniques such as machine learning (ML) and neural networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that can bring issues to segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Alongside image processing, automatic segmentation of high-contrast objects is performed using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features of the regions of interest. The results of this study prove the potential of this tool for further diagnostics and classification of mammographic images, due to its low sensitivity to noisy images and low-contrast mammograms.
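The voting step at the core of the circle Hough transform can be illustrated compactly. The sketch below is not the authors' implementation: it takes a synthetic set of edge points mimicking a circular microcalcification boundary and votes, for a known radius, on candidate circle centers; the accumulator peak recovers the center.

```python
import math
from collections import Counter

def circle_hough(edge_points, radius, n_theta=180):
    """Vote for circle centers at a fixed radius (core of the circle
    Hough transform); returns the accumulator peak and its vote count."""
    acc = Counter()
    for (x, y) in edge_points:
        for k in range(n_theta):
            t = 2 * math.pi * k / n_theta
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    return acc.most_common(1)[0]  # ((cx, cy), votes)

# Synthetic "edge map": perimeter points of a circle standing in for the
# edge-extraction output (true center and radius chosen for illustration)
true_center, r = (20, 25), 8
edges = [(round(true_center[0] + r * math.cos(a)),
          round(true_center[1] + r * math.sin(a)))
         for a in (2 * math.pi * i / 60 for i in range(60))]

center, votes = circle_hough(edges, r)
print("detected center:", center, "votes:", votes)
```

A full implementation also sweeps over a range of radii and applies non-maximum suppression to the accumulator, which is omitted here.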

Keywords: breast cancer, segmentation, X-ray imaging, hough transform, image analysis

Procedia PDF Downloads 64
8109 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality

Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard

Abstract:

Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD. The hippocampus comprises several subfields. In contrast to healthy aging where CA3 and dentate gyrus are the hippocampal subfields with most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As it is likely a combination of tests may predict early Alzheimer’s disease (AD) better than any single test, we look at the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once whereas patients with MCI will be tested twice a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year. 
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and to characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may be an early marker of AD.

Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus

Procedia PDF Downloads 561
8108 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e. 322 million agents).
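The load-balancing problem behind the agent partitioning can be sketched without any MPI code. The fragment below uses a greedy longest-processing-time heuristic to spread hypothetical firm workloads over a fixed number of ranks; it illustrates only the balancing idea, not the authors' scheme, which partitions agents on an employer-employee interaction graph (all workloads are invented).

```python
import heapq
import random

def balanced_partition(loads, n_procs):
    """Greedy longest-processing-time assignment: sort items by
    decreasing load and always give the next item to the currently
    lightest process. MPI communication is omitted entirely."""
    heap = [(0, rank, []) for rank in range(n_procs)]
    heapq.heapify(heap)
    for idx, load in sorted(enumerate(loads), key=lambda t: -t[1]):
        total, rank, items = heapq.heappop(heap)
        items.append(idx)
        heapq.heappush(heap, (total + load, rank, items))
    return sorted(heap)  # list of (total_load, rank, item_indices)

# Hypothetical firm sizes (employee counts) standing in for agent workloads
random.seed(1)
firms = [random.randint(1, 500) for _ in range(1000)]
parts = balanced_partition(firms, 8)
totals = [t for t, _, _ in parts]
print("per-rank load:", totals)
print("imbalance ratio:", max(totals) / min(totals))
```

In the real implementation the partition must additionally respect graph locality so that most employer-employee interactions stay within one rank, which this sketch ignores.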

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 114
8107 Analysis of Thermal Comfort in Educational Buildings Using Computer Simulation: A Case Study in Federal University of Parana, Brazil

Authors: Ana Julia C. Kfouri

Abstract:

A prerequisite of any building design is to provide security to the users, taking the climate and its physical and physical-geometrical variables into account. It is also important to highlight the relevance of the right material elements, which mediate between the person and the environment and must provide improved thermal comfort conditions and low environmental impact. Furthermore, technology is constantly advancing, as are computational simulations for projects, and they should be used to develop sustainable buildings and to provide a higher quality of life for their users. In relation to comfort, the more satisfied the building users are, the better their intellectual performance will be. On this basis, the study of thermal comfort in educational buildings is of particular relevance, since the thermal characteristics of these environments are of vital importance to all users. Moreover, educational buildings are large constructions, and when they are poorly planned and executed they have negative impacts on the surrounding environment, as well as on user satisfaction, throughout their whole life cycle. In this line of thought, a detailed case study on the thermal comfort situation at the Federal University of Parana (UFPR) was carried out to evaluate university classroom conditions. The main goal of the study is to perform a thermal analysis of three classrooms at UFPR, in order to address the subjective and physical variables that influence thermal comfort inside the classroom. For the assessment of the subjective components, a questionnaire was applied to evaluate perceptions of the local thermal conditions. Regarding the physical variables, on-site measurements were carried out, consisting of measurements of air temperature and air humidity, both inside and outside the building, as well as meteorological variables such as wind speed and direction, solar radiation and rainfall, collected from a weather station.
Then, a computer simulation using the EnergyPlus software was conducted to reproduce the air temperature and air humidity values of the three classrooms studied. The EnergyPlus outputs were analyzed and compared with the on-site measurement results in order to draw conclusions about the local thermal conditions. The methodological approach of the study allowed a distinct perspective on an educational building, leading to a better understanding of classroom thermal performance and of the reasons for such behavior. Finally, the study encourages reflection on the importance of thermal comfort in educational buildings and proposes thermal alternatives for future projects, as well as a discussion of the significant impact of using computer simulation in engineering solutions to improve the thermal performance of UFPR's buildings.
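Agreement between simulation outputs and on-site measurements is commonly quantified with the NMBE and CV(RMSE) statistics (used, for example, in the ASHRAE Guideline 14 calibration criteria). The sketch below computes both for a pair of illustrative hourly temperature series; the numbers are invented, not data from this study.

```python
import math

def nmbe(measured, simulated):
    """Normalized mean bias error, in percent of the measured mean."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100 * sum(s - m for m, s in zip(measured, simulated)) / (n * mean_m)

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent."""
    n = len(measured)
    mean_m = sum(measured) / n
    rmse = math.sqrt(sum((s - m) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100 * rmse / mean_m

# Illustrative hourly indoor air temperatures in degrees C (invented)
measured  = [21.0, 21.5, 22.3, 23.1, 24.0, 24.6, 24.2, 23.5]
simulated = [20.8, 21.9, 22.0, 23.4, 24.3, 24.9, 23.8, 23.2]

nm = nmbe(measured, simulated)
cv = cv_rmse(measured, simulated)
print(f"NMBE = {nm:+.2f}%   CV(RMSE) = {cv:.2f}%")
```

Low values of both statistics indicate that the EnergyPlus model reproduces the measured conditions well; large values flag the need for model calibration.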

Keywords: computer simulation, educational buildings, EnergyPlus, humidity, temperature, thermal comfort

Procedia PDF Downloads 373
8106 Performance Analysis of Search Medical Imaging Service on Cloud Storage Using Decision Trees

Authors: González A. Julio, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

Telemedicine services use a large amount of data, most of which are diagnostic images in Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7) formats. Metadata is generated for each related image to support its identification. This study presents the use of decision trees for the optimization of information search processes for diagnostic images hosted on a cloud server. To analyze server performance, the following quality of service (QoS) metrics are evaluated: delay, bandwidth, jitter, latency and throughput, in five test scenarios for a total of 26 experiments during the loading and downloading of DICOM images hosted by the telemedicine group server of the Universidad Militar Nueva Granada, Bogotá, Colombia. By applying decision trees as a data mining technique and comparing them with sequential search, it was possible to evaluate the search times for diagnostic images on the server. The results show that by using the metadata in decision trees, search times are substantially improved, computational resources are optimized, and the request management of the telemedicine image service is improved. Based on the experiments carried out, search efficiency increased by 45% relative to sequential search, given that, when downloading a diagnostic image, false positives are avoided in the management and acquisition processes of said information. It is concluded that, for diagnostic image services in telemedicine, the decision tree technique guarantees accessibility and robustness in the acquisition and manipulation of medical images, improving diagnoses and medical procedures for patients.
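The pruning effect of a metadata tree over a sequential scan can be illustrated with a simplified, hand-built two-level index. The study itself applies decision trees learned from data; the field names and values below are hypothetical, and the sketch only shows why branching on metadata avoids touching every record.

```python
# Hypothetical DICOM-like metadata records (field names are illustrative)
records = [
    {"id": i,
     "modality": ["CR", "CT", "MR"][i % 3],
     "body_part": ["CHEST", "HEAD", "KNEE", "ABDOMEN"][i % 4]}
    for i in range(600)
]

def build_index(records):
    """Two-level tree over metadata: modality -> body_part -> record list."""
    tree = {}
    for r in records:
        tree.setdefault(r["modality"], {}).setdefault(r["body_part"], []).append(r)
    return tree

def tree_search(tree, modality, body_part):
    # Two branch decisions instead of scanning every record
    return tree.get(modality, {}).get(body_part, [])

def sequential_search(records, modality, body_part):
    hits, comparisons = [], 0
    for r in records:
        comparisons += 1
        if r["modality"] == modality and r["body_part"] == body_part:
            hits.append(r)
    return hits, comparisons

tree = build_index(records)
fast = tree_search(tree, "CT", "HEAD")
slow, n_cmp = sequential_search(records, "CT", "HEAD")
print(len(fast), "matches; sequential scan needed", n_cmp, "comparisons")
```

Both searches return the same records, but the tree reaches them after two dictionary lookups while the sequential scan examines all 600 entries.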

Keywords: cloud storage, decision trees, diagnostic image, search, telemedicine

Procedia PDF Downloads 192
8105 Studying the Bond Strength of Geo-Polymer Concrete

Authors: Rama Seshu Doguparti

Abstract:

This paper presents an experimental investigation of the bond behavior of geopolymer concrete. The bond behavior of geopolymer concrete cubes of grade M35 reinforced with 16 mm TMT rods is analyzed. The results indicate that the bond performance of reinforced geopolymer concrete is good, which supports its application in construction.

Keywords: geo-polymer, concrete, bond strength, behaviour

Procedia PDF Downloads 492
8104 Eight Weeks of Suspension Systems Training on Fat Mass, Jump and Physical Fitness Index in Female

Authors: Che Hsiu Chen, Su Yun Chen, Hon Wen Cheng

Abstract:

Greater core stability may benefit sports performance by providing a foundation for greater force production in the upper and lower extremities. Core stability exercises on an instability device (such as the TRX suspension system) were found to induce higher core muscle activity than the same exercises performed on a stable surface. However, the effects of high-intensity interval TRX suspension training on sport performance remain unclear. The purpose of this study was to examine whether high-intensity TRX suspension training could improve sport performance. Twenty-four healthy university female students (age 19.0 years, height 157.9 cm, body mass 51.3 kg, fat mass 25.2%) voluntarily participated in this study. After a familiarization session, each participant performed five suspension exercises (hip abduction in plank alternative, hamstring curl, 45-degree row, lunge and oblique crunch). Each exercise was performed for 30 seconds, followed by a 30-second break, two times per week for eight weeks, with each exercise bout increased by 10 seconds every week. The results showed that after the 8-week high-intensity TRX suspension training, fat mass decreased significantly (about 12.92%), while the sit-and-reach test (9%), 1-minute sit-up test (17.5%), standing broad jump (4.8%) and physical fitness index (10.3%) improved significantly. Hence, eight weeks of high-intensity interval TRX suspension training can improve hamstring flexibility, trunk endurance, jump ability and aerobic fitness, while substantially decreasing fat mass percentage.

Keywords: core endurance, jump, flexibility, cardiovascular fitness

Procedia PDF Downloads 394
8103 Seismic Assessment of Passive Control Steel Structure with Modified Parameter of Oil Damper

Authors: Ahmad Naqi

Abstract:

Today, passively controlled buildings are becoming increasingly popular due to their excellent lateral load resistance. Typically, these buildings are enhanced with damping devices that are in high market demand. Some manufacturers falsify the damping device parameters during production to meet this demand. Therefore, this paper evaluates the seismic performance of buildings equipped with damping devices whose parameters were intentionally modified to simulate falsified devices. For this purpose, three benchmark buildings of 4, 10, and 20 stories were selected from the JSSI (Japan Society of Seismic Isolation) manual. The buildings are special moment-resisting steel frames with oil dampers in the longitudinal direction only. For each benchmark building, two types of structural elements were designed to resist the lateral load, with and without damping devices (hereafter known as the Trimmed and Conventional Buildings). The target buildings were modeled using STERA-3D, finite-element-based software coded for research purposes. With this software, one can develop either a three-dimensional model (3DM) or a lumped mass model (LMM). First, the seismic performance of the 3DM and LMM was evaluated, and the two were found to coincide excellently for the target buildings. The simplified LMM was then used in this study to produce 66 cases for both buildings. The device parameters were modified by ±40% and ±20% to represent many possible conditions of falsification. It is verified that buildings designed to sustain the lateral load with the support of damping devices (Trimmed Buildings) are much more threatened by device falsification than buildings merely strengthened by damping devices (Conventional Buildings).

Keywords: passive control system, oil damper, seismic assessment, lumped mass model

Procedia PDF Downloads 104
8102 Enhancing Students’ Performance in Basic Science and Technology in Nigeria Using Moodle LMS

Authors: Olugbade Damola, Adekomi Adebimbo, Sofowora Olaniyi Alaba

Abstract:

One of the major problems facing education in Nigeria is the provision of quality Science and Technology education. Inadequate teaching facilities, non-usage of innovative teaching strategies, ineffective classroom management, lack of student motivation, and poor integration of ICT have resulted in an increasing percentage of students failing Basic Science and Technology (BST) in the Junior Secondary Certificate Examination conducted by the National Examinations Council in Nigeria. To address these challenges, the Federal Government produced a road map on education, with a view to enhancing quality education through the integration of modern technology into teaching and learning, strengthening quality assurance through proper monitoring, and introducing innovative teaching methods. This led the researcher to investigate how the MOODLE LMS could be used to enhance students' learning outcomes in BST. A sample of 120 students was purposively selected from four secondary schools in Ogbomoso. The experimental group was taught using the MOODLE LMS, while the control group was taught using the conventional method. Data obtained were analyzed using means, standard deviations, and t-tests. The results showed that the MOODLE LMS was an effective platform for teaching BST in junior secondary schools (t = 4.953, p < 0.05). Students' attitudes towards BST were also enhanced through the MOODLE LMS (t = 15.632, p < 0.05), and its use significantly improved retention (t = 6.640, p < 0.05). In conclusion, the Federal Government's efforts at enhancing quality assurance through the integration of modern technology and e-learning in secondary schools yielded good results, as students found the MOODLE LMS motivating and interactive, and attendance improved.
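The analysis above compares experimental and control group means with independent-samples t-tests. A minimal sketch of the pooled-variance form of that statistic, assuming equal group variances; the scores below are illustrative and are not data from the study:

```python
import math

def two_sample_t(a, b):
    """Pooled-variance independent-samples t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variance of group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)  # sample variance of group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Illustrative post-test scores (NOT the study's data).
experimental = [68, 74, 71, 80, 77, 73]
control = [60, 65, 58, 63, 61, 59]
t = two_sample_t(experimental, control)
```

The resulting t value would then be compared against the critical value for na + nb - 2 degrees of freedom at the chosen significance level (p < 0.05 in the study).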

Keywords: basic science and technology, MOODLE LMS, performance, quality assurance

Procedia PDF Downloads 286