Search results for: toxicity testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3976

256 Effect of Compaction Method on the Mechanical and Anisotropic Properties of Asphalt Mixtures

Authors: Mai Sirhan, Arieh Sidess

Abstract:

Asphaltic mixture is a heterogeneous material composed of three main components: aggregates, bitumen, and air voids. Professional experience and the scientific literature categorize asphaltic mixture as a viscoelastic material whose behavior is determined by temperature and loading rate. The properties of the asphaltic mixture used under service conditions are characterized by compacting and testing cylindrical asphalt samples in the laboratory. These samples must closely resemble the internal structure of the mixture achieved in service and the mechanical characteristics of the compacted asphalt layer in the pavement. Laboratory samples are usually compacted at temperatures between 140 and 160 degrees Celsius, a range in which the asphalt has a low degree of strength. The laboratory samples are compacted using dynamic or vibratory compaction methods. During compaction, the aggregates tend to align themselves in certain directions, which leads to anisotropic behavior of the asphaltic mixture. This issue was studied in the Strategic Highway Research Program (SHRP), which recommended using the gyratory compactor based on the assumption that this method best mimics compaction in service. In Israel, the Netivei Israel company is considering adopting the gyratory method as a replacement for the Marshall method used today. Therefore, the suitability of the gyratory method for use with Israeli asphaltic mixtures should be investigated. In this research, we aimed to examine the impact of the compaction method on the mechanical characteristics of asphaltic mixtures and to evaluate the degree of anisotropy in relation to the compaction method. To carry out this research, samples were compacted in vibratory and gyratory compactors. These samples were cored cylindrically both vertically (in the direction of compaction) and horizontally (perpendicular to the compaction direction), and the resulting specimens were subjected to dynamic modulus and permanent deformation tests. The comparative test results showed that: (1) specimens compacted by the vibratory compactor had higher dynamic modulus values than specimens compacted by the gyratory compactor; (2) both vibratory- and gyratory-compacted specimens exhibited anisotropic behavior, especially at high temperatures, and the degree of anisotropy was higher in specimens compacted by the gyratory method; (3) specimens compacted by the vibratory method and cored vertically had the highest resistance to rutting, whereas specimens compacted by the vibratory method and cored horizontally had the lowest resistance to rutting; (4) these differences between specimen types arise mainly from the different internal arrangements of aggregates produced by each compaction method; and (5) based on an initial prediction of the performance of a flexible pavement containing an asphalt layer with the characteristics measured in this research, it can be concluded that the compaction method and the degree of anisotropy have a significant impact on the strains that develop in the pavement and on the resistance of the pavement to fatigue and rutting defects.
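
For reference, the dynamic modulus measured in tests of this kind is the ratio of the peak sinusoidal stress amplitude to the peak recoverable strain amplitude (a standard definition, not specific to this study):

\[ |E^*| = \frac{\sigma_0}{\varepsilon_0} \]

where σ₀ and ε₀ are the stress and strain amplitudes under sinusoidal loading; the lag between the two signals defines the phase angle, which is 0° for a purely elastic material and 90° for a purely viscous one, so a temperature-dependent modulus and phase angle are the signature of the viscoelastic behavior noted above.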

Keywords: anisotropy, asphalt compaction, dynamic modulus, gyratory compactor, mechanical properties, permanent deformation, vibratory compactor

Procedia PDF Downloads 121
255 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project

Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba

Abstract:

Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled “Wood Be Better” was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2, titled “Design-based research”, proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies; and, at the same time, to have the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future. This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts of the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, the innovative solutions that were developed during the course, the different architectural expressions that were enabled by timber, and how the projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.

Keywords: architecture, interdisciplinary, research, studio, students, wood

Procedia PDF Downloads 313
254 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the locations of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only provide rough estimates of the lesion locations; consequently, multiple biopsy samples are required to target one single lesion. In this study, a whole-pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles, which are assumed to have a linear elastic behavior due to the lack of experimental data for determining the constants involved in hyperelastic models. The tissue and organ geometries of the 3D meshes were taken from the existing literature, and the biomechanical parameters were obtained from different testing techniques described in the literature. The acquired parametric values for uniaxial stress/strain data are used in the Signorini model to reproduce the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions were selected prior to biopsy, and each lesion's final position was tracked when a TRUS probe force of 30 N was applied to the inner rectum wall. The open-source software Code_Aster was used for the numerical simulations. Moreover, the overall deformation of the pelvic organs induced by the TRUS-guided biopsy was demonstrated. The deformation of the prostate and the displacement of the neoplasms showed that assigning appropriate material properties to the organs altered the resulting lesion migration parametrically; the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed against our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are compared visually with our preliminary study.
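
As background, a common polynomial form of the Signorini hyperelastic law (e.g., as available in Code_Aster's hyperelastic materials; the constants used in this study are not given in the abstract, so the form below is a general sketch) expresses the strain energy density in terms of the invariants of the isochoric right Cauchy-Green tensor:

\[ W = C_{10}(\bar{I}_1 - 3) + C_{01}(\bar{I}_2 - 3) + C_{20}(\bar{I}_1 - 3)^2 + \frac{K}{2}(J - 1)^2 \]

where C₁₀, C₀₁, and C₂₀ are material constants fitted to the uniaxial stress/strain data mentioned above, J is the volume ratio, and K is a bulk modulus penalizing volume change.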

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 89
253 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but give little information about critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis combining Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is therefore needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, it was shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM), which allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
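
To illustrate the kind of iterative hand-off a two-way CFD-FEA coupling performs, the toy Python sketch below exchanges a heat flux and a surface temperature between a stand-in "gas phase" and a stand-in "panel" each time step. All numbers and function names are illustrative assumptions; the real FDS-2-Abaqus program exchanges full thermal fields between FDS and Abaqus:

    # Toy two-way coupling loop: the CFD side supplies a heat flux, the FEA
    # side updates the panel surface temperature that is fed back next step.
    # Placeholder physics only; not the FDS-2-Abaqus implementation.

    def gas_heat_flux(t, t_surface):
        """Toy gas phase: flux rises as the fire ramps, falls as the panel heats."""
        h = 25.0                                     # W/(m^2 K), assumed
        t_gas = 20.0 + 800.0 * min(t / 600.0, 1.0)   # ramping fire, deg C
        return h * (t_gas - t_surface)

    def panel_surface_update(t_surface, q, dt):
        """Toy lumped-mass panel: advance surface temperature from net flux."""
        thermal_mass = 4.0e4                         # J/(m^2 K), assumed
        return t_surface + q * dt / thermal_mass

    t_surface, dt = 20.0, 5.0
    for step in range(240):                          # 20 min of fire exposure
        t = step * dt
        q = gas_heat_flux(t, t_surface)              # CFD -> FEA hand-off
        t_surface = panel_surface_update(t_surface, q, dt)  # FEA -> CFD
    print(f"panel surface temperature after 20 min: {t_surface:.0f} C")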

Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 81
252 Developing a High Performance Cement Based Material: The Influence of Silica Fume and Organosilane

Authors: Andrea Cretu, Calin Cadar, Maria Miclaus, Lucian Barbu-Tudoran, Siegfried Stapf, Ioan Ardelean

Abstract:

Additives and mineral admixtures have become an integral part of cement-based materials, and it is common practice to add silica fume to cement-based mixes in order to produce high-performance concrete. There is still a lack of scientific understanding regarding the effects that silica fume has on the microstructure of hydrated cement paste. The aim of the current study is to develop high-performance materials with low permeability and high resistance to flexural stress using silica fume and an organosilane. The organosilane bonds with cement grains and silica fume, influencing both the workability and the final properties of the mix, especially the pore size distribution and pore connectivity. Silica fume is a known pozzolanic agent that reacts with the calcium hydroxide in hydrated cement paste, producing more C-S-H and improving the mechanical properties of the mix. It is believed that particles of silica fume act as capillary pore fillers and as nucleation centers for C-S-H and other hydration products. In order to design cement-based materials with added silica fume and organosilane, it is first necessary to understand the formation of the porous network during hydration and to observe the distribution of pores and their connectivity. Low-field nuclear magnetic resonance (NMR) methods are non-destructive and allow the study of cement-based materials from the standpoint of their porous structure. Other methods, such as XRD and SEM-EDS, help create a comprehensive picture of the samples, along with the classic mechanical tests (compressive and flexural strength measurements). The transverse relaxation time (T₂) was measured during the hydration of 16 samples prepared with two water/cement ratios (0.3 and 0.4) and different concentrations of organosilane (APTES, up to 2% by mass of cement) and silica fume (up to 6%). After hydration, the pore size distribution was assessed using the same NMR approach on the samples filled with cyclohexane. The SEM-EDS and XRD measurements were applied to pieces and powders prepared from the samples used in mechanical testing, which were kept under water for 28 days. Adding silica fume does not influence the hydration dynamics of the cement paste, while the addition of organosilane extends the dormancy stage up to 10 hours. The size distribution of the capillary pores is not influenced by the addition of silica fume or organosilane, while the connectivity of the capillary pores is decreased only when organosilane is in the mix. No filling effect is observed even at the highest concentration of silica fume. There is an apparent increase in the flexural strength of samples prepared only with silica fume and a decrease for those prepared with organosilane, with a few exceptions. XRD reveals that the pozzolanic reactivity of silica fume can only be observed when no organosilane is present, and the SEM-EDS method reveals the pore distribution, as well as the hydration products and the presence or absence of calcium hydroxide. The current work was funded by the Romanian National Authority for Scientific Research, CNCS – UEFISCDI, through project PN-III-P2-2.1-PED-2016-0719.
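
For context, low-field NMR relaxometry resolves pore sizes because, in the fast-exchange regime, the transverse relaxation rate of a liquid confined in a pore scales with the pore's surface-to-volume ratio (a standard relation, not specific to this study):

\[ \frac{1}{T_2} \approx \rho_2 \, \frac{S}{V} \]

where ρ₂ is the surface relaxivity; once ρ₂ is calibrated, the measured T₂ distribution of the cyclohexane filling the pores maps directly onto a pore size distribution.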

Keywords: cement hydration, concrete admixtures, NMR, organosilane, porosity, silica fume

Procedia PDF Downloads 165
251 Methods for Early Detection of Invasive Plant Species: A Case Study of Hueston Woods State Nature Preserve

Authors: Suzanne Zazycki, Bamidele Osamika, Heather Craska, Kaelyn Conaway, Reena Murphy, Stephanie Spence

Abstract:

Managing Invasive Plant Species (IPS) is an important component of effective preservation and conservation of natural lands. IPS are non-native plants that can aggressively encroach upon native species and pose a significant threat to the ecology, public health, and social welfare of a community. The presence of IPS in U.S. nature preserves has caused economic costs estimated to exceed $26 billion a year. While different methods have been identified to control IPS, few methods have been recognized for the early detection of IPS. This study examined methods for the early detection of IPS in Hueston Woods State Nature Preserve. A mixed-methods research design was adopted in this four-phase study. The first phase entailed data gathering and described the characteristics and qualities of IPS and the importance of early detection (ED). The second phase explored ED methods; Geographic Information Systems (GIS) and citizen science were identified as ED methods for IPS. The third phase of the study involved the creation of hotspot maps to identify likely areas for IPS growth, while the fourth phase involved testing and evaluating mobile applications that can support the efforts of citizen scientists in IPS detection. Literature reviews were conducted on IPS and ED methods, and four regional experts from ODNR and Miami University were interviewed. A questionnaire was used to gather information about ED methods used across the state. The findings revealed that geospatial methods, including Unmanned Aerial Vehicles (UAVs), Multispectral Satellites (MSS), and the Normalized Difference Vegetation Index (NDVI), are not feasible for the early detection of IPS, as they require GIS expertise, are still emerging technologies, and are not suitable for every habitat. Therefore, other ED methods were explored, including predicting areas where IPS will grow, which can be done by monitoring areas that are similar to a species' native habitat. From the literature review and interviews, IPS are known to grow in frequently disturbed areas such as along trails, shorelines, and streambanks. The research team called these areas “hotspots” and created maps of them specifically for Hueston Woods State Nature Preserve to support and narrow the efforts of citizen scientists and staff in the ED of IPS. The results further showed that utilizing citizen scientists in the ED of IPS is feasible, especially through single-day events or passive monitoring challenges. The study concluded that the creation of hotspot maps to direct the efforts of citizen scientists is effective for the early detection of IPS. Several recommendations were made, among which are the creation of hotspot maps to narrow ED efforts as citizen scientists continue to work in the preserves, and the utilization of citizen science volunteers to identify and record emerging IPS.
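
For reference, the NDVI mentioned above is computed per pixel from the near-infrared (NIR) and red reflectances (the standard definition):

\[ NDVI = \frac{\rho_{NIR} - \rho_{Red}}{\rho_{NIR} + \rho_{Red}} \]

Values near +1 indicate dense green vegetation, while values near zero or below indicate bare ground or water, which is one reason such imagery requires both suitable sensors and interpretation expertise.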

Keywords: early detection, hueston woods state nature preserve, invasive plant species, hotspots

Procedia PDF Downloads 108
250 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant

Authors: Weiam Daear, Patrick Lai, Elmar Prenner

Abstract:

The human body offers many avenues that can be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: (1) direct access to the lungs and the large surface area they provide, and (2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of cells and a monolayer of lipids and a few proteins, called the lung surfactant; this monolayer consists of ~90% lipids and ~10% proteins produced by the alveolar epithelial cells. The two major lipid classes, phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, represent 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli; it is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important, owing to the large surface area of NPs and their potential for specific targeting. Therefore, studying the interaction of NPs with the lung surfactant, and whether they affect its stability, becomes essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of the interaction of polymeric NPs. This biomimetic model will function as a fast initial mode of testing whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids. Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system; in the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is little monolayer data on the biophysical analysis of the 16:0/16:1 lipids, so further analysis of them is discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, monitoring changes to a monolayer's surface pressure upon NP interaction. Furthermore, Brewster Angle Microscopy (BAM) is employed to visualize changes to the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that are also dependent on the monolayer fluidity. Furthermore, the film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery: it has been shown that NPs within a certain size range are safe for the pulmonary route, but little is known about the mode of interaction of those polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.
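
For context, the surface pressure recorded in a Langmuir trough is the reduction in surface tension produced by the film (standard definition):

\[ \pi = \gamma_0 - \gamma \]

where γ₀ is the surface tension of the clean subphase (about 72 mN/m for water at room temperature) and γ is that of the film-covered interface, so a change in π upon NP injection signals incorporation of material into, or disruption of, the monolayer.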

Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles

Procedia PDF Downloads 182
249 Testing of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results on the global behavior of twelve 1:2-scale reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed-joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. A confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie-columns and tie-beams on the perimeter of the wall to anchor the joint reinforcement. In addition, two bare frames with characteristics identical to those of the infilled frames were tested, in order to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. The experimental results were also compared with the predictions of the Mexican code. All the specimens were tested in cantilever under reversible cyclic lateral load; to simulate gravity load, a constant vertical load was applied on the top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode, and sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on performance, many findings previously reported for confined masonry walls were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The lateral strength predicted by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 79
248 Bisphenol-A Concentrations in Urine and Drinking Water Samples of Adults Living in Ankara

Authors: Hasan Atakan Sengul, Nergis Canturk, Bahar Erbas

Abstract:

Drinking water is indispensable for life, and with the increasing awareness of communities, the content of drinking water and tap water has become a matter of curiosity. The presence of Bisphenol-A (BPA) tops the list of these concerns. Bisphenol-A is the chemical most used worldwide for the production of polycarbonate plastics and epoxy resins, and people are exposed almost every day to this endocrine-disrupting chemical. An average of 5.4 billion kilograms of Bisphenol-A is manufactured each year. The linear formula of Bisphenol-A is (CH₃)₂C(C₆H₄OH)₂, its molecular weight is 228.29, and its CAS number is 80-05-7. Bisphenol-A, an industrial chemical, is used together with various other chemicals in the manufacturing of plastics, and in the raw materials of packaging as a monomer of polycarbonate and epoxy resins. Bisphenol-A passes into foodstuffs from their packaging; it contaminates the food and enters the body when the food is consumed. International research shows that BPA is transported through body fluids, leading to hormonal disorders in animals. Experimental studies on animals report that BPA exposure also affects the sex of the newborn and its time to reach adolescence. The extent to which similar endocrine-disrupting effects occur in humans is debated in many studies. In our country, detailed studies on BPA have not been carried out; however, 'BPA-free' labels are beginning to appear on plastic packaging such as baby products and water carboys. This increases public interest in the subject, yet it also causes information pollution. All national and international studies on exposure to BPA were examined, and Ankara province was designated as the testing region. To assess the effects of daily plastic-use habits and the amounts of BPA removed from the body, samples from volunteers living in Ankara were analyzed in the laboratory by LC-MS/MS on a Sciex instrument, and the amounts of BPA exposure and removal were determined by comparison with previously reported results; the relation between our results and similar international studies is also exhibited. For the amount of BPA in drinking water, a minimum of 0.028 µg/L, a maximum of 1.136 µg/L, a mean of 0.29194 µg/L, and SD (standard deviation) = 0.199 were detected. For the amount of BPA in urine, a minimum of 0.028 µg/L, a maximum of 0.48 µg/L, a mean of 0.19181 µg/L, and SD = 0.099 were detected. In conclusion, no linear correlation was found between the amount of BPA in drinking water and the amount of BPA in urine (r = -0.151). The p value for the comparison of the BPA amounts in drinking water and urine is 0.004, which indicates a significant difference and that the amounts of BPA in urine depend on the amounts in drinking water (p < 0.05). This reveals that environmental exposure and daily plastic-use habits also have direct effects on the human body.
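
As an illustration of the correlation analysis described above, the short sketch below computes Pearson's r and its p value for paired water/urine measurements; the arrays are hypothetical placeholders, not the study's data:

    # Pearson correlation between paired drinking-water and urine BPA levels.
    from scipy.stats import pearsonr

    water_bpa = [0.03, 0.12, 0.25, 0.40, 0.75, 1.10]   # ug/L, hypothetical
    urine_bpa = [0.05, 0.30, 0.10, 0.45, 0.08, 0.22]   # ug/L, hypothetical

    r, p = pearsonr(water_bpa, urine_bpa)
    print(f"r = {r:.3f}, p = {p:.3f}")  # r near zero -> no linear correlation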

Keywords: analyze of bisphenol-A, BPA, BPA in drinking water, BPA in urine

Procedia PDF Downloads 132
247 Paramedic Strength and Flexibility: Findings of a 6-Month Workplace Exercise Randomised Controlled Trial

Authors: Jayden R. Hunter, Alexander J. MacQuarrie, Samantha C. Sheridan, Richard High, Carolyn Waite

Abstract:

Workplace exercise programs have been recommended to improve the musculoskeletal fitness of paramedics with the aim of reducing injury rates, and while they have shown efficacy in other occupations, to our best knowledge they have not been delivered and evaluated in Australian paramedics. This study investigated the effectiveness of a 6-month workplace exercise program (MedicFit; MF) in improving paramedic fitness with or without health coach (HC) support. A group of regional Australian paramedics (n=76; 43 male; mean ± SD 36.5 ± 9.1 years; BMI 28.0 ± 5.4 kg/m²) were randomised at the station level to either exercise with remote health coach support (MFHC; n=30), exercise without health coach support (MF; n=23), or a no-exercise control (CON; n=23). MFHC and MF participants received a 6-month, low-to-moderate intensity resistance and flexibility exercise program to be performed on station without direct supervision. Available exercise equipment included dumbbells, resistance bands, Swiss balls, medicine balls, kettlebells, BOSU balls, yoga mats, and foam rollers. MFHC and MF participants were also provided with a comprehensive exercise manual including sample exercise sessions aimed at improving musculoskeletal strength and flexibility, with full exercise prescription (i.e., sets, reps, duration, load). Changes to upper-body (push-ups), lower-body (wall squat) and core (plank hold) strength and to flexibility (back scratch and sit-and-reach tests) after the 6-month intervention were analysed using repeated measures ANOVA to compare changes between groups and over time. Upper-body strength (+20.6%; p < 0.01; partial eta squared = 0.34, a large effect) and lower-body strength (+40.8%; p < 0.05; partial eta squared = 0.08, a moderate effect) increased significantly, with no interaction or group effects. Changes to core strength (+1.4%; p=0.17) and to both upper-body (+19.5%; p=0.56) and lower-body (+3.3%; p=0.15) flexibility were non-significant, with no interaction or group effects observed. While upper- and lower-body strength improved over the course of the intervention, providing a 6-month workplace exercise program with or without health coach support did not confer any greater strength or flexibility benefits than exercise testing alone (CON). Although exercise adherence was not measured, it is possible that participants require additional methods of support, such as face-to-face exercise instruction and guidance and individually tailored exercise programs, to achieve adequate participation and improvements in musculoskeletal fitness. This presents challenges for more remote paramedic stations without regular face-to-face access to suitably qualified exercise professionals, and future research should investigate the effectiveness of other forms of exercise delivery and guidance for these paramedic officers, such as remotely facilitated digital exercise prescription and monitoring.
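
For reference, the partial eta squared effect size reported above is computed from the ANOVA sums of squares (standard definition):

\[ \eta_p^2 = \frac{SS_{effect}}{SS_{effect} + SS_{error}} \]

with values of roughly 0.01, 0.06, and 0.14 conventionally interpreted as small, moderate, and large effects, consistent with the labels used above.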

Keywords: workplace exercise, paramedic health, strength training, flexibility training

Procedia PDF Downloads 142
246 Hydroxyapatite Nanorods as Novel Fillers for Improving the Properties of PBSu

Authors: M. Nerantzaki, I. Koliakou, D. Bikiaris

Abstract:

This study evaluates the hypothesis that the incorporation of fibrous hydroxyapatite nanoparticles (nHA) with high crystallinity and high aspect ratio, synthesized by a hydrothermal method, into poly(butylene succinate) (PBSu) improves the bioactivity of the aliphatic polyester and affects new bone growth, inhibiting resorption and enhancing bone formation. Hydroxyapatite nanorods were synthesized using a simple hydrothermal procedure: first, the HPO₄²⁻-containing solution was added drop-wise into the Ca²⁺-containing solution, with the Ca/P molar ratio adjusted to 1.67; the HA precursor was then treated hydrothermally at 200°C for 72 h. The resulting powder was characterized using XRD, FT-IR, TEM, and EDXA. Afterwards, PBSu nanocomposites containing 2.5 wt% nHA were prepared by an in situ polymerization technique for the first time and were examined as potential scaffolds for bone engineering applications. For comparison purposes, composites containing either 2.5 wt% micro-Bioglass (mBG) or 2.5 wt% mBG-nHA were prepared and studied, too. The composite scaffolds were characterized using SEM, FTIR, and XRD. Mechanical testing (Instron 3344) and contact angle measurements were also carried out. Enzymatic degradation was studied in an aqueous solution containing a mixture of R. oryzae and P. cepacia lipases at 37°C and pH = 7.2. An in vitro biomineralization test was performed by immersing all samples in simulated body fluid (SBF) for 21 days. Biocompatibility was assessed using rat adipose stem cells (rASCs), genetically modified by nucleofection with DNA encoding SB100x transposase and pT2-Venus-neo transposon expression plasmids in order to obtain fluorescence images. Cell proliferation and viability of cells on the scaffolds were evaluated using fluorescence microscopy and the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assay. Finally, osteogenic differentiation was assessed by staining rASCs with alizarin red using the cetylpyridinium chloride (CPC) method. The TEM image of the fibrous HA nanoparticles synthesized in the present study clearly showed the fibrous morphology of the powder. The addition of nHA significantly decreased the contact angle of the samples, indicating that the materials become more hydrophilic; hence they absorb more water and subsequently degrade more rapidly. The in vitro biomineralization test confirmed that all samples were bioactive, as mineral deposits were detected by X-ray diffractometry after incubation in SBF. Metabolic activity of rASCs on all PBSu composites was high and increased from day 1 of culture to day 14. On day 28, the metabolic activity of rASCs cultured on samples enriched with bioceramics was significantly decreased, due to possible differentiation of rASCs to osteoblasts. Staining rASCs with alizarin red after 28 days in culture confirmed our initial hypothesis, as the presence of calcium was detected, suggesting osteogenic differentiation of rASCs on the PBSu/nHA/mBG 2.5% and PBSu/mBG 2.5% composite scaffolds.
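
For context, the target Ca/P molar ratio of 1.67 corresponds to stoichiometric hydroxyapatite:

\[ Ca_{10}(PO_4)_6(OH)_2 \quad \Rightarrow \quad \frac{Ca}{P} = \frac{10}{6} \approx 1.67 \]

so holding the precursor solutions at this ratio favours phase-pure HA over calcium-deficient apatites.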

Keywords: biomaterials, hydroxyapatite nanorods, poly(butylene succinate), scaffolds

Procedia PDF Downloads 310
245 Detection of Ice Formation Processes Using Multiple High-Order Ultrasonic Guided Wave Modes

Authors: Regina Rekuviene, Vykintas Samaitis, Liudas Mažeika, Audrius Jankauskas, Virginija Jankauskaitė, Laura Gegeckienė, Abdolali Sadaghiani, Shaghayegh Saeidiharzand

Abstract:

Icing brings significant damage to aviation and renewable energy installations. Air-conditioning and refrigeration systems, wind turbine blades, and airplane and helicopter blades often suffer from icing phenomena, which cause severe energy losses and impair aerodynamic performance. The icing process is a complex phenomenon with many different causes and types; icing mechanisms, distributions, and patterns are still relevant research topics. The adhesion strength between ice and surfaces differs in different icing environments, which makes the task of anti-icing very challenging. Techniques for the various icing environments must satisfy different demands and requirements (e.g., efficiency, light weight, low power consumption, low maintenance and manufacturing costs, reliable operation). It is noticeable that most methods are oriented toward a particular sector, and adapting them to, or suggesting them for, other areas is quite problematic. These methods often use various technologies and have different specifications, sometimes with no clear indication of their efficiency. There are two major groups of anti-icing methods: passive and active. Active techniques have high efficiency but, at the same time, quite high energy consumption, and they require intervention in the structure's design; the vast majority of these methods also require specific knowledge and personnel skills. The main effect of passive methods (ice-phobic and superhydrophobic surfaces) is to delay ice formation and growth or to reduce the adhesion strength between the ice and the surface. These methods are time-consuming, depend on forecasting, can be applied on small surfaces only for specific targets, and are mostly non-biodegradable (except for anti-freezing proteins). There is, however, quite promising information on ultrasonic ice mitigation methods that employ ultrasonic guided waves (UGW). These methods have the characteristics of low energy consumption, low cost, light weight, and easy replacement and maintenance. However, fundamental knowledge of the ultrasonic de-icing methodology is still limited. The objective of this work was to identify ice formation processes and their progress by employing an ultrasonic guided wave technique. Throughout this research, a universal set-up for acoustic measurement of ice formation in real conditions (temperature range from +24°C to -23°C) was developed. Ultrasonic measurements were performed using high-frequency 5 MHz transducers in a pitch-catch configuration. Wave modes suitable for the detection of the ice formation phenomenon on a copper metal surface were selected, and the interaction between the selected wave modes and ice formation processes was investigated. It was found that the selected wave modes are sensitive to temperature changes, and it was demonstrated that the proposed ultrasonic technique can be successfully used for the detection of ice layer formation on a metal surface.

Keywords: ice formation processes, ultrasonic GW, detection of ice formation, ultrasonic testing

Procedia PDF Downloads 65
244 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading

Authors: Robert Caulk

Abstract:

A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (Catboost, LightGBM, Sklearn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides a road map for future development in FreqAI.
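
A minimal sketch of the pattern described above, using scikit-learn only: engineer rolling features from a price series, fit a regressor on a sliding training window, and reject test points whose features fall outside the training parameter space. The features, thresholds, and synthetic data are illustrative assumptions, not FreqAI's actual implementation:

    # Illustrative adaptive-training loop with an outlier guard in feature space.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(0, 1, 2000)) + 100   # synthetic price series

    def make_features(p, lookback=10):
        rows = [[p[i] - p[i - 1],                      # 1-step return
                 p[i] - p[i - lookback],               # momentum
                 p[i - lookback:i].std()]              # rolling volatility
                for i in range(lookback, len(p) - 1)]
        target = p[lookback + 1:] - p[lookback:-1]     # next-step price move
        return np.array(rows), target

    X, y = make_features(prices)
    split = 1500                                       # sliding training window
    scaler = StandardScaler().fit(X[:split])
    X_train, X_test = scaler.transform(X[:split]), scaler.transform(X[split:])

    model = GradientBoostingRegressor().fit(X_train, y[:split])

    # Outlier guard: distrust predictions whose features sit far from the
    # training parameter space (mean distance to nearest training neighbours).
    nn = NearestNeighbors(n_neighbors=5).fit(X_train)
    threshold = np.quantile(nn.kneighbors(X_train)[0].mean(axis=1), 0.99)
    trusted = nn.kneighbors(X_test)[0].mean(axis=1) < threshold
    preds = model.predict(X_test)
    print(f"trusted {trusted.sum()}/{len(preds)} predictions")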

Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration

Procedia PDF Downloads 92
243 Innovation and Entrepreneurship in the South of China

Authors: Federica Marangio

Abstract:

This study looks at the knowledge triangle of research-education-innovation as the growth engine of an inclusive and sustainable society, in which research is the strategic process that allows the acquisition of knowledge, innovation valorizes the knowledge acquired, and education is the factor enabling human capital to create entrepreneurial capital. Where do Italy and China stand in the global geography of innovation? Europe is calling for smart, inclusive and sustainable growth through a specialization process that addresses social and economic challenges and is able to understand the characteristics of specific geographic areas. One may well ask why coming up with entrepreneurial ideas is not as simple as it looks in all geographic areas. Given that technology plus human capital should be the means through which it is possible to innovate and to boost a culture of innovation, young educated people can be seen as society's agents of change, and the importance of investigating the skills and competencies that lead to innovation becomes clear. By starting innovation-based activities, other countries at the international level are now able to be part of a healthy innovative ecosystem, the result of strong growth policies that enable innovation. Analyzing the geography of innovation on a global scale, it comes to light that innovative entrepreneurship is the process that portrays the competitiveness of regions in the knowledge-based economy: a strategic process able to match intellectual capital and market opportunities. The level of innovative entrepreneurship is the result not only of the endogenous growth ability of enterprises, but also of significant relations with other enterprises, universities, other centers of education, and institutions. To obtain more innovative entrepreneurship, it is necessary to stimulate more synergy between all these territorial actors in order to create, access and valorize existing and new knowledge ready to be disseminated. This study focuses on individuals' lived experience; the researcher believed that she could not understand human actions without understanding the meaning that actors attribute to their thoughts, feelings and beliefs, and so she needed to capture these deeper perspectives through face-to-face interaction. A case study approach will contribute to the betterment of knowledge in this field. The case study will present a picture of the innovative ecosystem and of the entrepreneurial mindset as a key ingredient of endogenous growth and a must for sustainable local and regional development and social cohesion. It will be realized by analyzing two Chinese companies. A structured set of questions will be asked in order to gain details on what generated success or failure in different situations, both in the past and at the time of the research. Everything will be recorded so that no important information is lost during the transcription phase. While this work is not geared toward testing a priori hypotheses, it is nevertheless useful to examine whether the projects undertaken by the companies were stimulated by enabling factors that, as a result, enhanced or hampered the local innovation culture.

Keywords: entrepreneurship, education, geography of innovation

Procedia PDF Downloads 420
242 Health and Performance Fitness Assessment of Adolescents in Middle Income Schools in Lagos State

Authors: Onabajo Paul

Abstract:

The testing and assessment of the physical fitness of school-aged adolescents in Nigeria has been going on for several decades. Originally, these tests focused strictly on identifying health and physical fitness status and comparing the results of adolescents with others. There is now considerable interest in the health- and performance-related fitness of adolescents, in which attained results are compared with criteria representing positive health rather than simply with the scores of others. Although the physical education program is studied in secondary schools and physical activities are encouraged, regular assessment of students' fitness levels and health status appears to be scarce or not done in these schools. The purpose of the study was to assess the health- and performance-related fitness of adolescents in middle-income schools in Lagos State. A total of 150 students were selected using the simple random sampling technique. Participants were measured on hand-grip strength, sit-ups, the PACER 20-meter shuttle run, standing long jump, weight, and height. The data collected were analyzed with descriptive statistics of means, standard deviations, and ranges, and compared with fitness norms. The majority, 111 (74.0%), of the adolescents achieved the healthy fitness zone, 33 (22.0%) were very lean, and 6 (4.0%) needed improvement according to the normative standard of the Body Mass Index test. For muscular strength, the majority, 78 (52.0%), were weak, 66 (44.0%) were normal, and 6 (4.0%) were strong according to the normative standard of the hand-grip strength test. For aerobic capacity fitness, the majority, 93 (62.0%), needed improvement and were at health risk, 36 (24.0%) achieved the healthy fitness zone, and 21 (14.0%) needed improvement according to the normative standard of the PACER test. For hip flexibility, 48 (32.0%) of the participants had good status, 38 (25.3%) had fair status, 27 (18.0%) needed improvement, 24 (16.0%) had very good status, and 13 (8.7%) had excellent status. For muscular endurance, 61 (40.7%) had average status, 30 (20.0%) had poor status, 29 (19.3%) had good status, 28 (18.7%) had fair status, and 2 (1.3%) had excellent status according to the normative standard of the sit-up test. For jump ability, 52 (34.7%) had low fitness, 47 (31.3%) had marginal fitness, 31 (20.7%) had good fitness, and 20 (13.3%) had high performance fitness according to the normative standard of the standing long jump test. Based on the findings, it was concluded that the majority of the adolescents had a healthy Body Mass Index status and performed well in both the hip flexibility and muscular endurance tests, whereas the majority performed poorly in the aerobic capacity, muscular strength, and jump ability tests. It was recommended that, to enhance wellness, adolescents should be involved in physical activities and recreation lasting 30 minutes three times a week, and that schools should run fitness programs for students on a regular basis in both senior and junior classes so as to develop good cardio-respiratory and muscular fitness and improve the overall health of the students.
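
For reference, the Body Mass Index used for the healthy-fitness-zone classification above is computed as (standard definition):

\[ BMI = \frac{mass\ (kg)}{height^2\ (m^2)} \]

so, for example, a student weighing 60 kg at a height of 1.60 m has BMI = 60 / 1.60² ≈ 23.4 kg/m².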

Keywords: adolescents, health-related fitness, performance-related fitness, physical fitness

Procedia PDF Downloads 356
241 Arc Plasma Thermochemical Preparation of Coal to Effective Combustion in Thermal Power Plants

Authors: Vladimir Messerle, Alexandr Ustimenko, Oleg Lavrichshev

Abstract:

This work presents a plasma technology for solid fuel ignition and combustion. Plasma activation promotes more effective and environmentally friendly ignition and combustion of low-rank coal. To realise this technology at coal-fired power plants, plasma-fuel systems (PFS) were developed. PFS improve the efficiency of power coal combustion and decrease harmful emissions. A PFS is a pulverized coal burner equipped with an arc plasma torch, which is the main element of the system. The plasma-forming gas is air; it is blown through the electrodes, forming a plasma flame whose temperature varies from 5000 to 6000 K. The plasma torch power varies from 100 to 350 kW, and its geometrical dimensions are as follows: height 0.4-0.5 m, diameter 0.2-0.25 m. The basis of the PFS technology is the plasma thermochemical preparation of coal for burning: the pulverized coal and air mixture is heated by the arc plasma up to the temperature at which the coal volatiles are released and the char carbon is partially gasified. In the PFS, the coal-air mixture is deficient in oxygen, so carbon is oxidised mainly to carbon monoxide. As a result, at the PFS exit a highly reactive mixture is formed of combustible gases and partially burned char particles, together with products of combustion, while the temperature of the gaseous mixture is around 1300 K. Further mixing with air promotes intensive ignition and complete combustion of the prepared fuel. PFS have been tested for boiler start-up and pulverized coal flame stabilization in different countries on power boilers of 75 to 950 t/h steam capacity, equipped with different types of pulverized coal burners (direct flow, muffle and swirl burners). In PFS testing, power coals of all ranks (lignite, bituminous, anthracite and their mixtures) were incinerated; their volatile content was from 4 to 50%, ash content varied from 15 to 48%, and heat of combustion was from 1600 to 6000 kcal/kg. To show the advantages of the plasma technology over conventional coal combustion technologies, a numerical investigation of the plasma ignition, gasification and thermochemical preparation of pulverized coal for incineration in an experimental furnace with a heat capacity of 3 MW was carried out, using two computer codes. The computer simulation experiments were conducted for a low-rank bituminous coal of 44% ash content. Boiler operation was studied in the conventional combustion mode and with arc plasma activation of coal combustion. The experiments and computer simulations showed the ecological efficiency of the plasma technology: when a plasma torch operates in the regime of plasma stabilization of a pulverized coal flame, NOx emission is halved and the amount of unburned carbon is reduced fourfold. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.613.21.0005, project RFMEFI61314X0005).
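
For context, the oxygen-deficient preparation stage described above favours partial oxidation of the char carbon, with complete combustion following downstream once secondary air is mixed in (standard combustion chemistry):

\[ C + \tfrac{1}{2}O_2 \rightarrow CO, \qquad CO + \tfrac{1}{2}O_2 \rightarrow CO_2 \]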

Keywords: coal, ignition, plasma-fuel system, plasma torch, thermal power plant

Procedia PDF Downloads 279
240 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis and electrophoresis on a microfluidic chip (MFC), mass spectrometry combined with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for rapid fabrication of the MFCs. A master-form of silicon and SU-8 2025 photoresist (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. K8 glass and Sylgard® 184 PDMS (Dow Corning Corp.) were used for fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample; therefore, estimating the magnitude of the EOF and finding ways to regulate it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., in which a borate buffer was used. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective way of modifying the surface of the MFCs, from the standpoint of obtaining both the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188. This method of modification was therefore selected for the treatment of the channels of the MFCs used for the separation of a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40 and 50 nucleotides. Electrophoresis was performed on the MFAS-01 device (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7 M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms: ~275 s for the untreated MFC and ~220 s for the MFCs treated with the Kolliphor® P 188 solution. The study of physical-chemical methods of surface modification of MFCs made it possible to choose the most effective way of reducing the EOF, modification with an aqueous solution of Kolliphor® P 188, which decreased the separation time of the oligonucleotide mixture by about 20%. Further optimization of the channel modification method will allow the sample separation time to be decreased further and the throughput of the analysis to be increased.
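
For reference, in the current-monitoring method of Huang et al. cited above, the electroosmotic mobility is estimated from the time t required for a buffer of slightly different concentration to displace the original buffer along a channel of length L under an applied voltage V (a standard relation; the specific channel dimensions used in this study are not given in the abstract):

\[ \mu_{eo} = \frac{v_{eo}}{E} = \frac{L/t}{V/L} = \frac{L^2}{V\,t} \]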

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 416
239 Music Genre Classification Based on Non-Negative Matrix Factorization Features

Authors: Soyon Kim, Edward Kim

Abstract:

In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers provides the statistical data used to design automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured with timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. In addition to these conventional long-term feature vectors, NMF-based feature vectors are proposed for use in genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used; for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing, and 10-fold cross-validation was used to increase the reliability of the experiments. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values corresponding to the classification probabilities for the 10 genres; an NMF-BFV feature vector likewise had a dimensionality of 10. Combined with the basic long-term features (statistical and modulation spectrum features), the NMF features increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, while the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM and NMF-BFV together with an SVM using a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
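
A minimal sketch of the per-genre NMF feature idea is given below. It is an illustration under assumptions, not the authors' code: toy random data stands in for the log-spectral-magnitude features, and a reconstruction-quality score stands in for the NMF weighting values described above.

```python
# Illustrative sketch (library calls, data shapes and the scoring rule are
# assumptions): per-genre NMF bases are learned, and a song is encoded by how
# well each genre's basis reconstructs it, giving a 10-dim feature for an SVM.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_genres, n_train, n_bins = 10, 20, 128
# toy non-negative "log-spectral magnitude" data per genre
train = {g: rng.random((n_train, n_bins)) for g in range(n_genres)}

# train one NMF basis per genre class
bases = {g: NMF(n_components=4, max_iter=500, random_state=0).fit(X)
         for g, X in train.items()}

def nmf_feature(x):
    """10-dim feature: reconstruction quality of x under each genre's basis."""
    feats = []
    for g in range(n_genres):
        w = bases[g].transform(x[None, :])          # NMF weighting values
        recon = w @ bases[g].components_
        feats.append(-np.linalg.norm(x - recon))    # higher = better fit
    return np.array(feats)

X = np.vstack([nmf_feature(x) for g in train for x in train[g]])
y = np.repeat(np.arange(n_genres), n_train)
clf = SVC(kernel="rbf").fit(X, y)                   # RBF-kernel SVM, as in the paper
print(clf.score(X, y))
```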

Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)

Procedia PDF Downloads 303
238 Challenging Role of Talent Management, Career Development and Compensation Management toward Employee Retention and Organizational Performance with Mediating Effect of Employee Motivation in Service Sector of Pakistan

Authors: Muhammad Younas, Sidra Sawati, M. Razzaq Athar

Abstract:

Organizational development history reveals that it has always been a challenge to identify and fathom the role of talent management, career development and compensation management in employee retention and organizational performance. Organizations strive hard to measure the impact of all the factors that affect employee retention and organizational performance, and researchers have worked extensively to understand the relationship of the independent variables, i.e. talent management, career development and compensation management, with the dependent variables, i.e. employee retention and organizational performance. Employees equipped with up-to-date skills and long-lasting loyalty play a significant role in the successful achievement of both short-term and long-term organizational goals, and retaining valuable and resourceful employees for longer periods is equally essential for meeting those goals. Organizations that spend a reasonable share of their resources on measures that help retain employees, through talent management and satisfactory career development, enjoy a competitive edge over their competitors. Human resource is regarded as one of the most precious and difficult resources to manage. It has its own needs and requirements, and it easily falls prey to monotony when career development is lacking. The wants and aspirations of this resource are seldom met completely, but they can be managed through career development and compensation management. In this era of competition, organizations have to take viable steps to manage their resources, especially their human resources. Top management and managers keep working towards workable solutions to the challenges of career development and compensation management, as their ultimate goal is to ensure organizational performance at an optimum level. The current study examined the impact of talent management, career development and compensation management on employee retention and organizational performance, with the mediating effect of employee motivation, in the service sector of Pakistan. The study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories, which suggest that by increasing internal resources, organizations can manage employee talent, career development through compensation management, and employee motivation more effectively, resulting in effective execution of HRM practices for employee retention and enabling the organization to achieve and sustain competitive advantage through optimal performance. Data were collected through a structured questionnaire based on adopted instruments, after testing reliability and validity. A total of 300 employees from 30 firms in the service sector of Pakistan were sampled through a non-probability sampling technique. Regression analysis revealed that talent management, career development and compensation management have a significant positive impact on employee retention and perceived organizational performance. The results further showed that employee motivation has a significant mediating effect on employee retention and organizational performance. The interpretation of the findings, limitations, and theoretical and managerial implications are also discussed.

Keywords: career development, compensation management, employee retention, organizational performance, talent management

Procedia PDF Downloads 321
237 Teachers' and Learners' Experiences of Learners' Writing in English First Additional Language

Authors: Jane-Francis A. Abongdia, Thandiswa Mpiti

Abstract:

There is an international concern to develop children's literacy skills. In many parts of the world, becoming fluent in a second language is essential for gaining meaningful access to education, the labour market and broader social functioning. In spite of these efforts, the problem persists: the level of English language proficiency is far from satisfactory, and for many learners these goals remain unattainable. The issue is more complex in South Africa, as learners are immersed in a second language (L2) curriculum. South Africa is a prime example of a country facing the dilemma of how to effectively equip a majority of its population with English as a second language or first additional language (FAL). Given the multilingual nature of South Africa, with eleven official languages, and the position and power of English, the study investigates teachers' and learners' experiences of isiXhosa- and Afrikaans-background learners' writing in English First Additional Language (EFAL). Moreover, possible causes of writing difficulties and teachers' practices for writing are explored. The theoretical and conceptual framework for the study is provided by constructivist and sociocultural theories. In exploring these issues, a qualitative approach using semi-structured interviews, classroom observations and document analysis was adopted, and the data were analysed through critical discourse analysis (CDA). The study identified a weak correlation between teachers' beliefs and their actual teaching practices: although the teachers believe that writing is as important as listening, speaking, reading, grammar and vocabulary, and that it needs regular practice, the data reveal that they fail to put their beliefs into practice. Moreover, the data revealed that learners were hindered by their home language: when they did not know a word, they would write the isiXhosa or Afrikaans equivalent. Code-switching seems to have instilled a sense of "dependence on translations", where some learners would not even try to answer English questions but would wait for the teacher to translate the questions into isiXhosa or Afrikaans before attempting to give answers. The findings of the study show a marked improvement in the writing performance of learners who used the process approach to writing. These findings demonstrate the need to assist teachers in shifting away from focusing only on learners' performance (testing and grading) towards a stronger emphasis on the process of writing. The study concludes that the process approach to writing could enable teachers to focus on the various parts of the writing process, which can give learners more freedom to experiment with their language proficiency. It would require that teachers develop a deeper understanding of the process/genre approaches to teaching writing advocated by CAPS. All in all, the study shows that both learners and teachers face numerous challenges relating to writing, which means that more work still needs to be done in this area. The present study argues that teachers of EFAL learners should approach writing as a critical and core aspect of learners' education, and that learners should be exposed to intensive writing activities throughout their school years.

Keywords: constructivism, English second language, language of learning and teaching, writing

Procedia PDF Downloads 219
236 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction

Authors: Ahlad Neti, Elisa Arch, Martha Hall

Abstract:

Studies have shown that individuals who use ankle-foot orthoses (AFOs) report a high level of dissatisfaction with their current AFOs, and point to a focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints before designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction, such as heaviness, bulk and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards; given the large body of research concerning these standards, they were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO's medical purpose was compiled. These metrics and constraints establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement: user-perspective metrics include comfort, discreetness, aesthetics, ease of use and compatibility with clothing, while medical-purpose metrics include biomechanical functionality, durability and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis, and two of them – the piano wire model and the segmented model – were selected for prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics, offering greater stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed, with participants being recruited, to conduct design validation and verification.

Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices

Procedia PDF Downloads 159
235 Polyurethane Membrane Mechanical Property Study for a Novel Carotid Covered Stent

Authors: Keping Zuo, Jia Yin Chia, Gideon Praveen Kumar Vijayakumar, Foad Kabinejadian, Fangsen Cui, Pei Ho, Hwa Liang Leo

Abstract:

The carotid artery is the major vessel supplying blood to the brain. Carotid artery stenosis is one of the three major causes of stroke, and stroke is the fourth leading cause of death and the first leading cause of disability in most developed countries. Although there is increasing interest in carotid artery stenting for the treatment of cervical carotid artery bifurcation atherosclerotic disease, currently available bare metal stents cannot provide adequate protection against the detachment of plaque fragments from the diseased carotid artery, which can result in the formation of micro-emboli and subsequent stroke. Our research group has recently developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation, while retaining the ability to preserve flow to the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the biomechanical properties of PU membranes of different concentration configurations in order to refine the stent coating technique and enhance the clinical performance of our novel carotid covered stent. Results from this study also provide the material property information needed for accurate simulation analysis of our stents. Method: Medical grade polyurethane (ChronoFlex AR) was used to prepare the PU membrane specimens. Different PU membrane configurations were subjected to uniaxial testing: 22%, 16%, and 11% PU solutions were made by mixing the original solution with the appropriate amount of dimethylacetamide (DMAC). The specimens were immersed in physiological saline solution for 24 hours before testing, and all specimens were moistened with saline solution before mounting and subsequent uniaxial testing. The specimens were preconditioned by loading the PU membrane sample to a peak stress of 5.5 MPa for 10 consecutive cycles at a rate of 50 mm/min, and were then stretched to failure at the same loading rate. Result: The stress-strain response curves of all PU membrane samples exhibited nonlinear characteristics. The ultimate failure stress of the 22% PU membrane was significantly higher than that of the 16% membrane (p<0.05). In general, our preliminary results showed that the lower-concentration PU membrane is stiffer than the higher-concentration one. From the perspective of mechanical properties, the 22% PU membrane is the better choice for the covered stent. Interestingly, the hyperelastic Ogden model is able to accurately capture the nonlinear, isotropic stress-strain behavior of the PU membrane, with an R2 of 0.9977 ± 0.00172. This result will be useful for future biomechanical analysis of our stent designs and will play an important role in computational modeling of our covered-stent fatigue study.
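
An Ogden fit of the kind reported above can be sketched as follows. The data are synthetic and the one-term incompressible form is an assumption (the abstract does not state the model order used):

```python
# Minimal sketch: fitting a one-term incompressible Ogden model to uniaxial
# data. For principal stretch lam, the nominal (engineering) stress is
#   P(lam) = mu * (lam**(alpha - 1) - lam**(-alpha / 2 - 1))
# Synthetic data below stands in for the PU membrane measurements.
import numpy as np
from scipy.optimize import curve_fit

def ogden_uniaxial(lam, mu, alpha):
    return mu * (lam**(alpha - 1) - lam**(-alpha / 2 - 1))

lam = np.linspace(1.0, 2.0, 30)                      # stretch ratios
stress = ogden_uniaxial(lam, 2.0, 3.5) \
         + np.random.default_rng(1).normal(0, 0.02, lam.size)  # "measured" MPa

(mu_fit, alpha_fit), _ = curve_fit(ogden_uniaxial, lam, stress, p0=(1.0, 2.0))
resid = stress - ogden_uniaxial(lam, mu_fit, alpha_fit)
r2 = 1 - resid.var() / stress.var()                  # goodness of fit, as quoted
print(f"mu = {mu_fit:.2f} MPa, alpha = {alpha_fit:.2f}, R^2 = {r2:.4f}")
```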

Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain

Procedia PDF Downloads 312
234 Comparison of Non-destructive Devices to Quantify the Moisture Content of Bio-Based Insulation Materials on Construction Sites

Authors: Léa Caban, Lucile Soudani, Julien Berger, Armelle Nouviaire, Emilio Bastidas-Arteaga

Abstract:

Improvement of the thermal performance of buildings is a high priority for the construction industry. With growing environmental concerns, new types of construction materials are being developed, including bio-based insulation materials, which capture carbon dioxide, can be produced locally, and have good thermal performance. However, their behavior with respect to moisture transfer still raises issues: because of their high porosity, mass transfer is more significant in these materials than in mineral insulation, so they can be more sensitive to moisture disorders such as mold growth, condensation, or a decrease in the wall's energy efficiency. For this reason, the initial moisture content on the construction site is a crucial piece of knowledge. Measuring moisture content in a laboratory is a well-mastered task. Several methods exist, but the simplest, and the reference method, is gravimetric: a material is weighed dry and wet, and its moisture content is deduced mathematically. Non-destructive (NDT) methods are promising tools for determining moisture content easily and quickly, in a laboratory or on construction sites, but the quality and reliability of the measurements are influenced by several factors. Classical portable NDT devices usable on-site measure the capacitance or the resistivity of materials; because water's electrical properties are very different from those of construction materials, the water content can be deduced from these measurements. However, most moisture meters are designed to measure wooden materials, and only some of them can be adapted to construction materials through calibration curves; these devices are almost never calibrated for insulation materials. The main objective of this study is therefore to determine the reliability of moisture meters when measuring bio-based insulation materials, to establish which of the capacitive and resistive methods is the more accurate, and to identify which device gives the best results. Several bio-based insulation materials are tested: recycled cotton, two types of wood fiber of different densities (53 and 158 kg/m3), and a mix of linen, cotton, and hemp. Since it is also important to assess the behavior of a mineral material, glass wool is measured as well. An experimental campaign is performed in a laboratory, with a gravimetric measurement of the materials carried out at every moisture content level. These levels are set using a climatic chamber by fixing the relative humidity at a constant temperature. The mass-based moisture contents measured in this way are taken as reference values, and the results given by the moisture meters are compared to them; a complete analysis of the measurement uncertainty is also performed. These results are used to analyze the reliability of the moisture meters as a function of the material and its water content. This makes it possible to determine whether the moisture meters are reliable and which one is the most accurate. The selected device will then be used for future measurements on construction sites to assess the initial hygrothermal state of insulation materials, in both new-build and renovation projects.
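
For reference, the gravimetric method against which the meters are compared reduces to a one-line computation; the masses below are hypothetical:

```python
# Gravimetric (dry-basis) moisture content, the laboratory reference used in
# the study; the sample masses here are illustrative placeholders.
def moisture_content_dry_basis(m_wet_g, m_dry_g):
    """Mass-based moisture content, as a percentage of the dry mass."""
    return 100.0 * (m_wet_g - m_dry_g) / m_dry_g

print(moisture_content_dry_basis(m_wet_g=54.3, m_dry_g=50.0))  # -> 8.6 (%)
```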

Keywords: capacitance method, electrical resistance method, insulation materials, moisture transfer, non-destructive testing

Procedia PDF Downloads 128
233 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for soil nutrient mapping are based on laboratory tests of samples obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques used to spatially interpolate values at unobserved locations from observations at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points; hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. The ANN has become a popular prediction tool because it overcomes certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples was used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the same smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples had 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples, while achieving the specified level of accuracy.
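
The virtual-sample step can be sketched minimally as below; the network size, inputs and toy data are assumptions, not the study's configuration:

```python
# Sketch of the prediction step: a back-propagation multilayer feed-forward
# network maps sample coordinates to N, P, K values, and its predictions at
# unsampled points become "virtual" samples to densify the kriging input.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
xy_sampled = rng.random((118, 2))      # known sample locations (toy values)
npk_sampled = rng.random((118, 3))     # measured N, P, K at those locations (toy)

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(xy_sampled, npk_sampled)       # trained on the actual samples

xy_unsampled = rng.random((118, 2))    # locations without laboratory data
npk_virtual = net.predict(xy_unsampled)  # virtual samples fed to kriging
print(npk_virtual.shape)               # (118, 3)
```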

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 74
232 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker

Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán

Abstract:

The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, encoding methods such as one-hot encoding or k-mers have been explored. This work proposes additional approaches for encoding DNA sequences, in order to compare them with existing techniques and determine whether they provide improvements or whether current methods offer superior results. Data from the 16S rRNA gene, a universal marker, were used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract the relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes: 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, ROC curve, and F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracy varies between encoding methods by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
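
Two of the compared encodings can be sketched in a few lines; this is illustrative only and does not reproduce the study's preprocessing:

```python
# Minimal sketches of one-hot encoding and k-mer counting for a DNA fragment,
# two of the encoding methods compared above (the Fourier and wavelet
# encodings would be applied to a numeric mapping of the same sequence).
from itertools import product
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One row per base, one column per nucleotide symbol."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        out[i, idx[b]] = 1.0
    return out                                   # shape (len(seq), 4)

def kmer_counts(seq, k=3):
    """Fixed-length vector of counts over all 4**k possible k-mers."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return np.array([counts[m] for m in kmers])  # shape (4**k,)

seq = "ACGTACGGTTCA"   # toy fragment standing in for a 16S rRNA sequence
print(one_hot(seq).shape, kmer_counts(seq, k=3).shape)  # (12, 4) (64,)
```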

Keywords: DNA encoding, Fourier transform, machine learning

Procedia PDF Downloads 29
231 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh samples from hospitalized patients. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donor samples and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on the 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity performance equivalent to the Abbott ARCHITECT HBsAg Qualitative II assay, with an average bleed difference since the first reactive bleed of 0.13. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
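
The quoted specificity intervals are consistent with an exact (Clopper-Pearson) binomial confidence interval. Whether this exact method was used, and the per-cohort split of the six false positives, are assumptions in the sketch below:

```python
# Hedged sketch: exact (Clopper-Pearson) binomial CI for a proportion of
# k "successes" (true negatives) out of n samples. The split of 3 of the 6
# false positives into the donor cohort is illustrative, chosen to match the
# quoted 99.95% point estimate.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# 6047 blood donors, 3 assumed false positives -> 6044 true negatives
lo, hi = clopper_pearson(6044, 6047)
print(f"specificity = {6044 / 6047:.4%}, 95% CI {lo:.4%} - {hi:.4%}")
```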

Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay

Procedia PDF Downloads 92
230 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and play an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal component analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; proper determination of the reference set is clearly key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses, so the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process; as a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables' trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classifying batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected, and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts' evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, the synchronized datasets obtained from these methods performed differently when input into the KNN classification algorithm. Kassidas, MacGregor and Taylor's method (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
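
The core DTW recurrence that aligns trajectories of unequal duration can be sketched as follows; this is the textbook formulation, not the KMT implementation evaluated in the study:

```python
# Minimal dynamic time warping sketch: the dynamic-programming recurrence that
# aligns two batch trajectories of unequal duration by locally stretching or
# compressing the time axis.
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D trajectories a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch a
                                 D[i, j - 1],      # stretch b
                                 D[i - 1, j - 1])  # advance both
    return D[n, m]

# e.g. temperature trajectories of a 495-min and a 1,170-min batch (toy data,
# resampled once per minute)
ref = np.sin(np.linspace(0, 3, 495))
new = np.sin(np.linspace(0, 3, 1170))
print(dtw_distance(ref, new))
```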

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 168
229 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications

Authors: Marina Shargorodsky

Abstract:

Aim: Gestational weight gain (GWG) has been linked to altered future weight-gain trajectories and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as on placental vascular circulation, inflammatory lesions and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental findings were subdivided, according to the criteria of the Society for Pediatric Pathology, into lesions consistent with maternal vascular and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin. Results: IMT levels differed between the groups and were significantly higher in Group 1 than in Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple linear regression analysis of IMT included variables based on their associations in univariate analyses, with a backward approach; included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of the study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly more frequent in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism of these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation and pregnancy outcomes, deserves further investigation.

Keywords: obesity, pregnancy, complications, weight gain

Procedia PDF Downloads 55
228 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the Study: This research provides new insights into the risks of digital embedded infrastructure. Through this research, we analyze each risk and its potential negation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research aimed at creating more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks involving hardware, tampering, and cyberattacks were collected, researched, and evaluated to build a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The list below shows the identified passive and active risks evaluated in the research. Passive risks: (1) Hardware failures - critical systems relying on high-rate, high-quality data are growing in number; SCADA systems for infrastructure are good examples of such systems. (2) Hardware delivering erroneous data - sensors break, and when they do, they don't always go silent; they can keep running while delivering garbage data, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection - erroneous sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity - the weight of the collected data affects data mobility. (5) Cost inhibitors - running services that need huge centralized computing resources is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service - one of the simplest attacks, in which an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware - anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware - a kind of malware, but so different in its implementation that it deserves its own mention; the goal of this software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing - by spoofing DNS calls, valid requests and data dumps can be redirected to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system to create a data-echo noise loop. After testing multiple potential solutions, we found that the most promising remedy for these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and the behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for future autonomous IoT deployments, as it provides separation from the open Internet while remaining accessible via the blockchain keys.
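
As an illustration of the proposed idea (an interpretive sketch under assumptions, not the paper's implementation), a minimal majority-vote check chained with hashes shows how peers could reject deviant readings and keep the accepted history tamper-evident:

```python
# Heavily simplified illustration: peers vote on a reading's plausibility, and
# each accepted value is chained to the previous one with a hash, so altering
# accepted history is detectable. Tolerances and voting rule are invented.
import hashlib, json

chain = [{"reading": None, "prev": "0" * 64}]   # genesis entry

def peer_votes(reading, peer_estimates, tol=5.0):
    """Each peer accepts the reading if it is close to its own estimate."""
    return [abs(reading - est) <= tol for est in peer_estimates]

def submit(reading, peer_estimates):
    votes = peer_votes(reading, peer_estimates)
    if sum(votes) <= len(votes) // 2:           # no majority -> deviant, reject
        return False
    prev_hash = hashlib.sha256(
        json.dumps(chain[-1], sort_keys=True).encode()).hexdigest()
    chain.append({"reading": reading, "prev": prev_hash})
    return True

estimates = [20.1, 19.8, 20.4, 20.0, 35.0]      # one faulty peer among five
print(submit(20.2, estimates))                  # True  - majority agrees
print(submit(80.0, estimates))                  # False - bad-hardware injection filtered
```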

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 108
227 The Effect of Degraded Shock Absorbers on the Safety-Critical Tipping and Rolling Behaviour of Passenger Cars

Authors: Tobias Schramm, Günther Prokop

Abstract:

In Germany, the number of road fatalities has been falling since 2010 at a more moderate rate than before. At the same time, the average age of all registered passenger cars in Germany is rising continuously. Studies show that there is a correlation between the age and mileage of passenger cars and the degradation of their chassis components, and that degraded shock absorbers increase the braking distance of passenger cars and have a negative impact on driving stability. The exact effect of degraded vehicle shock absorbers on road safety is still the subject of research. A shock absorber examination as part of the periodic technical inspection is mandatory in only a few countries; in Germany, there is as yet no such requirement. More comprehensive findings on the effect of degraded shock absorbers on the safety-critical driving dynamics of passenger cars could provide further arguments for introducing mandatory shock absorber testing as part of the periodic technical inspection. The specific causal chains of untripped rollover accidents are also still the subject of research; however, current results show that the high proportion of sport utility vehicles in the vehicle fleet significantly increases the probability of untripped rollover accidents. The aim of this work is to estimate the effect of degraded twin-tube shock absorbers on the safety-critical tipping and rolling behaviour of passenger cars, which can lead to untripped rollover accidents. A characteristic-curve-based five-mass full-vehicle model and a semi-physical phenomenological shock absorber model were set up, parameterized and validated. The shock absorber model is able to reproduce the damping characteristics of twin-tube shock absorbers with oil and gas loss for various excitations. The full-vehicle model was validated with steering-wheel-angle sine-sweep maneuvers. The model was then used to simulate steering-wheel-angle sine and fishhook maneuvers, which probe the safety-critical tipping and rolling behavior of passenger cars. The simulations were carried out over a realistic parameter space in order to demonstrate the effect of various vehicle characteristics on the influence of degraded shock absorbers. The results show that degraded shock absorbers have a negative effect on the tipping and rolling behavior of all passenger cars. Shock absorber degradation leads to a significant increase in the observed roll angles, particularly in the range of the roll natural frequency. This amplification has a negative effect on the wheel load distribution during the investigated maneuvers. In particular, the height of the vehicle's center of gravity and its stabilizer stiffness have a major influence on the effect of degraded shock absorbers on the tipping and rolling behaviour of passenger cars.
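
The reported roll-angle amplification near the roll natural frequency can be illustrated with a single-degree-of-freedom roll model; all parameters below are invented for illustration and are not the study's values:

```python
# 1-DOF roll model sketch (hypothetical parameters): reducing the roll damping
# constant c raises the roll-angle gain near the roll natural frequency,
# mirroring the amplification reported for degraded shock absorbers.
import numpy as np

Ixx = 600.0       # roll inertia, kg m^2 (hypothetical)
k_roll = 8.0e4    # effective roll stiffness, N m/rad (springs + stabilizers)
f = np.linspace(0.2, 4.0, 400)                 # excitation frequency, Hz
w = 2 * np.pi * f

for label, c in [("intact", 6.0e3), ("degraded", 2.0e3)]:  # roll damping, N m s/rad
    # steady-state roll angle per unit roll-moment amplitude
    gain = 1.0 / np.abs(k_roll - Ixx * w**2 + 1j * c * w)
    f_peak = f[np.argmax(gain)]
    print(f"{label}: peak gain {gain.max():.2e} rad/(N m) at {f_peak:.2f} Hz")
```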

Keywords: numerical simulation, safety-critical driving dynamics, suspension degradation, tipping and rolling behavior of passenger cars, vehicle shock absorber

Procedia PDF Downloads 21