Search results for: engineering materials and applications
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13965

3225 Positive Effects of Aerobic Exercise after Bone Marrow Stem Cell Transplantation on Recovery of Dopaminergic Neurons and Promotion of Angiogenesis Markers in the Striatum of Parkinsonian Rats

Authors: S. A. Hashemvarzi, A. Heidarianpour, Z. Fallahmohammadi, M. Pourghasem, M. Kaviani

Abstract:

Introduction: Parkinson’s disease (PD) is a progressive neurodegenerative disorder of the central nervous system characterized by the loss of dopaminergic neurons in the substantia nigra, resulting in a loss of dopamine release in the striatum. Non-drug treatment options such as stem cell transplantation and exercise have been considered for the treatment of Parkinson's disease. Purpose: The purpose of this study was to evaluate the effect of aerobic exercise after bone marrow stem cell transplantation on the recovery of dopaminergic neurons and the promotion of angiogenesis markers in the striatum of parkinsonian rats. Materials and Methods: 42 male Wistar rats were divided randomly into six groups: Normal (N), Sham (S), Parkinson’s (P), Stem cells transplanted Parkinson’s (SP), Exercised Parkinson’s (EP) and Stem cells transplanted + Exercised Parkinson’s (SEP). To create the Parkinson's model, the striatum was lesioned by injection of 6-hydroxydopamine into the striatum using a stereotaxic apparatus. Stem cells were derived from the femoral and tibial bone marrow of 6-8-week-old male rats. After cultivation, approximately 5×10⁵ cells in 5 microliters of medium were injected into the striatum of the rats through the channel. Aerobic exercise consisted of 8 weeks of treadmill running at a speed of 15 meters per minute. At the end, all subjects were decapitated and striatal tissues were separately isolated for measurement of vascular endothelial growth factor (VEGF), dopamine (DA) and tyrosine hydroxylase (TH) levels. Results: VEGF, DA and TH levels in the striatum of parkinsonian rats increased significantly in the treatment groups (SP, EP and SEP), especially in the SEP group, compared to the P group after treatment (P<0.05). Conclusion: The findings indicate that BMSC transplantation in combination with exercise would have synergistic effects leading to functional recovery, recovery of dopaminergic neurons and promotion of angiogenesis markers in the striatum of parkinsonian rats.

Keywords: stem cells, treadmill training, neurotrophic factors, Parkinson

Procedia PDF Downloads 337
3224 Hematological Malignancies in Children and Parental Occupational Exposure

Authors: H. Kalboussi, A. Aloui, W. Boughattas, M. Maoua, A. Brahem, S. Chatti, O. El Maalel, F. Debbabi, N. Mrizak, Y. Ben Youssef, A. Khlif, I. Bougmiza

Abstract:

Background: In recent decades, the incidence of children's hematological malignancies has been increasing worldwide, including in Tunisia. Their severity is reflected in their substantial medical, social and economic impact. This increase remains unexplained, and the involvement of genetic, environmental and occupational factors is strongly suspected. Materials and Methods: Our study is a case-control survey conducted in the Farhat Hached University Hospital of Sousse during the period between 1 July 2011 and 30 June 2012, comparing children with acute leukemia to control children unaffected by neoplastic disease. Cases and controls were matched by age and gender. Our objectives were to: - Describe the socio-occupational characteristics of the parents of children with acute leukemia. - Identify potential occupational factors implicated in the genesis of acute leukemia. Results: The number of acute leukemia cases in the Hematology Service and day hospital of the Farhat Hached University Hospital during the study period was 66, comprising 40 boys and 26 girls, with a sex ratio of 1.53. Our cases and controls were matched by age and gender. The risk of leukemia was higher in children of smoking fathers (p = 0.02, OR = 2.24, CI = [1.11 - 4.52]). The risk of leukemia in children of alcoholic fathers was also higher (p = 0.009, OR = 3.9, CI = [1.33 - 11.39]); after adjustment for the different variables, the difference remained significant (pa = 0.03, ORa = 3.5, CIa = [1.09 - 11.6]). 25.7% of cases had a family history of blood disease and neoplasia, whereas no controls did; the difference was statistically significant (p = 0.006, OR = 1.46, CI = [1.38 - 1.56]). The parental occupational exposures associated with the occurrence of acute leukemia in children were: - Pesticides, with a statistically significant difference (p = 0.03, OR = 2.94, CI = [1.06 - 8.13]); this difference persisted after adjustment for the different variables (pa = 0.01, ORa = 3.75, CIa = [1.27 - 11.03]). - Cement, with a statistically non-significant difference (p = 0.2); this difference became significant after adjustment for the different variables (pa = 0.03, ORa = 2.67, CIa = [1.06 - 6.7]). Conclusion: Parental exposure to occupational risk factors may play a role in the pathogenesis of acute leukemia in children.

Keywords: hematological malignancies, children, parents, occupational exposure

Procedia PDF Downloads 313
3223 The Role of Hypothalamus Mediators in Energy Imbalance

Authors: Maftunakhon Latipova, Feruza Khaydarova

Abstract:

Obesity is considered a chronic metabolic disease that can occur at any age. Body weight is regulated through the complex interaction of interrelated systems that control the body's energy balance. Energy imbalance, in which the supply of energy from food exceeds the energy needs of the body, is the cause of obesity and overweight. Obesity is closely related to impaired appetite regulation, and the hypothalamus is a key site for the neural regulation of food consumption. The hypothalamic nuclei are interconnected and interdependent in receiving, integrating and sending hunger signals to regulate appetite. Purpose of the study: to identify markers of eating behavior. Materials and methods: Screening was carried out to identify eating disorders in 200 overweight and obese men and women aged 18 to 35 years and to assess the markers orexin A and neuropeptide Y. Questionnaires covering eating disorders and hidden depression (Zung scale) were administered to the 200 participants. Anthropometric measurements included OT, OB, BMI, weight, and height. Based on the collected data, participants were divided into 3 groups: people with obesity, people with overweight, and a control group of healthy people. Results: Of the 200 analysed persons, 86% had eating disorders. Of these, 60% of the eating disorders were associated with childhood. According to the Zung test results, about 37% were in a normal condition, 20% had mild depressive disorder, 25% moderate depressive disorder, and 18% suffered from severe depressive disorder without knowing it. Group 1 (people with obesity) had eating disorders with moderate or severe depressive disorder, and group 2 (overweight) had mild depressive disorder. According to the laboratory data, the first group had the lowest serum concentrations of orexin A and neuropeptide Y. Conclusions: Overweight and obesity are the first signals of many diseases, and the prevention and detection of these disorders will prevent various diseases, including type 2 diabetes. The etiology of obesity is associated with eating disorders and with signal transmission in the orexinergic system of the hypothalamus.

Keywords: obesity, endocrinology, hypothalamus, overweight

Procedia PDF Downloads 66
3222 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution sampling, which leads to high-resolution meshes. While high-resolution meshes give better-quality rendering and are therefore often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our solution is a two-step approach for random accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes, and we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology work on consumer electronic devices. The objects can be compressed on the server and transmitted over the network; the progressive decompression can then be performed on the client device and rendered. Multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a smoother user experience. The approach can also be used in WebVR for widely used activities such as virtual reality shopping, watching movies and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over the serial implementation) and quality of user experience in the web browser.
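The data-parallel compression step outlined in this abstract can be pictured with a minimal sketch, assuming a mesh reduced to a flat face-index array; the naive equal-size partitioning and the use of zlib as a stand-in codec are illustrative simplifications, not the authors' actual progressive coder.

```python
import zlib
import numpy as np
from multiprocessing import Pool

def partition_mesh(faces: np.ndarray, n_parts: int):
    """Naive partitioning: split the face-index array into roughly equal sub-meshes."""
    return np.array_split(faces, n_parts)

def compress_submesh(faces: np.ndarray) -> bytes:
    """Stand-in for a progressive mesh codec: plain zlib over the raw face indices."""
    return zlib.compress(faces.astype(np.int32).tobytes(), 9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.integers(0, 10_000, size=(200_000, 3))     # synthetic face-index buffer
    sub_meshes = partition_mesh(faces, n_parts=8)
    with Pool(processes=8) as pool:                         # data parallelism over sub-meshes
        blobs = pool.map(compress_submesh, sub_meshes)
    ratio = sum(len(b) for b in blobs) / faces.nbytes
    print(f"compressed to {ratio:.1%} of the original buffer size")
```

In the described system, each compressed sub-mesh blob would then be streamed independently and decompressed in a separate browser-engine thread before rendering.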

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 165
3221 Computation of Radiotherapy Treatment Plans Based on CT to ED Conversion Curves

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Radiotherapy treatment planning computers use CT data of the patient. For the computation of a treatment plan, the treatment planning system must have information on the electron densities of the tissues scanned by the CT. This information is given by the conversion curve from CT number to electron density (ED), or simply the calibration curve. Every treatment planning system (TPS) has built-in default CT to ED conversion curves for the CT scanners of different manufacturers. However, it is always recommended to verify the CT to ED conversion curve before actual clinical use. The objective of this study was to check how well the default curve provided matches the curve actually measured on a specific CT scanner, and how much this influences the calculations of the treatment planning computer. The examined CT scanners were from the same manufacturer, but were four different scanners from three generations. The measurements of all calibration curves were done with the dedicated CIRS 062M Electron Density Phantom. The phantom was scanned and, from the real HU values read at the CT console computer, CT to ED conversion curves were generated for different materials at the same tube voltage of 140 kV. Another phantom, the CIRS Thorax 002 LFC, which represents an average human torso in proportion, density and two-dimensional structure, was used for verification. The treatment planning was done on CT slices of the scanned CIRS Thorax 002 LFC phantom for selected cases. Points of interest were set in the lungs and in the spinal cord, and the doses were recorded in the TPS. The overall calculated treatment times for the four scanners and the default data did not differ by more than 0.8%. The overall interest point dose in bone differed by at most 0.6%, while for single fields the maximum difference was 2.7% (lateral field). The overall interest point dose in the lungs differed by at most 1.1%, while for single fields the maximum difference was 2.6% (lateral field). It is known that the user should verify the CT to ED conversion curve, but developing countries often face a lack of QA equipment and frequently use the default data provided. We have concluded that the CT to ED curves obtained differ at certain points of the curve, generally in the region of higher densities. The influence on the treatment planning result is not significant, but it definitely does make a difference in the calculated dose.
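The calibration curve itself is essentially a lookup from CT number (HU) to relative electron density; a minimal sketch follows, where the (HU, relative ED) pairs are placeholders rather than the values measured on the CIRS 062M phantom in this study.

```python
import numpy as np

# Placeholder calibration points (CT number in HU, relative electron density).
# These are NOT the values measured in the study; substitute the pairs read from
# the scanned CIRS 062M phantom inserts at 140 kV.
hu_points  = np.array([-1000.0, -500.0, 0.0, 300.0, 1200.0])
red_points = np.array([    0.0,    0.5, 1.0,   1.1,    1.7])

def hu_to_relative_ed(hu: float) -> float:
    """Piecewise-linear CT-number-to-ED lookup, as a TPS calibration curve is applied."""
    return float(np.interp(hu, hu_points, red_points))

print(hu_to_relative_ed(40.0))   # e.g. a soft-tissue-like voxel
```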

Keywords: computation of treatment plan, conversion curve, radiotherapy, electron density

Procedia PDF Downloads 478
3220 Ethical, Legal and Societal Aspects of Unmanned Aircraft in Defence

Authors: Henning Lahmann, Benjamyn I. Scott, Bart Custers

Abstract:

Suboptimal adoption of AI in defence organisations carries risks for the protection of the freedom, safety, and security of society. Despite the vast opportunities that defence AI technology presents, there are also a variety of ethical, legal, and societal concerns. To ensure the successful use of AI technology by the military, ethical, legal, and societal aspects (ELSA) need to be considered, and the associated concerns continuously addressed at all levels. This includes ELSA considerations during the design, manufacturing and maintenance of AI-based systems, as well as their utilisation via appropriate military doctrine and training. This raises the question of how defence organisations can remain strategically competitive and at the edge of military innovation while respecting the values of their citizens. This paper will explain the set-up and share preliminary results of a 4-year research project commissioned by the National Research Council in the Netherlands on the ethical, legal, and societal aspects of AI in defence. The project plans to develop a future-proof, independent, and consultative ecosystem for the responsible use of AI in the defence domain. In order to achieve this, the lab shall devise a context-dependent methodology that focuses on the ‘analysis’, ‘design’ and ‘evaluation’ of the ELSA of AI-based applications within the military context, which include, inter alia, unmanned aircraft. This is bolstered by the Lab also recognising and complementing existing methods with regard to human-machine teaming, explainable algorithms, and value-sensitive design. Such methods will be modified for the military context and applied to pertinent case studies. These case studies include, among others, the application of autonomous robots (incl. semi-autonomous ones) and AI-based methods against cognitive warfare. As the perception of the application of AI in the military context, by both society and defence personnel, is important, the Lab will study how these perceptions evolve and vary in different contexts. Furthermore, since they may influence people's perception, the Lab will also monitor developments in the global technological, military and societal spheres. Although the emphasis of the research project is on different forms of AI in defence, it focuses on several case studies. One of these case studies is on unmanned aircraft, which will also be the focus of this paper. Hence, the ethical, legal, and societal aspects of unmanned aircraft in the defence domain will be discussed in detail, including but not limited to privacy issues. Typical other issues concern security (for people, objects, data or other aircraft), privacy (sensitive data, hindrance, annoyance, data collection, function creep), chilling effects, PlayStation mentality, and PTSD.

Keywords: autonomous weapon systems, unmanned aircraft, human-machine teaming, meaningful human control, value-sensitive design

Procedia PDF Downloads 87
3219 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste

Authors: Timilehin Martins Oyinloye, Won Byong Yoon

Abstract:

Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is developing fast in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials is the foundation for extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometries. In many situations, a trial-and-error method is used to determine the optimum printing conditions, which results in wasted time and resources. In this report, three moisture levels of surimi paste were investigated to find the optimum 3DP material and printing conditions by probing the rheology, the flow characteristics in the nozzle, and the post-deposition process using a finite element method (FEM) model. Rheological tests revealed that surimi pastes with 82% moisture are suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using the additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top part of the model, which eventually resulted in the sample collapsing. As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter is a key parameter affecting the geometric accuracy of 3DP of surimi paste.

Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste

Procedia PDF Downloads 60
3218 Enhanced Methane Yield from Organic Fraction of Municipal Solid Waste with Coconut Biochar as Syntrophic Metabolism Biostimulant

Authors: Maria Altamirano, Alfonso Duran

Abstract:

Biostimulation has recently become important as a way to improve the stability and performance of the anaerobic digestion (AD) process. This strategy involves the addition of nutrients or supplements to improve the rate of degradation by a native microbial consortium. With the aim of biostimulating syntrophy between secondary fermenting bacteria and methanogenic archaea, thereby improving metabolite degradation and its efficient conversion to methane, the addition of conductive materials, mainly carbon-based, has been studied. This research seeks to highlight the effect that coconut biochar (CBC) has on the methanogenic conversion of the organic fraction of municipal solid waste (OFMSW), analyzing the surface chemistry properties that give the biochar its capacity to serve as a redox mediator in the anaerobic digestion process. The biochar characterization techniques were electrical conductivity (EC), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), Fourier transform infrared transmission spectroscopy (FTIR) and cyclic voltammetry (CV). The effect of coconut biochar addition was studied using the Automatic Methane Potential Test System (AMPTS II), applying a one-way analysis of variance to determine the dose that leads to higher methane yield. The surface chemistry of the CBC could confer properties that enhance the AD process, such as the presence of alkaline and alkaline earth metals and its hydrophobicity, which may be related to its buffering capacity and the adsorption of polar and non-polar compounds such as NH4+ and CO2. It also has aromatic functional groups, such as quinones, whose potential as redox mediators has been demonstrated, and its morphology allows it to form an immobilizing matrix that favors closer activity among the syntrophic microorganisms. This directly contributed to the oxidation of secondary metabolites and the final reduction to methane, whose yield increased by 39% compared to controls at a CBC dose of 1 g/L.

Keywords: anaerobic digestion, biochar, biostimulation, syntrophic metabolism

Procedia PDF Downloads 182
3217 A Standard-Based Competency Evaluation Scale for Preparing Qualified Adapted Physical Education Teachers

Authors: Jiabei Zhang

Abstract:

Although adapted physical education (APE) teacher preparation programs are available in the nation, a consistent standard-based competency evaluation scale for preparing qualified personnel to teach children with disabilities in APE cannot be identified in the literature. The purpose of this study was to develop a standard-based competency evaluation scale for assessing qualifications for teaching children with disabilities in APE. Standard-based competencies were reviewed and identified based on research evidence documented as effective in teaching children with disabilities in APE. A standard-based competency scale was then developed for assessing qualifications for teaching children with disabilities in APE. This scale includes 20 standard-based competencies and a 4-point Likert-type scale for each competency. The first standard-based competency is knowledge of the causes of disabilities and their effects. The second competency is the ability to assess the physical education skills of children with disabilities. The third competency is the ability to collaborate with other personnel. The fourth competency is knowledge of measurement and evaluation. The fifth competency is understanding of federal and state laws. The sixth competency is knowledge of the unique characteristics of all learners. The seventh competency is the ability to write objectives in behavioral terms. The eighth competency is knowledge of developmental characteristics. The ninth competency is knowledge of normal and abnormal motor behaviors. The tenth competency is the ability to analyze and adapt physical education curriculums. The eleventh competency is understanding of the history and philosophy of physical education. The twelfth competency is understanding of curriculum theory and development. The thirteenth competency is the ability to utilize instructional designs and plans. The fourteenth competency is the ability to create and implement physical activities. The fifteenth competency is the ability to utilize technology applications. The sixteenth competency is understanding of the value of program evaluation. The seventeenth competency is understanding of professional standards. The eighteenth competency is knowledge of focused instruction and individualized interventions. The nineteenth competency is the ability to complete a research project independently. The twentieth competency is the ability to teach children with disabilities in APE independently. The 4-point Likert-type scale ranges from 1 for incompetent to 4 for highly competent. The scale is used to assess whether a candidate who has completed all coursework is eligible to receive an endorsement for teaching children with disabilities in APE, based on the grades earned in the three courses targeted at each standard-based competency. The mean grade received in the three courses primarily addressing a standard-based competency is mapped to a competency level on the above scale: level 4 for a mean grade of A across the three courses, level 3 for a mean grade of B, and so on. A candidate should receive a mean score of 3 (competent) or higher (highly competent) across the 19 standard-based competencies after completing all specified courses in order to receive an endorsement for teaching children with disabilities in APE. The validity, reliability, and objectivity of this standard-based competency evaluation scale are to be documented.
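A minimal sketch of the scoring rule described above, assuming the implied grade-to-level mapping (A to level 4, B to 3, C to 2, D to 1) and the "mean level of at least 3" eligibility threshold; the helper names and sample grades are hypothetical.

```python
# Hypothetical helper names and sample grades; only the mapping rule comes from the text.
GRADE_TO_LEVEL = {"A": 4, "B": 3, "C": 2, "D": 1}

def competency_level(course_grades):
    """Mean level over the three courses targeted at one standard-based competency."""
    return sum(GRADE_TO_LEVEL[g] for g in course_grades) / len(course_grades)

def eligible_for_endorsement(grades_per_competency):
    """Endorsement requires a mean level of 3 (competent) or higher across competencies."""
    levels = [competency_level(g) for g in grades_per_competency]
    return sum(levels) / len(levels) >= 3

print(eligible_for_endorsement([["A", "B", "A"], ["B", "B", "B"], ["A", "A", "B"]]))  # True
```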

Keywords: evaluation scale, teacher preparation, adapted physical education teachers, children with disabilities

Procedia PDF Downloads 111
3216 The Model of Learning Centre on OTOP Production Process Based on Sufficiency Economic Philosophy for Sustainable Life Quality

Authors: Napasri Suwanajote

Abstract:

The purposes of this research were to analyse and evaluate the success factors in the OTOP production process for developing a learning centre on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The research was designed as a qualitative study to gather information from 30 OTOP producers in Bangkontee District, Samudsongkram Province. They were all interviewed on 3 main parts. Part 1 covered the production process, including 1) production, 2) product development, 3) community strength, 4) marketing possibility and 5) product quality. Part 2 evaluated the appropriate success factors, including 1) the analysis of the success factors, 2) the evaluation of the strategy based on the Sufficiency Economic Philosophy and 3) the model of a learning centre on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality. The results showed that the production did not affect the environment and had the potential to continue standard-quality production. The producers used raw materials from within the country. Regarding the product and community strength over the past year, it was found that there was no appropriate packaging showing product identity according to global market standards. The producers needed training on packaging, especially for food and drink products. Regarding product quality and product specification, it was found that the products were certified by the local OTOP standard. There should be a responsible organization to help the uncertified producers pass the standard. However, there was a problem of food contamination, which was hazardous to consumers. The producers should cooperate with the government sector or educational institutes involved in food processing to reach the FDA standard. The results from the small group discussion showed that the community expected higher education and a better standard of living. Some problems reported by the community included informal debt and drugs in the community. There were 8 steps in developing the model of a learning centre on the OTOP production process based on the Sufficiency Economic Philosophy for sustainable life quality.

Keywords: production process, OTOP, sufficiency economic philosophy, marketing management

Procedia PDF Downloads 226
3215 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a consequence, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable, numerical, reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D), decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a consequence, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
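The iteration described above can be summarized compactly as follows; the quadratic form of the mismatch term, the specific form and weight λ of the high-order regularizer, and the fixed step size α are notational assumptions for this sketch rather than details given in the abstract.

```latex
% Cost functional: mismatch with the target field at t = 1 plus a high-order penalty on u_0
J(u_0) \;=\; \tfrac{1}{2}\,\lVert u_1(u_0) - v_1 \rVert_2^2
        \;+\; \tfrac{\lambda}{2}\,\lVert \nabla^{p} u_0 \rVert_2^2

% Iterative update along the gradient, obtained from one adjoint (backward) NSE integration
u_0^{(k+1)} \;=\; u_0^{(k)} \;-\; \alpha \,\nabla_{u_0} J\!\left(u_0^{(k)}\right)
```

Here u_1(u_0) denotes the forward NSE solution at t = 1 started from u_0, so each iteration costs one forward solve and one backward adjoint solve.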

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 218
3214 The Impact of Introspective Models on Software Engineering

Authors: Rajneekant Bachan, Dhanush Vijay

Abstract:

The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.

Keywords: software engineering, architectures, introspective models, operating systems

Procedia PDF Downloads 529
3213 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing (3D printing) processes. These robots have advantages, such as speed and lightness, that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to derive a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, the functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed on the basis of a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
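The workspace-verification idea can be illustrated with a minimal sketch: a point cloud fills the prescribed cylinder, and a candidate set of dimensions is kept only if a simplified linear-delta inverse kinematics (effector and carriage offsets ignored) reaches every point. The crude random search below stands in for the genetic algorithm, and all dimensions are illustrative, not the values obtained in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
TOWER_ANGLES = np.deg2rad([90.0, 210.0, 330.0])        # three vertical rails, 120 deg apart

def reachable(points, rod_len, tower_radius, rail_height):
    """Simplified linear-delta inverse kinematics checked for every workspace point."""
    tx = tower_radius * np.cos(TOWER_ANGLES)
    ty = tower_radius * np.sin(TOWER_ANGLES)
    dx = points[:, None, 0] - tx                       # shape (n_points, 3)
    dy = points[:, None, 1] - ty
    under = rod_len**2 - dx**2 - dy**2                 # term under the square root
    ok = np.all(under > 0.0, axis=1)                   # rods long enough in the x-y plane
    carriage_z = points[:, 2:3] + np.sqrt(np.clip(under, 0.0, None))
    ok &= np.all(carriage_z <= rail_height, axis=1)    # carriages stay on the rails
    return ok

def workspace_cloud(radius, height, n=2000):
    """Random point cloud filling the prescribed cylindrical workspace."""
    r = radius * np.sqrt(rng.random(n))
    a = 2.0 * np.pi * rng.random(n)
    z = height * rng.random(n)
    return np.column_stack([r * np.cos(a), r * np.sin(a), z])

cloud = workspace_cloud(radius=0.15, height=0.20)      # 150 mm radius, 200 mm height
best = None
for _ in range(5000):                                  # crude random search (GA stand-in)
    rod, tower = rng.uniform(0.15, 0.5), rng.uniform(0.15, 0.4)
    if reachable(cloud, rod, tower, rail_height=0.6).all():
        size = rod + tower                             # proxy for overall mechanism size
        if best is None or size < best[0]:
            best = (size, rod, tower)
if best is None:
    print("no feasible design found in the sampled range")
else:
    print(f"smallest feasible design: rod = {best[1]:.3f} m, tower radius = {best[2]:.3f} m")
```

A genetic algorithm replaces the random sampling with selection, crossover and mutation over the candidate dimensions, while the same full-coverage check acts as the feasibility constraint.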

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 182
3212 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running

Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp

Abstract:

Introduction: Running is one of the most popular sports activities. Improper sagittal plane ankle, knee and hip kinematics are considered to be associated with an increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics both in the field and in the laboratory setting, as they are cheaper, more portable, more accessible and easier to use than a 3D motion analysis system. The aims of this study were 1) to compare the results of a 3D gait analysis system and the Coach's Eye (CE) app, and 2) to evaluate the test-retest and intra-rater reliability of the CE app for the sagittal plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to the subject's body. Subjects were asked to run at a self-selected speed on a treadmill. Twenty-five seconds of running were collected for analyzing the kinematics of interest. To measure the sagittal plane hip, knee and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal plane hip, knee and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest (TRR) and intra-rater reliability (IRR). To analyze the agreement between the 3D and 2D outcomes, a Bland and Altman plot was used. The ICC values were, for the ankle at TD (TRR=0.8, IRR=0.94), ankle at TO (TRR=0.9, IRR=0.97), knee at TD (TRR=0.78, IRR=0.98), knee at TO (TRR=0.9, IRR=0.96), hip at TD (TRR=0.75, IRR=0.97), hip at TO (TRR=0.87, IRR=0.98). The Bland and Altman plots displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) of the 3D and 2D outcomes were, for the ankle at TD (MD=3.71, +2SDMD=8.19, -2SDMD=-0.77), ankle at TO (MD=-1.27, +2SDMD=6.22, -2SDMD=-8.76), knee at TD (MD=1.48, +2SDMD=8.21, -2SDMD=-5.25), knee at TO (MD=-6.63, +2SDMD=3.94, -2SDMD=-17.19), hip at TD (MD=1.51, +2SDMD=9.05, -2SDMD=-6.03), hip at TO (MD=-0.18, +2SDMD=12.22, -2SDMD=-12.59). Discussion: The ability to reproduce measurements accurately is valuable in the performance and clinical assessment of joint angle outcomes. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all measured kinematics is excellent (ICC ≥ 0.75). The Bland and Altman plots show large differences in values for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement. Since ankle motion at TD mostly occurs in a non-sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect TO during the stance phase of running.
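The Bland and Altman quantities reported above (MD and ±2SDMD) are straightforward to compute; a minimal sketch follows, in which the two angle arrays are synthetic placeholders rather than the study's 2D and 3D measurements.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Mean difference and +/-2 SD limits of agreement between two measurement methods."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    md = diff.mean()
    sd = diff.std(ddof=1)
    return md, md + 2 * sd, md - 2 * sd

# Synthetic angles (degrees) standing in for the 2D-app and 3D-system measurements.
rng = np.random.default_rng(0)
angle_3d = rng.normal(20.0, 5.0, size=20)
angle_2d = angle_3d + rng.normal(1.5, 2.0, size=20)    # small systematic + random error
md, upper, lower = bland_altman(angle_2d, angle_3d)
print(f"MD = {md:.2f}, +2SDMD = {upper:.2f}, -2SDMD = {lower:.2f}")
```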

Keywords: reliability, running, sagittal plane, two dimensional

Procedia PDF Downloads 195
3211 Effect of Blood Sugar Levels on Short Term and Working Memory Status in Type 2 Diabetics

Authors: Mythri G., Manjunath ML, Girish Babu M., Shireen Swaliha Quadri

Abstract:

Background: The increase in diabetes among the elderly is of concern because, in addition to the wide range of traditional diabetes complications, there is growing evidence that diabetes is associated with an increased risk of cognitive decline. Aims and Objectives: To find out whether there is any association between blood sugar levels and short-term and working memory status in patients with type 2 diabetes. Materials and Methods: The study was carried out on 200 individuals aged 40-65 years, consisting of 100 diagnosed cases of type 2 diabetes mellitus and 100 non-diabetics from the OPD of Mc Gann Hospital, Shivamogga. Rey's Auditory Verbal Learning Test, the Verbal Fluency Test, the Visual Reproduction Test, the Working Digit Span Test and the Validation Span Test were used to assess short-term and working memory. Fasting and post-prandial blood sugar levels were estimated. Statistical analysis was done using SPSS 21. Results: The memory test scores of type 2 diabetics were significantly reduced (p < 0.001) compared to the memory scores of age- and gender-matched non-diabetics. Fasting blood sugar levels were found to have a negative correlation with the memory scores of all 5 tests: AVLT (r=-0.837), VFT (r=-0.888), VRT (r=-0.787), WDST (r=-0.795) and VST (r=-0.943). Post-prandial blood sugar levels were also found to have a negative correlation with the memory scores of all 5 tests: AVLT (r=-0.922), VFT (r=-0.848), VRT (r=-0.707), WDST (r=-0.729) and VST (r=-0.880). The memory scores in all 5 tests were negatively correlated with the FBS and PPBS levels in diabetic patients (p < 0.001). Conclusion: The decreased memory status in diabetic patients may be due to many factors, such as hyperglycemia, vascular disease, insulin resistance and amyloid deposition; other factors, such as the type of diabetes, co-morbidities, age of onset, duration of the disease and type of therapy, may combine to produce additive effects. These observed effects of blood sugar levels on the memory status of diabetics are of potential clinical importance because even mild cognitive impairment could interfere with everyday activities.

Keywords: diabetes, cognition, HRV, respiratory medicine

Procedia PDF Downloads 277
3210 Engineering Method to Measure the Impact Sound Improvement with Floor Coverings

Authors: Katarzyna Baruch, Agata Szelag, Jaroslaw Rubacha, Bartlomiej Chojnacki, Tadeusz Kamisinski

Abstract:

The methodology used to measure the reduction of transmitted impact sound by floor coverings placed on a massive floor is described in ISO 10140-3:2010. To carry out such tests, a standardised reverberation room separated by a standard floor from a second measuring room is required. The need for a special laboratory results in the high cost and low accessibility of this measurement. The authors propose their own engineering method to measure the impact sound improvement of floor coverings. This method does not require standard rooms or a standard floor. This paper describes the measurement procedure of the proposed engineering method. Furthermore, verification tests were performed. Validation of the proposed method was based on an analytical model, a Statistical Energy Analysis (SEA) model and empirical measurements. The obtained results were compared with the corresponding ones from ISO 10140-3:2010 measurements. The study confirmed the usefulness of the engineering method.

Keywords: building acoustic, impact noise, impact sound insulation, impact sound transmission, reduction of impact sound

Procedia PDF Downloads 319
3209 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and for optical clocks based on Ca at 397 nm, Rb at 420.2 nm and Yb at 398.9 nm combined with 556 nm. Most of these applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, since such gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of the first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow line width and the high-quality nitride overgrowth required on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoids the overgrowth. However, the coupling strength is generally lower than that obtained with a Bragg grating embedded in the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shaped structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths and duty ratios were designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process through step-by-step growth to study the growth mode of nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content of the film is 8% according to the XRD mapping measurement, which is in accordance with the design values. By observing samples with growth times of 200 s, 400 s and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of the Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to the realization of GaN DFB lasers by fabricating gratings and performing overgrowth on nano-grating patterned substrates at wafer scale; moreover, the growth dynamics have been analyzed.

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 178
3208 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study

Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie

Abstract:

Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this health problem plays an important role in minimizing its incidence and related complications. The objective of this study was to assess and understand the perception of specific predictive confounding factors for the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic in a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records, alongside specific screening and laboratory tests of the participants, were collected and analyzed to detect the potential influence of confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female (P < 0.005). Patients aged 41-50 years suffered from HT significantly more than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001), and 47.6% had diabetes with HT. A positive family history and a sedentary lifestyle were significantly more common among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more frequent among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants. Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low-density lipoprotein cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further work is required to detect the impact of potential risk factors in order to minimize blood pressure (BP) elevation and reduce the risk of other cardiovascular complications later in life.

Keywords: correlation, hypertension, Iraq, risk factors

Procedia PDF Downloads 126
3207 The Relationship between Personal, Psycho-Social and Occupational Risk Factors with Low Back Pain Severity in Industrial Workers

Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh

Abstract:

Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, and many risk factors are involved in it. The present study focuses on the relation between personal, psycho-social and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5) without LBP were assigned to the control group. All participants were selected from various industrial units and had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration and repetition), and psychosocial factors (stress, occupational satisfaction and security), were collected via consultation with occupational medicine specialists, interviews, and the related questionnaires, as well as the NASA-TLX software and the REBA worksheet. The chi-square test, logistic regression and structural equation modeling (SEM) were used to analyze the data, with IBM SPSS Statistics 24 and Mplus 6 software. Results: 114 (77%) of the individuals were male and 86 (23%) were female. The mean career lengths of the case group and the control group were 10.90 ± 5.92 and 9.22 ± 4.24, respectively. The statistical analysis of the data revealed a significant correlation of posture, smoking, stress, satisfaction, and MWL with occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17) and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. Also, the SEM analysis of the personal, psycho-social and occupational factors with LBP revealed a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace, but posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies for persons in jobs with high risks for LBP are required to decrease the risk of occupational LBP.

Keywords: industrial workers, occupational low back pain, occupational risk factors, psychosocial factors

Procedia PDF Downloads 253
3206 Exploration Of The Nonlinear Viscoelastic Behavior Of Yogurt Using Lissajous Curves

Authors: Hugo Espinosa-Andrews

Abstract:

Introduction: Yogurt is widely accepted worldwide due to its high nutritional value, consistency, and texture. Its rheological properties play a significant role in consumer acceptance and are related to the manufacturing process and formulation. Typically, the viscoelastic characteristics of yogurts are studied using the small amplitude oscillatory shear test; however, the initial stages of flow and oral processing occur in the nonlinear zone, in which a large amplitude oscillatory stress test is applied. The objective of this work was to analyze the nonlinear viscoelastic behavior of commercial yogurts using Lissajous curves. Methods: Two commercial yogurts were purchased in a local store in Guadalajara, Jalisco, Mexico: a natural Greek-style yogurt and a low-fat traditional yogurt. The viscoelastic properties were evaluated using a large amplitude oscillatory stress (LAOS) procedure. A crosshatch geometry of 40 mm and a truncation of 1000 µm were used. Stress sweeps were performed at 6.28 rad/s from 1 to 250 Pa at 5°C. The nonlinear viscoelastic properties were analyzed using Lissajous curves. Results: The yogurts showed strain-dependent viscoelastic behavior, characteristic of deformation-dependent materials. In the low-strain region, the elastic modulus predominated over the viscous modulus, showing gel-elastic properties. The sol-gel transitions were observed at approximately 66.5 Pa for the Greek yogurt, double that detected for the traditional yogurt. The viscoelastic behavior of the yogurts was characteristic of weak excess deformation: behavior indicating a stable molecular structure at rest and a moderate structure at medium shear forces. The normalized Lissajous curves characterized the viscoelastic transitions of the yogurts as the stress increased. Greater viscoelastic deformation was observed in the Greek yogurt than in the traditional yogurt, which is related to the presence of a protein network with a greater degree of crosslinking. Conclusions: The yogurt composition influences the viscoelastic properties of the material. The yogurt with the higher protein percentage has greater viscoelastic and viscous properties, which describe a product of greater consistency and creaminess.
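An elastic Lissajous curve of the kind analyzed above is simply the stress signal plotted against the strain signal over one oscillation cycle; the sketch below uses a synthetic LAOS response with an added third harmonic (not the measured yogurt data) to show how nonlinearity distorts the loop.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic LAOS cycle (not the measured yogurt data): a sinusoidal strain and a stress
# response containing a third harmonic, which is what distorts the Lissajous loop.
omega, gamma0 = 6.28, 1.0
t = np.linspace(0.0, 2.0 * np.pi / omega, 500)
strain = gamma0 * np.sin(omega * t)
stress = 10.0 * np.sin(omega * t + 0.6) + 2.0 * np.sin(3.0 * omega * t + 0.3)

# Normalized elastic Lissajous curve: stress plotted against strain over one cycle.
plt.plot(strain / np.abs(strain).max(), stress / np.abs(stress).max())
plt.xlabel("normalized strain")
plt.ylabel("normalized stress")
plt.title("Elastic Lissajous curve (synthetic LAOS cycle)")
plt.show()
```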

Keywords: yogurt, viscoelastic properties, LAOS, elastic modulus

Procedia PDF Downloads 10
3205 Building Climate Resilience in the Health Sector in Developing Countries: Experience from Tanzania

Authors: Hussein Lujuo Mohamed

Abstract:

Introduction: Public health has always been influenced by climate and weather. Changes in climate and climate variability, particularly changes in weather extremes, affect the environment that provides people with clean air, food, water, shelter, and security. Tanzania is not an exception to the threats of climate change. The health sector is most affected, due to the emergence and proliferation of infectious diseases, which affects the health of the population and thus impacts the achievement of the Sustainable Development Goals. Methodology: A desk review of documented issues pertaining to climate change and health in Tanzania was done using the Google search engine. Keywords included climate change, link, health, and climate initiatives. In cases where information was not available, documents from the Ministry of Health, the Vice President's Office (Environment), Local Government Authorities, the Ministry of Water, WHO, and research and training institutions were reviewed. The reviewed documents from these institutions include policy brief papers, fieldwork activity reports, training manuals, and guidelines. Results: Six main climate resilience activities were identified in Tanzania. These were the development and implementation of climate-resilient water safety plan guidelines for both rural and urban water authorities, capacity building of rural and urban water authorities on the implementation of climate-resilient water safety plans, and capacity strengthening of local environmental health practitioners on mainstreaming climate change and health into comprehensive council health plans. Others were a vulnerability and adaptation assessment for the health sector, mainstreaming climate change into the National Health Policy, and the development of a climate risk communication strategy. In addition, information, education, and communication materials on climate change were developed to sensitize communities and create awareness of climate change issues and their effects on public health. Conclusion: Proper implementation of these interventions will help the country become resilient to many impacts of climate change in the health sector and become a good example for other least developed countries.

Keywords: climate, change, Tanzania, health

Procedia PDF Downloads 113
3204 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, exploration, etc. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does it cause harm through the toxic emissions (for instance, CO2 emissions) of the combustion process necessary to produce energy, but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy, in particular solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of the CO2 emissions produced (from the use of different energy sources: natural gas, gas oil, fuel and solar panels) in the charging process of Segway PT batteries is carried out. To carry out the study with solar energy, a photovoltaic panel and a buck-boost DC/DC block were used. Specifically, the STP005S-12/Db solar panel was used for our experiments. This module is a 5 Wp photovoltaic (PV) module configured with 36 monocrystalline cells connected in series. With these elements, a battery recharge station is built to recharge the robot batteries. For the energy storage of the DC/DC block, a series of ultracapacitors was used. Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors used a fractional control method so that the solar panels supply the maximum allowed power to recharge the robots in the least time. Greenhouse gas emissions from the production of electricity vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil energy source, producing 53% more gas emissions than natural gas and 30% more than fuel. Moreover, it is remarkable that existing fossil fuel technologies produce a high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation, but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
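The carbon-intensity comparison can be sketched as below, using only the relative factors quoted in the abstract (coal emitting 53% more than natural gas and 30% more than fuel); the natural-gas baseline intensity and the energy per charge are placeholder values, not measurements from this work.

```python
# Relative factors follow the abstract: coal emits 53% more than natural gas and 30% more
# than fuel. The natural-gas baseline intensity and the energy per charge are placeholders.
BASELINE_GAS = 0.45                      # kg CO2 per kWh of electricity (illustrative only)
INTENSITY = {
    "natural gas": BASELINE_GAS,
    "coal": BASELINE_GAS * 1.53,
    "fuel": BASELINE_GAS * 1.53 / 1.30,
    "solar (operation)": 0.0,            # negligible emissions during operation
}
CHARGE_ENERGY_KWH = 1.0                  # placeholder energy drawn per battery charge

for source, intensity in INTENSITY.items():
    print(f"{source:>18}: {intensity * CHARGE_ENERGY_KWH:.3f} kg CO2 per charge")
```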

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 416
3203 Acrylate-Based Photopolymer Resin Combined with Acrylated Epoxidized Soybean Oil for 3D-Printing

Authors: Raphael Palucci Rosa, Giuseppe Rosace

Abstract:

Stereolithography (SLA) is one of the 3D-printing technologies that has been steadily growing in popularity for both industrial and personal applications due to its versatility, high accuracy, and low cost. Its printing process consists of using a light emitter to solidify photosensitive liquid resins layer by layer to produce solid objects. However, the majority of the resins used in SLA are derived from petroleum and are characterized by toxicity, stability, and recalcitrance to degradation in natural environments. Aiming to develop an eco-friendly resin, in this work, different combinations of a standard commercial SLA resin (Peopoly UV professional) with a vegetable-based resin were investigated. To reach this goal, different mass concentrations (varying from 10 to 50 wt%) of acrylated epoxidized soybean oil (AESO), a vegetable resin produced from soybean oil, were mixed with a commercial acrylate-based resin. Diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) at 1.0 wt% was used as the photoinitiator, and the samples were printed using a Peopoly Moai 130. The machine was set to the standard configuration used for printing commercial resins. After printing was finished, the excess resin was drained off, and the samples were washed in isopropanol and water to remove any unreacted resin. Finally, the samples were post-cured for 30 min in a UV chamber. FT-IR analysis was used to confirm the UV polymerization of the formulated resin at different AESO/Peopoly ratios. The signals from 1643.7 to 1616 cm⁻¹, which correspond to the C=C stretching of the AESO acrylic acids and the Peopoly acrylic groups, decrease significantly after the reaction; this decrease indicates the consumption of the double bonds during the radical polymerization. Furthermore, the slight shift of the C-O-C signal from 1186.1 to 1159.9 cm⁻¹ and the decrease of the signals at 809.5 and 983.1 cm⁻¹, which correspond to unsaturated double bonds, are both evidence of successful polymerization. Mechanical analyses showed a decrease of 50.44% in tensile strength when adding 10 wt% of AESO, but the value was still in the same range as other commercial resins. The elongation at break increased by 24% with 10 wt% of AESO, and swelling analysis showed that samples with a higher concentration of AESO absorbed less water than their counterparts. Furthermore, high-resolution prototypes were printed using both resins, and visual analysis did not show any significant difference between the two products. In conclusion, the AESO resin was successfully incorporated into a commercial resin without affecting its printability. The bio-based resin showed lower tensile strength than the Peopoly resin due to network loosening, but it was still in the range of other commercial resins. The hybrid resin also showed better flexibility and water resistance than the Peopoly resin without affecting its resolution. Finally, the development of new types of SLA resins is essential to provide sustainable alternatives to commercial petroleum-based ones.
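
A minimal sketch of the mass bookkeeping behind such formulations is given below. The batch size and the assumption that both the AESO and TPO percentages are taken on the total batch mass are illustrative only, since the abstract does not state the exact basis of its percentages; the helper name is hypothetical.

```python
# Minimal sketch (assumptions noted above) of blending a given AESO fraction
# into the commercial resin with TPO photoinitiator at 1.0 wt% of the batch.

def formulate_batch(total_mass_g: float, aeso_wt_pct: float, tpo_wt_pct: float = 1.0) -> dict:
    """Return component masses (g) for one batch of hybrid resin.

    Both percentages are interpreted here as shares of the total batch mass,
    with the commercial resin making up the remainder.
    """
    tpo_g = total_mass_g * tpo_wt_pct / 100.0
    aeso_g = total_mass_g * aeso_wt_pct / 100.0
    commercial_g = total_mass_g - aeso_g - tpo_g
    if commercial_g < 0:
        raise ValueError("AESO + TPO fractions exceed 100 wt%")
    return {"AESO_g": aeso_g, "commercial_resin_g": commercial_g, "TPO_g": tpo_g}

if __name__ == "__main__":
    for pct in (10, 20, 30, 40, 50):  # the AESO range studied, 10-50 wt%
        print(pct, "wt% AESO ->", formulate_batch(total_mass_g=100.0, aeso_wt_pct=pct))
```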

Keywords: 3D-printing, bio-based, resin, soybean, stereolithography

Procedia PDF Downloads 120
3202 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning

Authors: R. Abdulrahman, A. Eardley, A. Soliman

Abstract:

The proliferation in the development of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage development in learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems. Thus, there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to the subject of user acceptance in relation to m-learning activity in nurse education. The model integrates the significant components across eight prominent user acceptance models and therefore introduces a standard measure with core determinants of user behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by modifying the original structure of UTAUT and adding individual innovativeness (II) and quality of service (QoS) to it. The paper goes on to add the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intention to use m-learning. This study uses a technique called ‘convenience sampling’, which involves student volunteers as participants, in order to collect numerical data. A quantitative method of data collection was selected, involving an online survey using a questionnaire form. This form contains 33 questions to measure the six constructs, using a 5-point Likert scale. A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested using a research model that employs structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. It also demonstrates satisfactory, dependable and valid scales for the model constructs. This suggests that further analysis could confirm the model as a valuable instrument for evaluating user acceptance of m-learning activity.
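
As one hedged illustration of the scale-validation step mentioned above, the Python sketch below computes Cronbach's alpha for the items of a single Likert-scale construct. It is not the authors' SEM/CFA analysis; the 42-by-5 response matrix is a random placeholder shaped like 42 respondents answering five items of one construct on a 5-point scale.

```python
# Sketch: internal-consistency check (Cronbach's alpha) for one construct.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_construct = rng.integers(1, 6, size=(42, 5))  # placeholder 5-point responses
    print(f"alpha = {cronbach_alpha(fake_construct):.2f}")
```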

Keywords: mobile learning, nursing institute students’ acceptance of m-learning activity in Saudi Arabia, unified theory of acceptance and use of technology model (UTAUT), structural equation modelling (SEM)

Procedia PDF Downloads 180
3201 Computer Self-Efficacy, Study Behaviour and Use of Electronic Information Resources in Selected Polytechnics in Ogun State, Nigeria

Authors: Fredrick Olatunji Ajegbomogun, Bello Modinat Morenikeji, Okorie Nancy Chituru

Abstract:

Electronic information resources are highly relevant to students' academic and research needs but are grossly underutilized, despite the institutional commitment to making them available. The under-utilisation of these resources could be attributed to a low level of study behaviour coupled with a low level of computer self-efficacy. This study assessed computer self-efficacy, study behaviour, and the use of electronic information resources by students in selected polytechnics in Ogun State. A simple random sampling technique using Krejcie and Morgan's (1970) table was used to select 370 respondents for the study. A structured questionnaire was used to collect data from respondents. Data were analysed using frequency counts, percentages, means, standard deviations, Pearson Product Moment Correlation (PPMC), and multiple regression analysis. Results reveal that the internet (x̄ = 1.94), YouTube (x̄ = 1.74), and search engines (x̄ = 1.72) were the common information resources available to the students, while the internet (x̄ = 4.22) was the most utilized resource. Major reasons for using electronic information resources were to source materials and information (x̄ = 3.30), for research (x̄ = 3.25), and to augment class notes (x̄ = 2.90). The majority (91.0%) of the respondents have a high level of computer self-efficacy in the use of electronic information resources, reflected in selecting from screen menus (x̄ = 3.12), using data files (x̄ = 3.10), and efficient use of computers (x̄ = 3.06). Good preparation for tests (x̄ = 3.27), examinations (x̄ = 3.26), and organization of tutorials (x̄ = 3.11) are the common study behaviours of the respondents. Overall, 93.8% have good study behaviour. Inadequate computer facilities for accessing information (x̄ = 3.23) and poor internet access (x̄ = 2.87) were the major challenges confronting students' use of electronic information resources. According to the PPMC results, study behaviour (r = 0.280) and computer self-efficacy (r = 0.304) have significant (p < 0.05) relationships with the use of electronic information resources. Regression results reveal that self-efficacy (β = 0.214) and study behaviour (β = 0.122) positively (p < 0.05) influenced students' use of electronic information resources. The study concluded that students' use of electronic information resources depends on the purpose, their computer self-efficacy, and their study behaviour. Therefore, the study recommended that management should encourage students to improve their study habits and computer skills, as this will enhance their continuous and more effective utilization of electronic information resources.
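
The sketch below illustrates, with placeholder data, the two inferential steps named here: Pearson product-moment correlation and a multiple regression of resource use on self-efficacy and study behaviour. The variable names and values are hypothetical, not the study's questionnaire scores.

```python
# Sketch: PPMC and multiple regression on placeholder scale scores (n = 370).
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 370
self_efficacy = rng.normal(3.0, 0.6, n)    # placeholder scale scores
study_behaviour = rng.normal(3.2, 0.5, n)  # placeholder scale scores
resource_use = 0.2 * self_efficacy + 0.1 * study_behaviour + rng.normal(0, 0.5, n)

# Pearson product-moment correlations (PPMC)
r_se, p_se = pearsonr(self_efficacy, resource_use)
r_sb, p_sb = pearsonr(study_behaviour, resource_use)
print(f"self-efficacy vs use:  r = {r_se:.3f}, p = {p_se:.3g}")
print(f"study behaviour vs use: r = {r_sb:.3f}, p = {p_sb:.3g}")

# Multiple regression: use ~ self-efficacy + study behaviour
X = sm.add_constant(np.column_stack([self_efficacy, study_behaviour]))
model = sm.OLS(resource_use, X).fit()
print(model.params)  # intercept and the two regression coefficients
```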

Keywords: computer self-efficacy, study behaviour, electronic information resources, polytechnics, Nigeria

Procedia PDF Downloads 115
3200 Comparing Remote Sensing and in Situ Analyses of Test Wheat Plants as Means for Optimizing Data Collection in Precision Agriculture

Authors: Endalkachew Abebe Kebede, Bojin Bojinov, Andon Vasilev Andonov, Orhan Dengiz

Abstract:

Remote sensing has a potential application in assessing and monitoring the biophysical properties of plants using the spectral responses of plants and soils within the electromagnetic spectrum. However, only a few reports compare the performance of different remote sensing sensors against in-situ field spectral measurement. The current study assessed the potential applications of open-data-source satellite images (Sentinel 2 and Landsat 9) in estimating the biophysical properties of the wheat crop on a study farm in the village of Ovcha Mogila. Landsat 9 (30 m resolution) and Sentinel-2 (10 m resolution) satellite images with less than 10% cloud cover were extracted from the open data sources for the period December 2021 to April 2022. An Unmanned Aerial Vehicle (UAV) was used to capture the spectral response of plant leaves. In addition, a SpectraVue 710s leaf spectrometer was used to measure the spectral response of the crop in April at five different locations within the same field. The ten most common vegetation indices were selected and calculated based on the reflectance wavelength ranges of the remote sensing tools used. Soil samples were collected at eight different locations within the farm plot, and the physicochemical properties of the soil (pH, texture, N, P₂O₅, and K₂O) were analyzed in the laboratory. The finer-resolution images from the UAV and the leaf spectrometer were used to validate the satellite images. The performance of the different sensors was compared based on the measured leaf spectral response and the extracted vegetation indices at the five sampling points. Scatter plots with the coefficient of determination (R²) and Root Mean Square Error (RMSE), and a correlation (r) matrix prepared using the corr and heatmap Python functions, were used to compare the performance of the Sentinel 2 and Landsat 9 VIs against those of the drone and the SpectraVue 710s spectrometer. The soil analysis revealed that the study farm plot is slightly alkaline (pH 8.4 to 8.52). The soil texture of the study farm is dominantly clay and clay loam. The vegetation indices (VIs) increased linearly with the growth of the plant. Both the scatter plots and the correlation matrix showed that the Sentinel 2 vegetation indices have a relatively better correlation with the vegetation indices of the Buteo drone compared to the Landsat 9. The Landsat 9 vegetation indices align somewhat better with the leaf spectrometer. Generally, the Sentinel 2 showed better performance than the Landsat 9. Further study with sufficient field spectral sampling and repeated UAV imaging is required to improve on the current study.
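
A minimal sketch of how such a sensor comparison can be scored is given below: NDVI is computed for each platform and one platform is evaluated against the drone reference with R² and RMSE. The reflectance values are invented placeholders for the five sampling points, and reading actual Sentinel-2 or Landsat 9 bands (e.g., with a raster library) is omitted.

```python
# Sketch: NDVI per platform, then R2/RMSE of one platform against a reference.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red)

def r2_rmse(reference: np.ndarray, estimate: np.ndarray):
    residual = reference - estimate
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, np.sqrt(np.mean(residual ** 2))

if __name__ == "__main__":
    # Placeholder reflectances at five sampling points (drone taken as reference).
    drone_ndvi = ndvi(nir=np.array([0.42, 0.48, 0.51, 0.55, 0.60]),
                      red=np.array([0.10, 0.09, 0.08, 0.07, 0.06]))
    sentinel_ndvi = ndvi(nir=np.array([0.40, 0.47, 0.50, 0.56, 0.59]),
                         red=np.array([0.11, 0.09, 0.08, 0.07, 0.07]))
    r2, rmse = r2_rmse(drone_ndvi, sentinel_ndvi)
    print(f"Sentinel-2 vs drone NDVI: R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```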

Keywords: landsat 9, leaf spectrometer, sentinel 2, UAV

Procedia PDF Downloads 99
3199 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items

Authors: Wen-Chung Wang, Xue-Lan Qiu

Abstract:

Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because it has good measurement properties; in it, the log-odds of preferring statement A to statement B are defined as a competition between two parts: the sum of a person's latent trait on the construct that statement A measures and statement A's utility, and the sum of the person's latent trait on the construct that statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as preferring statement A strongly, preferring statement A, preferring statement B, and preferring statement B strongly. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., my life is empty) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate. In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of the MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interest survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that, among the four methods, the last performed the best.
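
A minimal sketch of the dichotomous RIM choice rule described above, with a naive information-based item pick, is given below. The trait estimates, statement utilities, and the scalar information proxy p(1-p) are illustrative simplifications; the study's actual MCAT algorithms work with the multidimensional Fisher information and the exposure controls discussed in the abstract.

```python
# Sketch: RIM choice probability for an MPC item and a simplistic item pick.
import numpy as np

def prob_prefer_a(theta_a, theta_b, util_a, util_b):
    """Probability of preferring statement A: logit = (theta_A + util_A) - (theta_B + util_B)."""
    logit = (theta_a + util_a) - (theta_b + util_b)
    return 1.0 / (1.0 + np.exp(-logit))

def scalar_info(theta_a, theta_b, util_a, util_b):
    """Simplified scalar proxy for item information at the current trait estimates."""
    p = prob_prefer_a(theta_a, theta_b, util_a, util_b)
    return p * (1.0 - p)

if __name__ == "__main__":
    # Provisional trait estimates on two dimensions and a small illustrative pool
    # of MPC items: (dimension of A, dimension of B, utility of A, utility of B).
    theta = {"social": 0.4, "investigative": -0.2}
    pool = [
        ("social", "investigative", 0.1, -0.3),
        ("social", "investigative", -0.5, 0.2),
        ("investigative", "social", 0.0, 0.0),
    ]
    infos = [scalar_info(theta[a], theta[b], ua, ub) for a, b, ua, ub in pool]
    print("next item index:", int(np.argmax(infos)), "info values:", np.round(infos, 3))
```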

Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison

Procedia PDF Downloads 244
3198 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is the fact that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises in that respect is as follows: How could language be taught for the first time by a non-speaker, i.e., by someone who did not have the opportunity to master it as a child? Yet the above paradox will appear less unresolvable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique that was used to develop the learners’ mimetic skills. Its communicative and expressive properties could have been discovered and exploited later – upon the learners’ reaching their adulthood. The importance of mimesis in children’s development is universally recognised. The most common forms of it are onomatopoeia and mime, which consist in reproducing sounds and imitating shapes/movements of externally observed objects. However, in some cases, neither of these exercises can be adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproduceable shape, while its movements may depend on the specific way of our interacting with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject’s relatively slow approach to the object or vice versa, while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing. From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue but also as a mnemonic sequence that contains encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small at first the community might be), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is outside the scope of this paper). In its very beginning, this word has two major applications. It can be used for interhuman communication because it allows us to invoke the presence of a currently absent object. It can also be used for designing, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental (‘spiritual’) dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism – arguably, the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 72
3197 Rheological Evaluation of a Mucoadhesive Precursor of Based-Poloxamer 407 or Polyethylenimine Liquid Crystal System for Buccal Administration

Authors: Jéssica Bernegossi, Lívia Nordi Dovigo, Marlus Chorilli

Abstract:

Mucoadhesive liquid crystalline systems are emerging as delivery systems for the oral cavity. These systems are of interest since they facilitate the targeting of medicines and modify drug release, enabling a reduction in the number of applications made by the patient. The buccal mucosa is permeable, has a rich blood supply, and avoids first-pass metabolism, making it a good route of administration. Two liquid crystal systems were developed using ethoxylated and propoxylated ethyl alcohol as surfactant (30%), oleic acid as oil phase (60%), and, as aqueous phase (10%), a dispersion of the polymer polyethylenimine (0.5%) or of the polymer poloxamer 407 (16%), with the intention of application to the buccal mucosa. Initially, the systems were characterized by polarized light microscopy and rheological analysis. To prepare the systems, the components described above were added to glass vials and shaken. Then, 30% and 100% artificial saliva were added to each prepared formulation so as to simulate the environment of the oral cavity. To verify the system structure, aliquots of the formulations were placed on a glass slide, covered with a coverslip, and examined under a polarized light microscope (PLM, Zeiss® Axioskop) at 40x magnification. The rheological profiles of the formulations were also evaluated on a TA Instruments® rheometer; rheograms of the selected systems were obtained in flow mode at 37 ºC (98.6 ºF). Under PLM, the formulations containing polyethylenimine or poloxamer 407 without added artificial saliva showed a dark field, indicative of a microemulsion; the same was observed for the formulation to which 30% artificial saliva had been added. The formulation with 100% artificial saliva added proved to be a structured system, since it presented anisotropy with striae, indicative of a hexagonal liquid crystalline mesophase. In the rheograms, both systems without added artificial saliva showed a Newtonian profile; after the addition of 30% artificial saliva they displayed non-Newtonian behavior of the pseudoplastic-thixotropic type, and after the addition of 100% artificial saliva they proved plastic-thixotropic. Furthermore, it is clearly seen that the formulations containing poloxamer 407 reach significantly larger shear stresses (15-800 Pa) than those containing polyethylenimine (5-50 Pa), indicating their greater plasticity. Thus, the addition of saliva structured the system, which evolved from a microemulsion to a liquid crystal system, thereby also changing its rheological behavior. The systems show promising characteristics as controlled-release systems for the oral cavity, featuring good fluidity during application and greater structuring when in contact with the saliva of the environment.
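
As a hedged illustration of how a flow rheogram can be classified as Newtonian or pseudoplastic, the sketch below fits the Ostwald-de Waele power law, tau = K * (shear rate)^n, by linear regression in log-log space; n close to 1 suggests Newtonian behavior and n < 1 pseudoplastic behavior. The data points are made-up placeholders, not the measured rheograms, and this fit is not the analysis reported in the abstract.

```python
# Sketch: power-law (Ostwald-de Waele) fit to a synthetic flow curve.
import numpy as np

def fit_power_law(shear_rate: np.ndarray, shear_stress: np.ndarray):
    """Return (K, n) from log(tau) = log(K) + n * log(gamma_dot)."""
    n, log_k = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
    return float(np.exp(log_k)), float(n)

if __name__ == "__main__":
    gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0])  # 1/s, placeholder values
    tau = 12.0 * gamma_dot ** 0.45                        # Pa, synthetic shear-thinning curve
    k, n = fit_power_law(gamma_dot, tau)
    behaviour = "pseudoplastic" if n < 1 else "Newtonian-like"
    print(f"K = {k:.2f} Pa.s^n, n = {n:.2f} -> {behaviour}")
```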

Keywords: liquid crystal system, poloxamer 407, polyethylenimine, rheology

Procedia PDF Downloads 447
3196 Ecocentric Principles for the Change of the Anthropocentric Design Within the Other Species Related Fields

Authors: Armando Cuspinera

Abstract:

Humans are nature itself, forming part of the same ecosystem as non-human species, but practice reflects that not all relations are the same. Fields of design such as biomimicry, biodesign, and biophilic design take different approaches towards nature; nevertheless, anthropocentric principles such as domination, objectification, or exploitation coexist in these same fields with ecocentric principles that recognize the inherent importance of life itself. Anthropocentrism has confronted humanity with the pollution of earth, water, and air, and with the destruction of whole ecosystems through monocultures and the rampant production of useless objects; life cannot withstand this unaware rhythm focused only on human benefit. Even though the biosphere is naturally resilient, studies cited in the Paris Agreement indicate that humanity will perish if this unconscious praxis continues. This is why it is important to differentiate between anthropocentric and ecocentric principles in the praxis of design; in order to enhance respect, valorization, and positive affectivity towards other life forms, it is necessary to analyze which principles each practice of design reproduces. It is only through the study of the immaterial dimensions of design, such as symbolism, epistemology, and ontology, that the relation towards nature can be redesigned; to do so, it must be studied, from the perspective of ontological design, which principles - anthropocentric or ecocentric - the objects embody and how they enhance or focus human perception of the surroundings. "The things we design also design us" is the principle of ontological design, and in order to develop an ecological design in which it is possible to consider other species as users, designers, or collaborators, it is important to extend the studies and relations to other living forms from a transdisciplinary perspective of techniques, knowledge, practices, and disciplines in general. Materials, technologies, and any kind of knowledge share the principle of a tool: they are neither good nor bad; the possibilities lie in the way they are used. The collaboration of disciplines and fields of study gives the opportunity to connect principles from fields such as Deep Ecology and the Environmental Humanities in the development of design methodologies that study nature, integrate its strategies into our own species, and consider the life of other species as important as human life; it is only through the study of ontological design that material and immaterial dimensions can be analyzed and imbued with structures that already exist in other fields.

Keywords: design, anthropocentrism, ecocentrism, ontological design

Procedia PDF Downloads 146