Search results for: computational fluid dynamic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6917

317 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user use for commercial health monitoring. The cheapest and most portable methods currently available are paper-based – lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers, these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one, or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible, liquid-liquid interfaces are common in microfluidic devices, and are easily reproduced with simple geometries. Here, we demonstrate the use of electrical fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt. We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 362
316 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach

Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake

Abstract:

Transportation and handling of delicate and lightweight objects are currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics. To this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better in terms of levitation capability; however, this design has some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision and low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, whose main part is an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, which consists of a plain rectangular aluminium plate firmly fastened at both ends. The size of the rectangular plate is 200x100x2 mm. In addition, four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane through a steel supporting structure. The dynamics of levitation for both designs have been investigated based on squeeze-film levitation (SFL). The input apparatus used with the designs consists of a sine-wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), which magnifies the sine-wave voltage produced by the signal generator. For design A, the measured maximum levitation heights for three semiconductor wafers weighing 52, 70 and 88 g are 240, 205 and 187 µm, respectively, whereas for design B the measured average separation distance for a disk of 5 g reaches 70 µm. By using the methodology of squeeze-film levitation, it is possible to hold an object in a non-contact manner. The analyses of the investigation outcomes signify that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of its manufacturing. In order to identify an adequate non-contact SFL design, a comparison between these two common designs has been adopted for the current investigation. Specifically, the study involves making comparisons in terms of the following issues: floating component geometries and material type constraints; the resulting pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design.

Keywords: ANSYS, floating, piezoelectric, squeeze-film

Procedia PDF Downloads 123
315 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.) Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4mm³) yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, Cerebral Spinal Fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volume, tissue properties and noise realizations. Collectively, this constitutes a training-set that is similar to in vivo data, but larger than datasets available from clinical measurements. This 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
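
As an illustration of the kind of model described above (not the authors' code), the sketch below builds a very small 3D convolutional encoder-decoder of the U-Net type in PyTorch and runs one training step on random volumes standing in for the synthetic-head data; the channel sizes, patch size, loss and optimizer settings are all illustrative assumptions.

```python
# Minimal 3D U-Net-style regressor sketch (PyTorch assumed). Channel sizes,
# patch size, loss and optimizer are illustrative, not the authors' model.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 8)      # one MRI-derived input channel
        self.pool = nn.MaxPool3d(2)
        self.enc2 = conv_block(8, 16)
        self.up = nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2)
        self.dec1 = conv_block(16, 8)     # 8 skip channels + 8 upsampled channels
        self.head = nn.Conv3d(8, 1, 1)    # voxel-wise iron-concentration map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        return self.head(self.dec1(torch.cat([e1, self.up(e2)], dim=1)))

# One training step on random 32^3 patches standing in for the synthetic
# head phantoms and their ground-truth iron maps.
model = TinyUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 1, 32, 32, 32)   # simulated MRI measurement patches
y = torch.rand(2, 1, 32, 32, 32)    # known iron concentration per voxel
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```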

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 99
314 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall

Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono

Abstract:

Instability of a building structure under lateral loads such as earthquakes increases when the structure is irregular in the vertical or horizontal direction, as stated in SNI 03-1726-2012. The conventional slab is generally considered to contribute little to the stability of the structure unless a special slab system, such as a flat slab, is taken into account. In this paper, the flat slab system of the Sequis Tower, located in South Jakarta, is assessed for its performance under earthquake loading. The building has six basement floors where the flat slab system is applied. The flat slab system is the main focus of this paper and is compared with a conventional slab system in terms of performance under earthquake. Based on the floor plan of the Sequis Tower basement, the re-entrant corner ratio of this building is 43.21%, which exceeds the 15% limit stated in ASCE 7-05; therefore, horizontal irregularity is a concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is a system in which the slabs are supported by drop panels with shear heads instead of beams. The major advantages of the flat slab application are a decreased structural dead load, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load, whereas deflection at the middle strip and punching shear are problems to be considered in detail. Torsion usually appears when the dimensions of a structural member under flexure, such as a beam or column, are in an improper ratio; considering a flat slab as an alternative slab system helps keep collapse due to torsion down. The common seismic-load-resisting system applied in the building is the shear wall. The installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake. The eccentric location of the shear walls in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Because of the irregularity, performing linear dynamic analyses, namely response spectrum and time history analyses, is suitable so that the performance of the structure can be observed properly. The response spectrum data for South Jakarta, with a PGA of 0.389 g, are the basis for idealizing the earthquake load in the several load combinations stated in SNI 03-1726-2012. The analysis results in basic seismic parameters such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members are presented. The predicted period of the structure under earthquake load is 0.45 second, but the period will take a different value as a different slab system is applied in the analysis. The flat slab system will probably give a better displacement performance than the conventional slab system due to its higher contribution of stiffness to the whole building system; in line with the displacement, the deflection of the slab will be smaller for the flat slab than for the conventional slab. Consequently, the shear wall will be more effective in strengthening the conventional slab system than the flat slab system.
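
To illustrate the kind of response-spectrum-based check mentioned above, the following sketch evaluates a generic ASCE 7-style design spectrum and an equivalent static base shear; the spectral parameters, response modification factor and seismic weight are placeholders, not the Sequis Tower design values, and code-specific caps and minima are omitted.

```python
# Generic design response spectrum and equivalent static base shear sketch
# (simplified ASCE 7-style piecewise spectrum). All parameter values below
# are placeholders for illustration only.
S_DS, S_D1 = 0.6, 0.35                  # assumed design spectral accelerations (g)
T0, Ts = 0.2 * S_D1 / S_DS, S_D1 / S_DS

def sa(T):
    """Design spectral acceleration (in g) at period T (s)."""
    if T < T0:
        return S_DS * (0.4 + 0.6 * T / T0)   # rising branch
    if T <= Ts:
        return S_DS                          # plateau
    return S_D1 / T                          # descending branch (long periods)

R, Ie, W = 6.0, 1.0, 250_000.0      # assumed response modification, importance, weight (kN)
T = 0.45                            # fundamental period quoted in the abstract (s)
Cs = sa(T) * Ie / R                 # simplified seismic response coefficient
V = Cs * W                          # equivalent static base shear (kN)
print(f"Sa(T)={sa(T):.3f} g, Cs={Cs:.4f}, V={V:.0f} kN")
```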

Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall

Procedia PDF Downloads 168
313 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method

Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov

Abstract:

The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective of these techniques. It has a high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and the informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids. To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time; latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN diode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and fast Fourier transforms of the signal, in the regime of Brownian motion and under the action of the field, were calculated to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
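
A minimal sketch of the signal processing described above, assuming a simulated photodetector trace: it computes the normalized autocorrelation function and the FFT power spectrum, whose Doppler peak tracks the electrophoretic drift of the scatterers. The sampling rate, Doppler frequency and noise level are illustrative.

```python
# Autocorrelation and FFT power spectrum of a (here simulated) photodetector
# trace; the Doppler frequency and noise level are arbitrary illustrative values.
import numpy as np

fs, f_doppler = 50_000.0, 800.0              # sampling rate (Hz), assumed Doppler shift (Hz)
t = np.arange(0, 0.2, 1.0 / fs)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.5 * np.random.randn(t.size)

# Normalized autocorrelation function of the fluctuating part of the signal
x = signal - signal.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

# One-sided power spectrum; the peak position reflects the drift velocity
# of the scatterers in the applied field.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
print("spectral peak at ~%.0f Hz" % freqs[spectrum.argmax()])
```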

Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection

Procedia PDF Downloads 187
312 La0.80Ag0.15MnO3 Magnetic Nanoparticles for Self-Controlled Magnetic Fluid Hyperthermia

Authors: Marian Mihalik, Kornel Csach, Martin Kovalik, Matúš Mihalik, Martina Kubovčíková, Maria Zentková, Martin Vavra, Vladimír Girman, Jaroslav Briančin, Marija Perovic, Marija Boškovic, Magdalena Fitta, Robert Pelka

Abstract:

Current nanomaterials for use in biomedicine are based mainly on iron oxides and on present knowledge of magnetic nanostructures. Manganites represent another class of materials that can optionally be used. Manganites and their unique electronic properties have been extensively studied in recent decades, not only out of fundamental interest but also because of possible applications of colossal magnetoresistance, the magnetocaloric effect, and ferroelectric properties. It was found that the oxygen-reduction reaction on perovskite oxides is intimately connected with the occupation of the metal-ion e_g orbital. The effect of oxygen deviation from the stoichiometric composition on the crystal structure was studied very carefully by many authors for LaMnO₃. Depending on the oxygen content, the crystal structure changes from orthorhombic to rhombohedral at an oxygen content of about 3.1. In the case of hole-doped manganites, the change from the orthorhombic crystal structure, which is typical for La₁₋ₓCaₓMnO₃-based manganites, to the rhombohedral crystal structure (La₁₋ₓMₓMnO₃-based materials, where M = K, Ag, and Sr) results in an enormous increase of the Curie temperature. In our paper, we study the effect of oxygen content on the crystal structure, thermal, and magnetic properties (including the magnetocaloric effect) of the La₁₋ₓAgₓMnO₃ nanoparticle system. The oxygen content of the samples was tuned by heat treatment in different thermal regimes and in various environments (air, oxygen, argon). Water-based nanosuspensions of La₀.₈₀Ag₀.₁₅MnO₃ magnetic particles with a Curie temperature of about 43 °C were prepared by two different approaches. The first used a laboratory circulation mill for milling the powder in the presence of sodium dodecyl sulphate (SDS), followed by centrifugation. The second nanosuspension was prepared using an agate bowl, etching in citric acid and HNO₃, an ultrasound homogeniser, centrifugation, and dextran 40 kDa or 15 kDa as the surfactant. The electrostatic stabilisation obtained by the first approach did not offer long-term kinetic and aggregation colloidal stability and was unable to compensate for attractive forces between particles under a magnetic field. With the second approach, we prepared a suspension oversaturated with dextran 40 kDa for steric stabilisation, with evidence of superparamagnetic behaviour; the disadvantages of this approach were the low concentration of nanoparticles and the non-ideal coverage of the nanoparticles, which affected the stability of the ferrofluid. Strong steric stabilisation was observed under alkaline conditions at pH ≈ 10. Application of dextran 15 kDa led to a relatively stable ferrofluid with a pH around physiological conditions, but disaggregation of the powder by HNO₃ was not effective enough, the average fragment size was too large (about 150 nm), and we did not see any signature of superparamagnetic behaviour. The prepared ferrofluids were characterised by scanning and transmission electron microscopy, thermogravimetry, magnetization, and AC susceptibility measurements. Specific Absorption Rate measurements were undertaken on the powder as well as on the ferrofluids in order to estimate the potential application of the La₀.₈₀Ag₀.₁₅MnO₃ magnetic-particle-based ferrofluid for hyperthermia. Our complex study also contains an investigation of the biocompatibility and potential biohazard of this material.
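
As an illustration of how the Specific Absorption Rate mentioned above is commonly estimated from calorimetric data (the initial-slope approximation, not the authors' protocol), the sketch below fits the initial heating slope of a ferrofluid under an AC field and converts it to watts per kilogram of magnetic material; the heat capacity, masses and temperature trace are invented.

```python
# Initial-slope SAR estimate: SAR = c_p * (m_suspension / m_magnetic) * dT/dt.
# All numbers below are illustrative placeholders.
import numpy as np

t = np.array([0, 10, 20, 30, 40, 50, 60.0])                       # s
T = np.array([25.00, 25.10, 25.21, 25.30, 25.39, 25.47, 25.55])   # deg C (measured)

dTdt = np.polyfit(t[:4], T[:4], 1)[0]   # initial heating slope (K/s)
c_p = 4186.0                            # J kg^-1 K^-1, water-based suspension assumed
m_suspension = 1.0e-3                   # kg of ferrofluid in the vial
m_magnetic = 2.0e-5                     # kg of magnetic particles in that volume
sar = c_p * (m_suspension / m_magnetic) * dTdt   # W per kg of magnetic material
print(f"SAR ≈ {sar:.0f} W/kg")
```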

Keywords: manganites, magnetic nanoparticles, oxygen content, magnetic phase transition, magnetocaloric effect, ferrofluid, hyperthermia

Procedia PDF Downloads 64
311 The Touch Sensation: Ageing and Gender Influences

Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani

Abstract:

A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensation, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the initial contact between the finger and the object; during this contact, an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the mechanical properties of the finger and its surface topography play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we develop different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies have been performed on the finger pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 years. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force, and the main assessed parameter is the adhesive force F_ad. For the second phase, first, an innovative approach is proposed to characterize the dynamic mechanical properties of the finger. A contactless indentation test inspired by the techniques used in ophthalmology has been used. The principle of the test is to apply an air blast to the finger and to measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the free return of the skin without any outside influence. The main obtained parameters are the wave propagation speed and Young's modulus E. Second, negative silicone replicas of the subjects' fingerprints have been analyzed by a laser defocusing probe. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness has been realized by applying the continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are found with ageing and gender. The comparison between the men and women groups reveals that the adhesive force is higher for women. The results for the mechanical properties show a Young's modulus that is higher for women and that also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
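
A minimal sketch of the multi-scale roughness analysis described above, assuming PyWavelets and a simulated fingerprint-replica profile: a continuous wavelet transform is applied and the arithmetic mean of the coefficient magnitudes is taken at each scale (SMA). The profile, sampling step and scale range are illustrative.

```python
# Continuous wavelet transform of a simulated surface profile and the mean
# topographic amplitude per scale (SMA). PyWavelets is assumed.
import numpy as np
import pywt

dx = 1.0                                         # sampling step (um), assumed
x = np.arange(0, 2048) * dx
profile = (0.5 * np.sin(2 * np.pi * x / 450)     # coarse ridge undulation
           + 0.1 * np.sin(2 * np.pi * x / 40)    # fine texture
           + 0.02 * np.random.randn(x.size))     # measurement noise

scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(profile, scales, "morl", sampling_period=dx)

sma = np.mean(np.abs(coeffs), axis=1)            # SMA value at each scale
for s, a in list(zip(scales, sma))[::16]:
    print(f"scale {s:3d} -> SMA {a:.3f}")
```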

Keywords: ageing, finger, gender, touch

Procedia PDF Downloads 243
310 Family Resilience of Children with Cancer: A Latent Profile Analysis

Authors: Bowen Li, Dan Shu, Shiguang Pang, Li Wang, Qian Liu

Abstract:

Background: Every year, approximately 429,000 adolescents aged 0-19 are diagnosed with cancer worldwide. The diagnosis brings substantial psychological pressure and caregiving responsibilities for family members and impacts the families significantly. Family resilience has been found to reduce caregiver distress and can also foster post-traumatic growth in cancer survivors. However, current research on family resilience in childhood cancer mainly focuses on individual caregiver resilience and child adaptation, with less attention given to categorizing family resilience among caregivers of children with cancer. Method: A total of 292 caregivers of children with cancer were recruited from four tertiary hospitals in central China from July 2022 to March 2024. This study was approved by the ethics committee, and participants provided informed consent, with the option to withdraw at any time. The Family Resilience Assessment Scale was used to measure family resilience among caregivers of children with cancer. The Quality of Life Scale-Family, the Perceived Social Support Scale, and the Connor-Davidson Resilience Scale were used to measure potential influencing factors. This study used latent profile analysis (LPA) to identify latent categories of family resilience among caregivers of children with cancer, and binary logistic regression was used to analyze the factors influencing family resilience. Results: The results reveal two distinct categories: "high family resilience" and "low family resilience." The "low family resilience" group accounts for 85.96% of the total, while the "high family resilience" group accounts for 14.04%. The "high family resilience" group scores higher across all dimensions than the "low family resilience" group. Within-group comparisons reveal that "family communication and problem-solving" and "empowering the meaning of adversity" receive the highest scores, while "utilizing social and economic resources" scores the lowest. "Maintaining a positive attitude" scores similarly high to "family communication and problem-solving" in the high family resilience group, whereas it scores similarly low to "utilizing social and economic resources" in the low family resilience group. In the single-factor analysis, residence, number of siblings, caregiver's education level, resilience, social support, quality of life, physical well-being and psychological well-being showed significant differences between the two categories. In the binary logistic regression analysis, households with only one child are more likely to exhibit low family resilience, whereas high personal resilience is associated with a high level of family resilience. Conclusion: Most families of children with cancer require strengthened family resilience. Support for utilizing socio-economic resources is important for families with both high and low family resilience. Single-child families and caregivers with lower resilience require more attention. These findings support the development of targeted interventions to enhance family resilience among families of children with cancer. Future studies could involve children and other family members for a comprehensive understanding of family resilience, and longitudinal studies are necessary to explore the dynamic changes in family resilience throughout the cancer journey.
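
As a rough illustration of the two analysis steps named above (not the authors' code), the sketch below uses scikit-learn on simulated data: a Gaussian mixture over standardized FRAS subscale scores stands in for the latent profile analysis, and a binary logistic regression relates the resulting class to candidate predictors.

```python
# Latent-profile-style classification via a Gaussian mixture, then a binary
# logistic regression on the resulting class. Data here are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 292
fras = rng.normal(3.0, 0.6, size=(n, 6))          # six FRAS dimension scores (simulated)
predictors = rng.normal(0, 1, size=(n, 3))        # e.g. resilience, social support, siblings

z = StandardScaler().fit_transform(fras)
lpa = GaussianMixture(n_components=2, covariance_type="diag", random_state=0).fit(z)
profile = lpa.predict(z)                          # 0/1 latent class per caregiver
print("class sizes:", np.bincount(profile), "BIC:", round(lpa.bic(z), 1))

logit = LogisticRegression().fit(predictors, profile)
print("logistic coefficients:", np.round(logit.coef_, 3))
```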

Keywords: cancer children, caregivers, family resilience, latent profile analysis

Procedia PDF Downloads 15
309 Killing for the Great Peace: An Internal Perspective on the Anti-Manchu Theme in the Taiping Movement

Authors: Zihao He

Abstract:

The majority of existing studies on the Taiping Movement (1851-1864) viewed their anti-Manchu attitudes as nationalist agendas: Taiping was aimed at revolting against the Manchu government and establishing a new political regime. To explain these aggressive and violent attitudes towards Manchu, these studies mainly found socio-economic factors and stressed the status of “being deprived”. Even the ‘demon-slaying’ narrative of the Taiping to dehumanize the Manchu tends to be viewed as a “religious tool” to achieve their political, nationalist aim. This paper argues that these studies on Taiping’s anti-Manchu attitudes and behaviors are analyzed from an external angle and have two major problems. Firstly, they distinguished “religion” from “nationalist” or “political”, focusing on the “political” nature of the movement. “Religion” and the religious experience within Taiping were largely ignored. This paper argues that there was no separable and independent “religion” in the Taiping Movement, as opposed to secular, nationalist politics. Secondly, these analyses held an external perspective on Taiping’s anti-Manchu agenda. Demonizing and killing Manchu were viewed as purely political actions. On the contrary, this paper focuses on the internal perspective of anti-Manchu narratives in the Taiping Movement. The method of this paper is mainly textual analysis, focusing on the official documents, edicts, and proclamations of the Taiping movement. It views the writing of the Taiping as a coherent narrative and rhetoric, which was attractive and convincing for its followers. In terms of the main findings, firstly, internal and external perspectives on anti-Manchu violence are different. Externally, violence was viewed as a tool and necessary process to achieve the political goal. However, internally speaking, in Taiping’s writing, violence was a result of Godlessness, which would be solved as far as the faith in God is restored in China. Having a framework of universal love among human beings as sons and daughters of the Heavenly Father and killing was forbidden, the Taiping excluded Manchus from the family of human beings and demonized them. “Demon-slaying” was not violence. It was constructed as a necessary process to achieve the Great Peace. Moreover, Taiping’s anti-Manchu violence was not merely “political.” Rather, the category “religion” and its binary opposition, “secular,” is not suitable for Taiping. A key point related to this argument is the revolutionary violence against the Manchu government, which inherited the traditional “Heavenly Mandate” model. From an internal, theological perspective, anti-Manchu was ordained and commanded by the Heavenly Father. Manchu, as a regime, was standing as a hindrance in the path toward God. Besides, Manchu was not only viewed as a regime, but they were also “demons.” Therefore, the paper examines how Manchus were dehumanized in Taiping’s writings and were situated outside of the consideration of nonviolent and love. Manchu as a regime and Manchu as demons are in a dynamic relationship. As a regime, the Manchu government was preventing Chinese people from worshipping the Heavenly Father, so they were demonized. As they were demons, killing Manchus during the revolt was justified and not viewed as being contradicted the universal love among human beings.

Keywords: anti-Manchu, demon-slaying, Heavenly Mandate, religion and violence, the Taiping Movement

Procedia PDF Downloads 52
308 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in synch with the facility/product layout, as well as on optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located at several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed as predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed to achieve maximal throughputs of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin. The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard mini-max criteria in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we find and develop heuristic and approximation solution algorithms based on exploiting and improving local optimums. The LC model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
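
To illustrate the first echelon described above, the following sketch treats the workstation-count problem as capacity-constrained bin packing and solves it with a simple first-fit-decreasing heuristic; the product workloads and station capacity are invented, and this is only a stand-in for the authors' algorithms.

```python
# First-fit-decreasing heuristic for a capacity-constrained bin-packing view
# of the workstation-count problem. Workloads and capacity are invented.
def first_fit_decreasing(workloads, capacity):
    """Pack product workloads into the fewest stations of equal capacity."""
    stations = []                       # remaining capacity per opened station
    assignment = {}
    for item, load in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(stations):
            if load <= free:
                stations[i] -= load
                assignment[item] = i
                break
        else:                           # no open station fits: open a new one
            stations.append(capacity - load)
            assignment[item] = len(stations) - 1
    return len(stations), assignment

demo = {"prod_A": 40, "prod_B": 55, "prod_C": 25, "prod_D": 70, "prod_E": 35}
n_stations, plan = first_fit_decreasing(demo, capacity=100)
print(n_stations, plan)
```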

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 203
307 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models. In recent years, many ENM variants have been proposed. Here, we propose a small but effective modification (denoted as the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least-squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) in reproducing the dynamical properties of the six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, it is noted that, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from the MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, have an excellent performance in the dynamical cross-correlation calculation. Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motion modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers to utilize the model to explore protein dynamics.
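
For readers unfamiliar with the baseline the variants above are compared against, here is a minimal sketch of a traditional Gaussian network model (not the mENM itself): it builds the Kirchhoff matrix from C-alpha coordinates with a distance cutoff, takes its pseudo-inverse, and derives B-factor-like fluctuations and a dynamical cross-correlation map. The coordinates and cutoff are illustrative.

```python
# Plain Gaussian network model: Kirchhoff matrix, pseudo-inverse, predicted
# fluctuations (proportional to B-factors) and cross-correlations.
import numpy as np

def gnm(coords, cutoff=7.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d <= cutoff).astype(float)   # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # degree on diagonal
    cov = np.linalg.pinv(kirchhoff)            # pseudo-inverse drops the rigid-body mode
    fluct = np.diag(cov)                       # proportional to predicted B-factors
    dccm = cov / np.sqrt(np.outer(fluct, fluct))   # dynamical cross-correlation map
    return fluct, dccm

coords = np.random.rand(60, 3) * 15.0          # stand-in C-alpha coordinates (angstrom)
b, c = gnm(coords)
print(np.round(b[:5], 3), np.round(c[0, :5], 3))
```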

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 103
306 Elastoplastic Modified Stillinger Weber-Potential Based Discretized Virtual Internal Bond and Its Application to the Dynamic Fracture Propagation

Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako

Abstract:

The failure of a material usually involves elastoplastic deformation and fracturing. Continuum mechanics can effectively deal with plastic deformation by using a yield function and the flow rule; at the same time, it has some limitations in dealing with the fracture problem, since it is a theory based on the continuous-field hypothesis. The lattice model can simulate the fracture problem very well, but it is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers the material to be composed of bond cells. Each bond cell may have any geometry with a finite number of bonds. The strain energy of a bond cell can be characterized by a two-body or a multi-body potential. A two-body potential leads to a fixed Poisson ratio, while a multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts. One part is the two-body potential that describes the interatomic interactions between particles. The other is the three-body potential that represents the bond-angle interactions between particles. Because the SW interaction can represent both the bond-stretch and the bond-angle contributions, the SW-potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. This is done by reducing the bond stiffness to a lower level once the bond reaches the yielding point. Before the bond reaches the yielding point, the bond is elastic; when the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value, and upon unloading an irreversible deformation remains. When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macroscopic fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, the plasticity and the fracture are unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle: the bond angle can bear a moment until the bond-angle increment exceeds a critical value. By this method, the SW-DVIB can simulate the plastic deformation and the fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and the plastic deformation of a tunnel. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
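
A minimal sketch of the elastoplastic two-body bond law described above, with assumed parameter values: the bond is elastic up to a yield stretch, its stiffness is softened beyond it, and it fails at a critical stretch.

```python
# Elastoplastic bond force law with yield softening and failure.
# Parameter values and the softening factor are illustrative assumptions.
def bond_force(stretch, k_elastic=1.0, yield_stretch=0.02,
               soften=0.2, failure_stretch=0.10):
    """Bond force as a function of stretch (elongation / initial length)."""
    if stretch >= failure_stretch:
        return 0.0                                   # bond has failed
    if stretch <= yield_stretch:
        return k_elastic * stretch                   # elastic branch
    # plastic branch: stiffness reduced to soften * k_elastic beyond yield
    return k_elastic * yield_stretch + soften * k_elastic * (stretch - yield_stretch)

for s in (0.01, 0.02, 0.05, 0.12):
    print(f"stretch {s:.2f} -> force {bond_force(s):.4f}")
```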

Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified Stillinger-Weber potential

Procedia PDF Downloads 71
305 Enhancement of Radiosensitization by Aptamer 5TR1-Functionalized AgNCs for Triple-Negative Breast Cancer

Authors: Xuechun Kan, Dongdong Li, Fan Li, Peidang Liu

Abstract:

Triple-negative breast cancer (TNBC) is the most malignant subtype of breast cancer with a poor prognosis, and radiotherapy is one of its main treatment methods. However, due to the pronounced resistance of tumor cells to radiotherapy, high doses of ionizing radiation are required during treatment, which causes serious damage to normal tissues near the tumor. Therefore, how to overcome radiotherapy resistance and enhance the specific killing of tumor cells by radiation is a pressing clinical issue. Recent studies have shown that silver-based nanoparticles have strong radiosensitizing effects, and silver nanoclusters (AgNCs) also offer broad prospects for tumor-targeted radiosensitization therapy due to their ultra-small size, low or negligible toxicity, self-fluorescence and strong photostability. Aptamer 5TR1 is a 25-base oligonucleotide aptamer that can specifically bind to mucin-1, which is highly expressed on the membrane surface of TNBC 4T1 cells, and can therefore be used as a highly efficient tumor-targeting molecule. In this study, AgNCs were synthesized on a DNA template based on the 5TR1 aptamer (NC-T5-5TR1), and their role as a targeted radiosensitizer in TNBC radiotherapy was investigated. The optimal DNA template was first screened by fluorescence emission spectroscopy, and NC-T5-5TR1 was prepared. NC-T5-5TR1 was characterized by transmission electron microscopy, ultraviolet-visible spectroscopy and dynamic light scattering. The inhibitory effect of NC-T5-5TR1 on cell viability was evaluated using the MTT method. Laser confocal microscopy was employed to observe NC-T5-5TR1 targeting 4T1 cells and to verify its self-fluorescence characteristics. The uptake of NC-T5-5TR1 by 4T1 cells was observed by dark-field imaging, and the uptake peak was evaluated by inductively coupled plasma mass spectrometry. The radiosensitizing effect of NC-T5-5TR1 was evaluated through colony formation (clonogenic) assays and in vivo anti-tumor experiments. Annexin V-FITC/PI double-staining flow cytometry was utilized to detect the impact of the nanomaterials combined with radiotherapy on apoptosis. The results demonstrated that the particle size of NC-T5-5TR1 is about 2 nm, that the UV-visible absorption spectrum verifies its successful construction, and that it shows good dispersion. NC-T5-5TR1 significantly inhibited the viability of 4T1 cells and effectively targeted and fluoresced within 4T1 cells. The uptake of NC-T5-5TR1 in the tumor area reached its peak at 3 h. Compared with AgNCs without aptamer modification, NC-T5-5TR1 exhibited superior radiosensitization, and its combination with radiotherapy significantly inhibited the viability of 4T1 cells and tumor growth in 4T1 tumor-bearing mice. The level of apoptosis under NC-T5-5TR1 combined with radiation was significantly increased. These findings provide important theoretical and experimental support for NC-T5-5TR1 as a radiosensitizer for TNBC.

Keywords: 5TR1 aptamer, silver nanoclusters, radiosensitization, triple-negative breast cancer

Procedia PDF Downloads 24
304 Iraqi Women’s Rights Under State Civil Law and Conservative Influences: A Study of Legal Documents and Social Implementation

Authors: Rose Hattab

Abstract:

Women have been an important dynamic in religious context and the state-building process of Arab countries throughout history. During the 1970s as the movement for women’s activism and rights developed, the Iraqi state under the Ba’ath Party began to provide Iraqi women with legal and civil rights. This was done to liberate women from the grasps of social traditions and was a tangible espousing of equality between men and women in the process of nation-building. Whereas women’s rights were stronger and more supported throughout the earliest years of the Ba’ath Regime (1970-1990), the aftermath of the Gulf War and economic sanctions on the conditions of Iraqi society laid the foundation for a division of women’s rights between civil and religious authorities. Personal status codes that were secured in 1959 were being pushed back by amendments made in coordination with religious leaders. Civil laws were present on paper, but religious authority took prominence in practice. The written legal codes were inclusive of women’s rights, but there is not an active or ensured practice of these rights within Iraqi society. This is due to many different factors, such as religious, sectarian, political and conservative reasons that hold back or limit the ability for Iraqi women to have autonomy in aspects such as participation in the workforce, getting married, and ensuring social justice. This paper argues that the Personal Status Code introduced in 1959 – which replaced Sharia-run courts with personal status courts – provided Iraqi women with equality and increased mobility in social and economic dynamics. The statewide crisis felt after the Gulf War and the economic sanctions imposed by the United Nations led to a stark shift in the Ba’ath party’s political ideology. This ideological turn guided the social system to the embracement of social conservatism and religious traditions in the 1990s. The effect of this implementation continued after the establishment of a new Iraqi government during 2003-2005. Consequently, Iraqi women's rights in employment, marriage, and family became divided into paper and practice by religious authorities and civil law from that period to the present day. This paper also contributes to the literature by expanding on the gap between legal codes on paper and in practice, through providing an analysis of Iraqi women’s rights in the Iraqi Constitution of 2005 and Iraq’s Penal Code. The turn to conservative and religious traditions is derived from the multiplicity of identities that make up the Iraqi social fabric. In the aftermath of a totalitarian regime, active wars, and economic sanctions, the Iraqi people attempted to unite together through their different identities to create a sense of security in the midst of violence and chaos. This is not an excuse to diminish the importance of women’s rights, but in the process of building a new nation-state, women were lost from the narrative. Thus, the presence of gender equity is found in the written text but is not practiced and upheld in the social context.

Keywords: civil rights, Iraqi women, nation building, religion and conflict

Procedia PDF Downloads 121
303 High Throughput Virtual Screening against ns3 Helicase of Japanese Encephalitis Virus (JEV)

Authors: Soma Banerjee, Aamen Talukdar, Argha Mandal, Dipankar Chaudhuri

Abstract:

Japanese Encephalitis is a major infectious disease with nearly half the world’s population living in areas where it is prevalent. Currently, treatment for it involves only supportive care and symptom management through vaccination. Due to the lack of antiviral drugs against Japanese Encephalitis Virus (JEV), the quest for such agents remains a priority. For these reasons, simulation studies of drug targets against JEV are important. Towards this purpose, docking experiments of the kinase inhibitors were done against the chosen target NS3 helicase as it is a nucleoside binding protein. Previous efforts regarding computational drug design against JEV revealed some lead molecules by virtual screening using public domain software. To be more specific and accurate regarding finding leads, in this study a proprietary software Schrödinger-GLIDE has been used. Druggability of the pockets in the NS3 helicase crystal structure was first calculated by SITEMAP. Then the sites were screened according to compatibility with ATP. The site which is most compatible with ATP was selected as target. Virtual screening was performed by acquiring ligands from databases: KinaseSARfari, KinaseKnowledgebase and Published inhibitor Set using GLIDE. The 25 ligands with best docking scores from each database were re-docked in XP mode. Protein structure alignment of NS3 was performed using VAST against MMDB, and similar human proteins were docked to all the best scoring ligands. The low scoring ligands were chosen for further studies and the high scoring ligands were screened. Seventy-three ligands were listed as the best scoring ones after performing HTVS. Protein structure alignment of NS3 revealed 3 human proteins with RMSD values lesser than 2Å. Docking results with these three proteins revealed the inhibitors that can interfere and inhibit human proteins. Those inhibitors were screened. Among the ones left, those with docking scores worse than a threshold value were also removed to get the final hits. Analysis of the docked complexes through 2D interaction diagrams revealed the amino acid residues that are essential for ligand binding within the active site. Interaction analysis will help to find a strongly interacting scaffold among the hits. This experiment yielded 21 hits with the best docking scores which could be investigated further for their drug like properties. Aside from getting suitable leads, specific NS3 helicase-inhibitor interactions were identified. Selection of Target modification strategies complementing docking methodologies which can result in choosing better lead compounds are in progress. Those enhanced leads can lead to better in vitro testing.
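
As an illustration of the post-docking filtering logic described above (on invented scores, not GLIDE output), the sketch below keeps ligands that score strongly against the NS3 helicase site while scoring weakly against the structurally similar human proteins; more negative scores denote stronger predicted binding.

```python
# Post-docking filtering sketch on invented scores: retain strong NS3 binders
# that do not also bind the similar human proteins strongly.
ns3_scores = {"lig1": -9.4, "lig2": -7.1, "lig3": -8.8, "lig4": -6.0}   # invented
human_offtarget = {                                                      # invented
    "lig1": [-5.2, -4.8, -5.5],
    "lig2": [-8.9, -6.1, -7.4],
    "lig3": [-4.9, -5.0, -4.4],
    "lig4": [-5.1, -4.2, -4.7],
}

SCORE_CUTOFF = -8.0        # keep only strong NS3 binders
OFFTARGET_CUTOFF = -7.0    # discard ligands that also bind a human homolog strongly

hits = [lig for lig, s in ns3_scores.items()
        if s <= SCORE_CUTOFF
        and all(o > OFFTARGET_CUTOFF for o in human_offtarget[lig])]
print(hits)    # -> ['lig1', 'lig3']
```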

Keywords: antivirals, docking, glide, high-throughput virtual screening, Japanese encephalitis, ns3 helicase

Procedia PDF Downloads 199
302 Improving Efficiency of Organizational Performance: The Role of Human Resources in Supply Chains and Job Rotation Practice

Authors: Moh'd Anwer Al-Shboul

Abstract:

Jordan Customs (JC) was established to achieve objectives consistent with the guidance of the wise leadership and its aspirations for the future. To that end, it has developed several tools to provide distinguished service, simplify work procedures, and make use of modern technologies. A supply chain (SC) consists of all parties involved, directly or indirectly, in fulfilling a customer request, including manufacturers, suppliers, shippers, retailers and even customs brokers. Within each firm, the SC includes all functions involved in receiving and filling a customer's request; one of the main functions is customer service. JC and global SCs are evolving into a dynamic environment that requires flexibility, effective communication, and team management. Thus, human resources (HR) insight in these areas is critical for the effective development of a global process network. The importance of HR has increased significantly because the role of employees depends on their knowledge, competencies, abilities, skills, and motivation. Strategic planning in JC began at the end of the 1990s, including an operational strategy for Human Resource Management and Development (HRM&D). However, a major transformation in human resources took place at the end of 2006: new employee regulations for customs were prepared, approved and applied at the end of 2007. As a result, many employees lost their positions, while others were selected through a professional recruitment and selection process (bringing in new blood). One of the several policies applied by the HR directorate at JC is job rotation. From the researcher's point of view, it has not been applied on a scientific basis to achieve its goals and objectives, which ultimately leads to a significant negative impact on Organizational Performance (OP) and a weak job rotation approach. The purpose of this study is to call attention to the need to re-review the job rotation process and procedures currently applied by the HRM directorate at JC. Furthermore, it presents an overview of managing HR in the SC network as a factor affecting its success. The research methodology employed in this study is qualitative, based on a few interviews with managers, internal employees and external clients, and on a review of the related literature to collect qualitative data from secondary sources. The findings indicate that conducting a frequent and unstructured job rotation policy (i.e., monthly) will have a significant negative impact on JC performance as a whole. The results of this study show that the main impacts fall on three main elements of JC: (1) internal employees' performance; (2) external clients who deal with customs services; and (3) JC performance as a whole. In order to implement a successful job rotation technique at JC in a scientific way and to achieve its goals and objectives, JC should take into consideration the proposed solutions and recommendations presented in this study.

Keywords: efficiency, supply chain, human resources, job rotation, organizational performance, Jordan customs

Procedia PDF Downloads 191
301 Post COVID-19 Multi-System Inflammatory Syndrome Masquerading as an Acute Abdomen

Authors: Ali Baker, Russel Krawitz

Abstract:

This paper describes a rare occurrence where a potentially fatal complication of COVID-19 infection (MIS-A) was misdiagnosed as an acute abdomen. As most patients with this syndrome present with fever and gastrointestinal symptoms, they may inadvertently fall under the care of the surgical unit. However, unusual imaging findings and a poor response to anti-microbial therapy should prompt clinicians to suspect a non-surgical etiology. More than half of MIS-A patients require ICU admission and vasopressor support. Prompt referral to a physician is key, as the cornerstone of treatment is IVIG and corticosteroid therapy. A 32 year old woman presented with right sided abdominal pain and fevers. She had also contracted COVID-19 two months earlier. Abdominal examination revealed generalised right sided tenderness. The patient had raised inflammatory markers, but other blood tests were unremarkable. CT scan revealed extensive lymphadenopathy along the ileocolic chain. The patient proved to be a diagnostic dilemma. She was reviewed by several surgical consultants and discussed with several inpatient teams. Although IV antibiotics were commenced, the right sided abdominal pain, and fevers persisted. Pan-culture returned negative. A mild cholestatic derangement developed. On day 5, the patient underwent preparation for colonoscopy to assess for a potential intraluminal etiology. The following day, the patient developed sinus tachycardia and hypotension that was refractory to fluid resuscitation. That patient was transferred to ICU and required vasopressor support. Repeat CT showed peri-portal edema and a thickened gallbladder wall. On re-examination, the patient was Murphy’s sign positive. Biliary ultrasound was equivocal for cholecystitis. The patient was planned for diagnostic laparoscopy. The following morning, a marked rise in cardiac troponin was discovered, and a follow-up echocardiogram revealed moderate to severe global systolic dysfunction. The impression was post-COVID MIS with myocardial involvement. IVIG and Methylprednisolone infusions were commenced. The patient had a great response. Vasopressor support was weaned, and the patient was discharged from ICU. The patient continued to improve clinically with oral prednisolone, and was discharged on day 17. Although MIS following COVID-19 infection is well-described syndrome in children, only recently has it come to light that it can occur in adults. The exact incidence is unknown, but it is thought to be rare. A recent systematic review found only 221 cases of MIS-A, which could be included for analysis. Symptoms vary, but the most frequent include fever, gastrointestinal, and mucocutaneous. Many patients progress to multi-organ failure and require vasopressor support. 7% succumb to the illness. The pathophysiology of MIS is only partly understood. It shares similarities with Kawasaki disease, macrophage activation syndrome, and cytokine release syndrome. Importantly, by definition, the patient must have an absence of severe respiratory symptoms. It is thought to be due to a dysregulated immune response to the virus. Potential mechanisms include reduced levels of neutralising antibodies and autoreactive antibodies that promote inflammation. Further research into MIS-A is needed. Although rare, this potentially fatal syndrome should be considered in the unwell surgical patient who has recently contracted COVID-19 and poses a diagnostic dilemma.

Keywords: acute-abdomen, MIS, COVID-19, ICU

Procedia PDF Downloads 100
300 Benefits of the ALIAmide Palmitoyl-Glucosamine Co-Micronized with Curcumin for Osteoarthritis Pain: A Preclinical Study

Authors: Enrico Gugliandolo, Salvatore Cuzzocrea, Rosalia Crupi

Abstract:

Osteoarthritis (OA) is one of the most common chronic pain conditions in dogs and cats. OA pain is currently viewed as a mixed phenomenon involving both inflammatory and neuropathic mechanisms at the peripheral (joint) and central (spinal and supraspinal) levels. Oxidative stress has been implicated in OA pain. Although nonsteroidal anti-inflammatory drugs are commonly prescribed for OA pain, they should be used with caution in pets because of long-term adverse effects and controversial efficacy on neuropathic pain. An unmet need remains for safe and effective long-term treatments for OA pain. Palmitoyl-glucosamine (PGA) is an analogue of the ALIAmide palmitoylethanolamide, i.e., one of the body’s own endocannabinoid-like compounds playing a sentinel role in nociception. PGA, especially in the micronized formulation, has been shown to be safe and effective in OA pain. The aim of this study was to investigate the effect of a co-micronized formulation of PGA with the natural antioxidant curcumin (PGA-cur) on OA pain. Ten male Sprague-Dawley rats were used for each treatment group. The University of Messina Review Board for the care and use of animals authorized the study. On day 0, rats were anesthetized (5.0% isoflurane in 100% O2) and received an intra-articular injection of MIA (3 mg in 25 μl saline) in the right knee joint, while the left knee was injected with an equal volume of saline. Starting on the third day after MIA injection, treatments were administered orally three times per week for 21 days at the following doses: PGA 20 mg/kg, curcumin 10 mg/kg, PGA-cur (2:1 ratio) 30 mg/kg. On day 0 and on days 3, 7, 14, and 21 post-injection, mechanical allodynia was measured using a dynamic plantar von Frey aesthesiometer and expressed as paw withdrawal threshold (PWT) and latency (PWL). Motor functional recovery of the rear limb was evaluated at the same time points by walking track analysis using the sciatic functional index. On day 21 post-MIA injection, the concentrations of the following inflammatory and nociceptive mediators were measured in serum using commercial ELISA kits: tumor necrosis factor alpha (TNF-α), interleukin-1 beta (IL-1β), nerve growth factor (NGF), and matrix metalloproteinases 1, 3, and 9 (MMP-1, MMP-3, MMP-9). The results were analyzed by ANOVA followed by the Bonferroni post-hoc test for multiple comparisons. Micronized PGA reduced neuropathic pain, as shown by the significantly higher PWT and PWL values compared to the vehicle group (p < 0.0001 for all the evaluated time points). The effect of PGA-cur was superior at all time points (p < 0.005). PGA-cur restored motor function as early as day 14 (p < 0.005), while micronized PGA was effective a week later (day 21). The MIA-induced increase in the serum levels of all the investigated mediators was inhibited by PGA-cur (p < 0.01). PGA was also effective, except on IL-1β and MMP-3. Curcumin alone was inactive in all the experiments at any time point. These encouraging results suggest that PGA-cur may represent a valuable option in OA pain management and warrant further confirmation in well-powered clinical trials.
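
As a rough illustration of the statistical workflow named in the abstract (one-way ANOVA followed by Bonferroni-corrected pairwise comparisons), a minimal Python sketch follows; the group arrays, sizes, and values are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch: one-way ANOVA followed by Bonferroni-corrected pairwise
# comparisons. Group values are hypothetical placeholders, not the study's data.
import itertools
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical paw-withdrawal thresholds (grams) for four treatment groups
groups = {
    "vehicle":  np.array([18.2, 17.5, 19.1, 16.8, 18.0]),
    "PGA":      np.array([24.3, 25.1, 23.8, 24.9, 25.5]),
    "curcumin": np.array([18.9, 17.7, 18.4, 19.2, 18.1]),
    "PGA-cur":  np.array([28.4, 29.0, 27.7, 28.8, 29.3]),
}

# One-way ANOVA across all groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise t-tests with Bonferroni correction of the p-values
pairs = list(itertools.combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, p_adj, _, _ = multipletests(raw_p, alpha=0.05, method="bonferroni")
for (a, b), p, sig in zip(pairs, p_adj, reject):
    print(f"{a} vs {b}: adjusted p = {p:.4g}, significant = {sig}")
```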

Keywords: ALIAmides, curcumin, osteoarthritis, palmitoyl-glucosamine

Procedia PDF Downloads 84
299 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication

Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons

Abstract:

The recent increase in the application of Additive Manufacturing (AM) to products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built AM hybrid system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymer (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop-watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability, and degradation behavior in order to find the optimum post-printing annealing conditions. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, the resistance values of the tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our AM printer is able to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional, saleable electronic products from a small-format modular platform, making 3D printing better, faster, and stronger.

Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture

Procedia PDF Downloads 143
298 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine the set of optimal values for certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while at the same time considering the route choice behavior of network users. Most NDP studies have investigated random demand and route selection constraints separately because of computational challenges. In this work, we consider both random demand and route selection constraints simultaneously. This work presents a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, subject to random demand and a stochastic route selection constraint, and aims to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used as the travel time function on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine road capacities and origin zones simultaneously. Minimizing the travel and expansion costs of routes and origin zones on one side and minimizing CO emissions on the other are considered at the same time in this analysis. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model. Treating both demand and route choice as random makes the approach more applicable to the design of urban networks. The epsilon-constraint method is one way to solve both linear and nonlinear multi-objective problems, and it is used to solve the problem in this work. The problem was solved by keeping the first objective (the cost function) as the objective function of the problem and turning the second objective into a constraint that must be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved in GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
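
As a rough illustration of two technical ingredients named above, the BPR link impedance function and the epsilon-constraint sweep that traces a Pareto set, a minimal Python sketch on a single hypothetical link follows; the link data, the cost and emission forms, and all parameter values are illustrative assumptions, not the paper’s model, which was formulated in GAMS.

```python
# Sketch of the BPR travel-time function and an epsilon-constraint sweep for a
# toy two-objective capacity-expansion problem. All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

def bpr_travel_time(t0, flow, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link impedance: t = t0 * (1 + alpha*(v/c)^beta)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

# Toy decision variable: capacity expansion x on a single link (veh/h)
t0, demand, base_cap = 10.0, 1500.0, 1200.0

def cost(x):       # total travel time plus expansion cost (illustrative weights)
    return demand * bpr_travel_time(t0, demand, base_cap + x[0]) + 5.0 * x[0]

def emission(x):   # CO emission assumed proportional to total travel time (illustrative)
    return 0.02 * demand * bpr_travel_time(t0, demand, base_cap + x[0])

# Epsilon-constraint: minimize cost subject to emission <= eps, sweeping eps
pareto = []
for eps in np.linspace(emission([0.0]), emission([2000.0]), 8):
    res = minimize(cost, x0=[100.0], bounds=[(0.0, 2000.0)],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - emission(x)}])
    if res.success:
        pareto.append((cost(res.x), emission(res.x)))

for c, e in pareto:
    print(f"cost = {c:8.1f}, emission = {e:6.1f}")
```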

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 617
297 Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases

Authors: Andrew Cummings, Rhianne Boag, Sarah Bryson, Gordon Turner

Abstract:

Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions in order to meet allowable activity release rates. Package design safety reports are the safety cases that provide the claims, evidence, and arguments to demonstrate that packages meet the regulations; once these are approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. In this document, a method for measuring a leak rate from the package is explained by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop measured; a small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material, and treating that fraction of material as a gas, allowable leak rates for NCT and ACT are calculated. This adherence to basic safety principles in ISO 12807 is very pessimistic but is current practice in the demonstration of transport safety and is accepted by the UK regulator. It is UK government policy that management of HAW will be through geological disposal. It is proposed that the intermediate-level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space, because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason, a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well-understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
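
As a rough illustration of the two retained processes, Brownian coagulation and gravitational settling, a minimal Python sketch of a monodisperse aerosol number balance follows; the particle size, density, settling height, and initial concentration are illustrative assumptions and not values from the safety case.

```python
# Monodisperse number balance with only Brownian coagulation and gravitational
# settling, the two processes retained above. All parameter values are assumed.
import numpy as np
from scipy.integrate import solve_ivp

k_B, T, g = 1.380649e-23, 293.15, 9.81          # SI units
mu_air, rho_p = 1.81e-5, 2500.0                 # air viscosity (Pa*s), particle density (kg/m^3)
d_p = 1.0e-6                                    # particle diameter (m), assumed
H = 0.5                                         # settling height (m), assumed

# Brownian collision kernel for equal-sized particles (continuum regime)
K0 = 8.0 * k_B * T / (3.0 * mu_air)             # m^3/s
# Stokes settling velocity
v_s = rho_p * g * d_p**2 / (18.0 * mu_air)      # m/s

def dNdt(t, N):
    """Number concentration balance: coagulation loss + settling loss."""
    return [-0.5 * K0 * N[0] ** 2 - (v_s / H) * N[0]]

N0 = [1.0e12]                                   # initial concentration (#/m^3), assumed
sol = solve_ivp(dNdt, (0.0, 3600.0), N0, t_eval=np.linspace(0.0, 3600.0, 7))
for t, N in zip(sol.t, sol.y[0]):
    print(f"t = {t:6.0f} s  N = {N:.3e} #/m^3")
```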

Keywords: aerosol processes, Brownian coagulation, gravitational settling, transport regulations

Procedia PDF Downloads 92
296 Numerical Analyses of Dynamics of Deployment of PW-Sat2 Deorbit Sail Compared with Results of Experiment under Micro-Gravity and Low Pressure Conditions

Authors: P. Brunne, K. Ciechowska, K. Gajc, K. Gawin, M. Gawin, M. Kania, J. Kindracki, Z. Kusznierewicz, D. Pączkowska, F. Perczyński, K. Pilarski, D. Rafało, E. Ryszawa, M. Sobiecki, I. Uwarowa

Abstract:

The large amount of space debris nowadays constitutes a real threat to operating spacecraft; therefore, the main purpose of the PW-Sat2 team was to create a system that could help clear the Earth’s orbit after each small satellite mission. After 4 years of development, a motorless, low-energy-consumption, and low-weight system has been created. During a series of tests, the system has shown highly reliable performance. The PW-Sat2 deorbit system is a square-shaped sail that covers an area of 4 m². The sail surface is made of 6 μm aluminized Mylar film which is stretched across 4 diagonally placed arms, each consisting of two C-shaped flat springs and enveloped in Mylar sleeves. The sail is coiled using a special, custom-designed folding stand that provides automation and repeatability of the sail unwinding tests and is placed in a container with an inner diameter of 85 mm. In the final configuration, the deorbit system weighs ca. 600 g and occupies 0.6U (in accordance with the CubeSat standard). The sail’s release system requires a minimal amount of power; it is based on a thermal knife that burns through the Dyneema wire holding the system before deployment. The sail is pushed out of the container to a safe distance (20 cm) from the satellite. The energy for the deployment is provided entirely by the coiled C-shaped flat springs, which unfold the sail surface during the release. To avoid dynamic effects on the satellite’s structure, there is a rotational link between the sail and the satellite’s main body. To obtain complete knowledge of the complex dynamics of the deployment, a number of experiments have been performed in varied environments. A numerical model of the dynamics of the sail’s deployment has been built and is still under continuous development. Currently, the integration of the flight model and the deorbit sail is being performed. The launch is scheduled for February 2018. At the same time, in cooperation with the United Nations Office for Outer Space Affairs, sail models and the requested facilities are being prepared for the sail deployment experiment under micro-gravity and low-pressure conditions at the Bremen Drop Tower, Germany. The results of those tests will provide extensive knowledge about deployment in the space environment to which the system will be exposed during its mission. The outcomes of the numerical model and the tests will then be compared and will help the team build a reliable and correct model of the very complex phenomenon of the deployment of four C-shaped flat springs with an attached surface. The verified model could be used, inter alia, to investigate whether the PW-Sat2 sail is scalable and how far it can be enlarged when creating systems for bigger satellites.

Keywords: cubesat, deorbitation, sail, space debris

Procedia PDF Downloads 266
295 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, on the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
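
As a rough illustration of the measurement step described above, a minimal Python sketch follows: given a binary segmentation mask (here a synthetic stand-in for the U-net output), each crystal is delimited by connected-component labeling and its position, area, and perimeter are extracted. The mask contents and the pixel-to-nanometre scale are illustrative assumptions.

```python
# Sketch: label crystals in a segmentation mask and extract per-crystal measures.
import numpy as np
from skimage.measure import label, regionprops

# Synthetic stand-in for a U-net segmentation mask (1 = crystal, 0 = background)
mask = np.zeros((128, 128), dtype=np.uint8)
mask[10:40, 15:50] = 1          # hypothetical crystal 1
mask[70:110, 60:120] = 1        # hypothetical crystal 2

NM_PER_PIXEL = 5.0              # assumed SEM scale calibration

labeled = label(mask)           # object delimitation via connected components
for region in regionprops(labeled):
    cy, cx = region.centroid
    area_nm2 = region.area * NM_PER_PIXEL ** 2
    perim_nm = region.perimeter * NM_PER_PIXEL
    print(f"crystal {region.label}: centroid=({cx:.1f}, {cy:.1f}) px, "
          f"area={area_nm2:.0f} nm^2, perimeter={perim_nm:.0f} nm")
```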

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 137
292 Brand Building in Higher Education: A Grounded Theory Investigation of the Impact of the ‘Positive-Visualization-Course in Brand Identity’ upon Freshmen Students' Perception

Authors: Maria Kountouridou, Dino Domic

Abstract:

Within an increasingly competitive and dynamic environment, the higher education sector is becoming more commodified, and the concept of branding is becoming exceedingly imperative and an inextricable ingredient of a university’s success. Branding in higher education has proven to be an effective strategy and has received considerable attention in recent years, with a growing number of articles beginning to appear in the literature. However, a clear void in the literature confirms that the concept of students’ perceptions of a university’s brand image has not been researched extensively. An investigation of this central concept is of paramount importance, since it will facilitate the development of an inductively generated theoretical model concerning branding in higher education. This research focuses on examining the impact of the ‘positive-visualization-course in brand identity’ upon the perception of freshmen students towards a university’s brand image. A grounded theory methodology was selected, consisting of semi-structured interviews. Forty-two students participated in the research, twenty-five women and seventeen men. The sample was identified through the use of the snowball sampling technique. The participants were divided into two groups (experimental and control group) after the researcher had taken into consideration the factor ‘program of study’, to eliminate any possible interaction between the participants of each group. An experiment was carried out in which a ‘positive-visualization-course in brand identity’ was conducted among the participants of the experimental group, while the participants of the control group were not exposed to the course. For the purpose of this research, the term ‘positive-visualization-course in brand identity’ refers to a course in which the brand’s history, past achievements/recognitions/awards, values, and mission are presented. Prior to the course implementation, face-to-face semi-structured interviews were carried out among the participants of both groups, with the aim of examining the freshmen students’ perceptions of the university’s brand image. One week after the course implementation, the researcher carried out semi-structured interviews with the participants of the experimental group only, in order to identify whether students’ perceptions had been affected after the course completion. Four months after the course completion, semi-structured interviews were carried out among the participants of both groups. Eight months after the course completion, semi-structured interviews were conducted with the aim of identifying the freshmen students’ updated perceptions. Data were analyzed using substantive coding (open and selective coding), theoretical coding, field memos, and constant comparative analysis. The findings strongly suggest that the ‘positive-visualization-course in brand identity’ can positively affect freshmen students’ perceptions of a university’s brand image. Additionally, other factors contribute to the formation of perception over the following months. This study contributes to and expands upon the existing literature by presenting an inductively generated theoretical model to guide future research on the links between the ‘positive-visualization-course in brand identity’ and the perception of freshmen students towards a university’s brand image.

Keywords: brand image, brand name, branding, higher education marketing, perception

Procedia PDF Downloads 154
293 Study on Changes of Land Use Impacting the Process of Urbanization Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda

Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi

Abstract:

Human activities cause land use and land cover to change or transition gradually. In this study, we examined the use of Landsat TM data to detect land use change in Kigali between 1987 and 2009, using remote sensing techniques and data analysis in ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999, and 2009; changed areas were identified, revealing a dynamic land use situation in Kigali city during the 22 years studied. Based on the relevant Landsat data, the research focused on land use change and the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period from 1999 to 2009 there was a reduction in built-up land and vegetation after the Kigali city authority established a Master Plan, under which all constructions outside the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve the economy, which will increase population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and the specification of constraints. To achieve this purpose, the Government has set out, for the overall planning of Kigali city, different stages of detailed design descriptions, strategies, and action plans that will guide Kigali planners and members of the public in the future toward more detailed regional plans and practical measures. Thus, land use change significantly reflects human activity in Kigali and plays an important role in informing national decisions. Another aspect to take into account is the natural situation of Kigali city: agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the constructed area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda has a continuously growing population, a low rate of land utilization, and a low level of urbanization. As mentioned earlier, the 1994 genocide massacres, population growth, and urbanization processes have been the factors driving the dramatic changes in land use. Further research will focus on the analysis of Rwanda’s natural resources and the social and economic factors that could be the driving forces of land use change.
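
As a rough illustration of the change-detection step described above, a minimal Python sketch follows: two classified maps are cross-tabulated into a land-cover transition matrix. The tiny rasters and class codes are placeholders, not the study’s 1987/1999/2009 classifications.

```python
# Sketch: land-cover transition (cross-tabulation) matrix between two classified maps.
import numpy as np

CLASSES = ["bare soil", "built up", "wetland", "water", "vegetation", "others"]

# Hypothetical classified rasters (values are indices into CLASSES)
map_1987 = np.array([[4, 4, 0], [4, 1, 1], [3, 2, 4]])
map_2009 = np.array([[1, 4, 1], [1, 1, 1], [3, 2, 1]])

n = len(CLASSES)
transition = np.zeros((n, n), dtype=int)
for old, new in zip(map_1987.ravel(), map_2009.ravel()):
    transition[old, new] += 1   # rows: 1987 class, columns: 2009 class

print("rows = 1987 class, columns = 2009 class")
print(transition)
# e.g. transition[4, 1] counts pixels that changed from vegetation to built-up land
```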

Keywords: land use change, urbanization, Kigali City, Landsat

Procedia PDF Downloads 287
292 Dietary Intake and Nutritional Inadequacy Leading to Malnutrition among Children Residing in Shelter Home, Rural Tamil Nadu, India

Authors: Niraimathi Kesavan, Sangeeta Sharma, Deepa Jagan, Sridhar Sukumar, Mohan Ramachandran, Vidhubala Elangovan

Abstract:

Background: Childhood is a dynamic period of growth and development. Optimum nutrition during this period forms a strong foundation for growth, development, resistance to infections, long-term good health, cognition, educational achievement, and work productivity in a later phase of life. Underprivileged children living in resource-constrained settings such as shelter homes are at high risk of malnutrition due to poor-quality diet and nutritional inadequacy. In low-income countries, underprivileged children are vulnerable to being deprived of nutritious food, which stands as a major challenge in the health sector. The present study aims to assess dietary intake, nutritional status, and nutritional inadequacy and their association with malnutrition among children residing in shelter homes in rural Tamil Nadu. Methods: The study was a descriptive survey conducted among all the children aged 8-18 years residing in two selected shelter homes (Anbu illam, a home for female children, and Amaidhi illam, a home for male children) in rural Tirunelveli, Tamil Nadu, India. A total of 57 children were recruited for the study, including 18 boys and 39 girls. Dietary intake was measured using 24-hour recalls over seven days, and the average nutrient intake was considered for further analysis. Results: Of the 57 children, about 60% (n=35) were undernourished. The mean daily energy intake was 1298 (SD 180) kcal for boys and 952 (SD 155) kcal for girls. The total calorie intake was 55-60% below the estimated average requirement (EAR) for adolescent boys and girls in the age groups 13-15 years and 16-18 years. Carbohydrates were the major source of energy (boys 53% and girls 51%), followed by fat (boys 31.5% and girls 34.5%) and protein (boys 14% and girls 12.9%). Dairy intake (<200 ml/day) was less than the recommendation (500 ml/day). Intake of micronutrient-rich foods such as fruits, vegetables, and green leafy vegetables was <200 g/day, far less than the recommended dietary guideline of 400-600 g/day for the 7-18 years age group. Nearly 26% of girls reported experiencing menstrual problems. The majority (76.9%) of the children exhibited nutrient deficiency-related signs and symptoms. Conclusion: The total energy, mineral, and micronutrient intakes were inadequate and below the Recommended Dietary Allowance for children and adolescents. The diet predominantly consisted of refined cereals, rice, semolina, and vermicelli. Consumption of whole grains, milk, fruits, vegetables, and leafy vegetables was far below the recommended dietary guidelines. Dietary inadequacies among these children pose a serious concern for their overall health status and its consequences in the later phase of life.
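
As a small worked example of the energy-share arithmetic reported above (e.g., roughly 53% carbohydrate, 31.5% fat, and 14% protein of total energy for boys), a minimal Python sketch follows; the gram intakes are hypothetical, and only the 4/9/4 kcal-per-gram factors are standard conversion values.

```python
# Sketch: percentage contribution of each macronutrient to total energy intake.
KCAL_PER_GRAM = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}

def energy_shares(intake_g):
    """Return total kcal and each macronutrient's percentage of total energy."""
    kcal = {m: g * KCAL_PER_GRAM[m] for m, g in intake_g.items()}
    total = sum(kcal.values())
    return total, {m: 100.0 * v / total for m, v in kcal.items()}

# Hypothetical daily intake in grams for one child
total_kcal, shares = energy_shares({"carbohydrate": 172.0, "protein": 45.0, "fat": 45.0})
print(f"total energy: {total_kcal:.0f} kcal")
for macro, pct in shares.items():
    print(f"{macro}: {pct:.1f}% of energy")
```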

Keywords: adolescents, children, dietary intake, malnutrition, nutritional inadequacy, shelter home

Procedia PDF Downloads 58
291 Green Production of Chitosan Nanoparticles and their Potential as Antimicrobial Agents

Authors: L. P. Gomes, G. F. Araújo, Y. M. L. Cordeiro, C. T. Andrade, E. M. Del Aguila, V. M. F. Paschoalin

Abstract:

The application of nanoscale materials and nanostructures is an emerging area, since these materials may provide solutions to technological and environmental challenges while preserving the environment and natural resources. To reach this goal, the increasing demand must be accompanied by 'green' synthesis methods. Chitosan is a natural, nontoxic biopolymer derived from the deacetylation of chitin and has great potential for a wide range of applications in the biological and biomedical areas, due to its biodegradability, biocompatibility, non-toxicity, and versatile chemical and physical properties. Chitosan also presents high antimicrobial activity against a wide variety of pathogenic and spoilage microorganisms. Ultrasonication is a common tool for the preparation and processing of polymer nanoparticles. It is particularly effective in breaking up aggregates and in reducing the size and polydispersity of nanoparticles. High-intensity ultrasonication has the potential to modify chitosan molecular weight and, thus, alter or improve chitosan functional properties. The aim of this study was to evaluate the influence of sonication intensity and time on the characteristics of commercial chitosan, such as molecular weight, and on its potential antibacterial activity against Gram-negative bacteria. The nanoparticles (NPs) were produced from two commercial chitosans from Sigma-Aldrich®, of medium molecular weight (CS-MMW) and low molecular weight (CS-LMW). These samples (2%) were solubilized in 100 mM sodium acetate pH 4.0, placed on ice, and irradiated with an ultrasonic probe (SONIC, model 750 W) equipped with a 1/2" microtip for 30 min at 4°C, using a constant duty cycle and 40% amplitude with 1 s on/1 s off intervals. The ultrasonic degradation of CS-MMW and CS-LMW was followed by means of ζ-potential (Brookhaven Instruments, model 90Plus) and dynamic light scattering (DLS) measurements. After sonication, the concentrated samples were diluted 100 times and placed in fluorescence quartz cuvettes (Hellma 111-QS, 10 mm light path). The colloidal particle size distributions were calculated from the DLS data, and ζ-potential measurements were taken for the CS-MMW and CS-LMW solutions before and after sonication for 30 min (CS-MMW30 and CS-LMW30). The major peaks, centered at the hydrodynamic radius (Rh), showed different distributions for CS-MMW (Rh=690.0 nm, ζ=26.52±2.4), CS-LMW (Rh=607.4 and 2805.4 nm, ζ=24.51±1.29), CS-MMW30 (Rh=201.5 and 1064.1 nm, ζ=24.78±2.4), and CS-LMW30 (Rh=492.5 nm, ζ=26.12±0.85). The minimal inhibitory concentration (MIC) was determined using different concentrations of the chitosan samples. MIC values were determined against E. coli (10⁶ cells) harvested from LB medium (Luria-Bertani, BD™) after 18 h of growth at 37 °C. Subsequently, the cell suspension was serially diluted in saline solution (0.8% NaCl) and plated on solid LB at 37 °C for 18 h, and colony-forming units were counted. The samples showed different MICs against E. coli: CS-LMW (1.5 mg/mL), CS-MMW30 (1.5 mg/mL), and CS-LMW30 (1.0 mg/mL). The results demonstrate that the production of nanoparticles by modification of their molecular weight through ultrasonication is simple to perform and dispenses with the addition of acid solvents. The molecular weight modifications are enough to provoke changes in the antimicrobial potential of the nanoparticles produced in this way.
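
As a rough illustration of the relation underlying the DLS results quoted above, a minimal Python sketch of the Stokes-Einstein equation, which links the measured diffusion coefficient to the hydrodynamic radius Rh, follows; the diffusion coefficient used is an assumed illustrative value, not a measurement from the study.

```python
# Sketch: Stokes-Einstein conversion of a diffusion coefficient to a hydrodynamic radius.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.15              # temperature, K (assumed 25 degrees C)
eta = 0.00089           # viscosity of water at 25 degrees C, Pa*s

def hydrodynamic_radius(D):
    """Rh = k_B * T / (6 * pi * eta * D), with D in m^2/s; Rh returned in nm."""
    return k_B * T / (6.0 * math.pi * eta * D) * 1e9

D_assumed = 3.5e-13     # m^2/s, hypothetical diffusion coefficient
print(f"Rh = {hydrodynamic_radius(D_assumed):.0f} nm")
```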

Keywords: antimicrobial agent, chitosan, green production, nanoparticles

Procedia PDF Downloads 303
290 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (increased fuel consumption for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer sufficiently representative of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as the test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the engine fan speed prediction error relative to the FCOM was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
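
As a rough illustration of the adaptive lookup-table idea described above, a minimal Python sketch follows: a fuel-flow table indexed by altitude and Mach is interpolated to predict flow, and each in-flight measurement nudges the surrounding table nodes toward the observed value. The table contents, grids, update gain, and measurement sequence are illustrative assumptions, not the paper’s data or exact algorithm.

```python
# Sketch: bilinear lookup-table prediction with an incremental node correction.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

alt_grid = np.array([30000.0, 35000.0, 40000.0])       # ft
mach_grid = np.array([0.70, 0.75, 0.80])
fuel_flow = np.array([[1900.0, 2000.0, 2150.0],         # kg/h, initial table (illustrative)
                      [1750.0, 1850.0, 2000.0],
                      [1600.0, 1700.0, 1850.0]])

def predict(alt, mach):
    interp = RegularGridInterpolator((alt_grid, mach_grid), fuel_flow)
    return float(interp([[alt, mach]])[0])

def update(alt, mach, measured, gain=0.2):
    """Distribute the prediction error to the four surrounding table nodes."""
    error = measured - predict(alt, mach)
    i = np.clip(np.searchsorted(alt_grid, alt) - 1, 0, len(alt_grid) - 2)
    j = np.clip(np.searchsorted(mach_grid, mach) - 1, 0, len(mach_grid) - 2)
    wa = (alt - alt_grid[i]) / (alt_grid[i + 1] - alt_grid[i])
    wm = (mach - mach_grid[j]) / (mach_grid[j + 1] - mach_grid[j])
    weights = np.array([[(1 - wa) * (1 - wm), (1 - wa) * wm],
                        [wa * (1 - wm),       wa * wm]])
    fuel_flow[i:i + 2, j:j + 2] += gain * error * weights

# Hypothetical sequence of in-flight fuel-flow samples at one cruise condition
for measured in [2010.0, 2015.0, 2012.0, 2014.0]:
    update(36000.0, 0.76, measured)
    print(f"prediction after update: {predict(36000.0, 0.76):.1f} kg/h")
```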

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 230
289 Mistletoe Supplementation and Exercise Training on IL-1β and TNF-α Levels

Authors: Alireza Barari, Ahmad Abdi

Abstract:

Introduction: Plyometric training (PT) is popular among individuals involved in dynamic sports and is performed with the goal of improving muscular performance. Cytokines are considered immunoregulatory molecules that regulate immune function and other body responses. The pro-inflammatory cytokines TNF-α and IL-1β have been reported to increase during and after exercise. If the release of cytokines that cause responses such as inflammation in skeletal muscle cells can be avoided or limited by manipulating the training program or optimizing nutrition, the injuries associated with cytokine release may be prevented or reduced. Mistletoe extracts have been shown to exert immune-modulating effects. Materials and methods: The aim of the present study was to investigate the effect of six weeks of PT with or without mistletoe supplementation (MS) (10 mg/kg) on cytokine responses and performance in male basketball players. This study is semi-experimental. The statistical population consisted of male student basketball players of Mahmoud Abad city. The sample comprised 32 basketball players aged 14-17 years, selected randomly and divided into four groups of 8. Participants were randomly assigned to either an experimental group (E, n=16) that performed plyometric exercises with (n=8) or without (n=8) MS, or a control group (C, n=16) that rested, with (n=8) or without (n=8) MS. Plants were collected in June from the Mazandaran forest in northern Iran. They were then dried in air on a clean textile, without exposure to sunlight, and turned regularly until they had lost their moisture. Each subject consumed 10 mg/kg/day of extract over the six weeks of the intervention. Pre- and post-testing were performed in the afternoon of the same day. Blood samples (10 ml) were collected from the intermediate cubital vein of the subjects; the second blood samples were collected the day after the last plyometric training session. Serum concentrations of IL-1β and TNF-α were measured by the ELISA method. Pretest-to-posttest changes were assessed by t-test for paired samples, and group differences were evaluated using one-way ANOVA with Tukey's post-hoc test for the analysis and comparison of group variables. Results: PT with or without MS improved the one-repetition maximum leg and chest press, the Sargent jump test, and power in the RAST (P < 0.05). However, there were no statistically significant differences between groups in VO2max measures (P > 0.05). PT resulted in a significant increase in plasma IL-1β concentration from 1.08±0.4 mg/ml pre-training to 1.68±0.18 mg/ml post-training (P=0.006), while MS significantly decreased this training-induced increment of IL-1β (P=0.007). In contrast, neither PT nor MS had any effect on TNF-α levels (P > 0.05). Discussion: The results of this investigation indicate that PT improved muscular performance and increased the IL-1β concentration. The increase in IL-1β after exercise-induced skeletal muscle damage reflects the role of this cytokine in inflammatory processes and in the repair of damaged skeletal muscle. Mistletoe supplementation, however, ameliorated the increment in IL-1β levels, indicating a beneficial effect of mistletoe on the immune response following plyometric training.

Keywords: mistletoe supplementation, training, IL-1β, TNF-α

Procedia PDF Downloads 628
288 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. For most aircraft categories, tire assemblies are filled with compressed nitrogen; they support the aircraft’s weight on the ground, provide a means of controlling the aircraft during taxi, takeoff, and landing, and provide traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. Concerning ambient temperature change, when the temperature differs between the origin and destination airports, the tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which supersedes the normal over-inflation limit of 5 percent at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of the specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Due to the increase of human errors in the aviation industry, exorbitant costs are imposed on airlines for providing consumable parts such as aircraft tires. An intelligent system that adjusts the aircraft tire pressure based on weight, load, temperature, and the weather conditions of the origin and destination airports could have a significant effect on reducing aircraft maintenance costs and fuel consumption and on improving the environmental issues related to air pollution. The intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle at 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main wheel landing gear area and are connected through nitrogen lines to the main wheel and nose wheel assemblies. Nitrogen control and monitoring are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, the fuel quantity, and the wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, the consumption of materials, and the stresses imposed on the aircraft body.
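
As a small worked example of the temperature adjustment discussed above, a minimal Python sketch follows: for a fixed tire volume, nitrogen pressure scales with absolute temperature (Gay-Lussac's law), so a tire serviced at a warm airport reads low after cold-soaking at a colder destination. The pressures and temperatures used are illustrative, not values from the paper.

```python
# Sketch: temperature correction of tire pressure via Gay-Lussac's law.
def pressure_at_temperature(p_gauge_psi, t_origin_c, t_dest_c, p_ambient_psi=14.7):
    """Return the gauge pressure the tire will show after soaking at t_dest_c."""
    p_abs = p_gauge_psi + p_ambient_psi                        # absolute pressure
    p_abs_dest = p_abs * (t_dest_c + 273.15) / (t_origin_c + 273.15)
    return p_abs_dest - p_ambient_psi

p_origin = 200.0        # psi gauge, serviced at the origin airport (illustrative)
p_dest = pressure_at_temperature(p_origin, t_origin_c=30.0, t_dest_c=-10.0)
shortfall = 100.0 * (p_origin - p_dest) / p_origin
print(f"pressure at destination: {p_dest:.1f} psi ({shortfall:.1f}% low)")
# A system such as ITPRS would add nitrogen so the tire reads the specified
# operating pressure at the colder airport.
```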

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 213