Search results for: analytical procedure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4373

323 Analysis of Aspergillus fumigatus IgG Serologic Cut-Off Values to Increase Diagnostic Specificity of Allergic Bronchopulmonary Aspergillosis

Authors: Sushmita Roy Chowdhury, Steve Holding, Sujoy Khan

Abstract:

The immunogenic responses of the lung to the fungus Aspergillus fumigatus range from invasive aspergillosis in the immunocompromised, through fungal ball (infection within a pre-existing lung cavity) in those with structural lung lesions, to allergic bronchopulmonary aspergillosis (ABPA). Patients with asthma or cystic fibrosis are particularly predisposed to ABPA. Consensus guidelines have established criteria for the diagnosis of ABPA, but uncertainty remains over the serologic cut-off values that would increase its diagnostic specificity. We retrospectively analyzed 80 patients with severe asthma and evidence of peripheral blood eosinophilia (> 500) over the last 3 years who underwent all serologic tests to exclude ABPA. Total IgE, specific IgE and specific IgG levels against Aspergillus fumigatus were measured using the ImmunoCAP Phadia-100 (Thermo Fisher Scientific, Sweden). The Modified ISHAM working group 2013 criteria (obligate criteria: asthma or cystic fibrosis, total IgE > 1000 IU/ml or > 417 kU/L, and positive specific IgE to Aspergillus fumigatus or skin test positivity; with ≥ 2 of peripheral eosinophilia, positive specific IgG to Aspergillus fumigatus, and consistent radiographic opacities) were used in the clinical workup for the final diagnosis of ABPA. Patients were divided into 3 groups - definite, possible, and no evidence of ABPA. Specific IgG Aspergillus fumigatus levels were not used to assign patients to any of the groups. Of the 80 patients (48 males, 32 females; mean age 53.9 years ± SD 15.8) selected for the analysis, 30 had positive specific IgE against Aspergillus fumigatus (37.5%). 13 patients fulfilled the Modified ISHAM working group 2013 criteria of ABPA ('definite'), while 15 were 'possible' ABPA and 52 did not fulfill the criteria (not ABPA). As IgE levels were not normally distributed, median levels were used in the analysis. Median total IgE levels of patients with definite and possible ABPA were 2144 kU/L and 2597 kU/L respectively (non-significant), while median specific IgE to Aspergillus fumigatus, at 4.35 kUA/L and 1.47 kUA/L respectively, differed significantly (comparison of standard deviations, F-statistic 3.2267, significance level p=0.040). Mean levels of IgG anti-Aspergillus fumigatus in the three groups (definite, possible and no evidence of ABPA) were compared using ANOVA (Statgraphics Centurion Professional XV, Statpoint Inc). The mean level of IgG anti-Aspergillus fumigatus (Gm3) in definite ABPA was 125.17 mgA/L (± SD 54.84, 95%CI 92.03-158.32), while mean Gm3 levels in possible and no ABPA were 18.61 mgA/L and 30.05 mgA/L respectively. ANOVA showed a significant difference between the definite group and the other groups (p < 0.001), confirmed using multiple range tests (Fisher's least significant difference procedure). There was no significant difference between the possible ABPA and not ABPA groups (p > 0.05). The study showed that a sizeable proportion of patients with asthma in this part of India are sensitized to Aspergillus fumigatus. A higher cut-off value of Gm3 ≥ 80 mgA/L provides higher serologic specificity for definite ABPA. Long-term studies would show whether patients with 'possible' ABPA and positive Gm3 later develop definite ABPA and differ in this respect from the Gm3-negative group. Serologic testing with clearly defined cut-offs is a valuable adjunct in the diagnosis of ABPA.
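
As an illustration of the group comparison described above, the following minimal Python sketch runs a one-way ANOVA across the three diagnostic groups; the abstract used Statgraphics Centurion, and all Gm3 values below are invented placeholders, not study data.

```python
# Hypothetical sketch: comparing mean IgG anti-Aspergillus fumigatus (Gm3)
# levels across the three diagnostic groups with one-way ANOVA.
# The sample values are invented for illustration only.
from scipy import stats

gm3_definite = [125.2, 98.4, 160.1, 143.7, 110.9]   # mgA/L, illustrative
gm3_possible = [18.6, 22.3, 15.1, 20.8, 16.4]
gm3_not_abpa = [30.1, 27.5, 35.2, 28.9, 31.6]

f_stat, p_value = stats.f_oneway(gm3_definite, gm3_possible, gm3_not_abpa)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise follow-up in the spirit of Fisher's LSD: unadjusted t-tests
# between group pairs (the abstract used Statgraphics' multiple range tests).
for name, group in [("possible", gm3_possible), ("not ABPA", gm3_not_abpa)]:
    t, p = stats.ttest_ind(gm3_definite, group)
    print(f"definite vs {name}: t = {t:.2f}, p = {p:.4f}")
```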

Keywords: allergic bronchopulmonary aspergillosis, Aspergillus fumigatus, asthma, IgE level

Procedia PDF Downloads 187
322 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches

Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys

Abstract:

Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the printed circuit board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer (OEM) requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed build-ups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods of assessing system (PCB) reliability. Nowadays OEMs, PCB manufacturers and scientists are working together closely to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials precisely (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way owing to the thickness of the laminate (a few hundred microns). It should be noted that knowledge of the out-of-plane properties is fundamental to investigating the lifetime of high density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements—The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space and Cimulec, is acknowledged.
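
A minimal sketch of the inverse-method idea follows, assuming a simple rule-of-mixtures homogenization in place of the study's 3D woven-composite model; the fibre modulus, fibre volume fraction and measured modulus are illustrative assumptions.

```python
# Minimal sketch of the inverse method: estimate the unknown resin modulus
# E_matrix so that a homogenized laminate modulus matches the measured
# in-plane value. The real study used a 3D woven-composite homogenization;
# the rule-of-mixtures model and all numbers here are illustrative stand-ins.
from scipy.optimize import brentq

E_FIBER = 73.0      # GPa, assumed glass-fibre modulus
V_FIBER = 0.45      # assumed fibre volume fraction
E_MEASURED = 35.0   # GPa, hypothetical measured in-plane modulus

def homogenized_modulus(e_matrix: float) -> float:
    # Voigt (rule of mixtures) estimate of the in-plane modulus
    return V_FIBER * E_FIBER + (1.0 - V_FIBER) * e_matrix

# Find the resin modulus that makes the model reproduce the measurement
e_matrix = brentq(lambda e: homogenized_modulus(e) - E_MEASURED, 0.1, 20.0)
print(f"Estimated resin modulus: {e_matrix:.2f} GPa")
```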

Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites

Procedia PDF Downloads 184
321 Measuring Biobased Content of Building Materials Using Carbon-14 Testing

Authors: Haley Gershon

Abstract:

The transition from fossil fuel-based building materials to eco-friendly, biobased building materials plays a key role in sustainable building. The growing global demand for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content in a material's ingredients. This presentation focuses on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Fossil material older than 50,000 years contains no detectable carbon-14. The radiocarbon method is thus used to determine the amount of carbon-14 present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of materials in solid, liquid, or gaseous form by radiocarbon analysis. Samples are combusted, converted into solid graphite, pressed onto a metal disc and mounted onto the wheel of an accelerator mass spectrometer (AMS) for analysis. The AMS instrument counts the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of the ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients derived from biomass-sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result between 0% and 100% biobased indicates a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, attesting third-party verification and displaying a product's percentage of biobased content. The USDA program includes a specific category for building materials. To qualify for biobased certification under this product category, examples of the criteria that must be met include a minimum of 62% biobased content for wall coverings, a minimum of 25% for lumber, and a minimum of 91% for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.
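
The percent-biobased calculation at the heart of ASTM D6866 can be sketched as below; the reference factor REF, which corrects for the small excess of atmospheric ("bomb") carbon-14, is an assumed round value here, as the standard prescribes the exact factor to apply.

```python
# Illustrative sketch of the percent-biobased calculation underlying
# ASTM D6866: the sample's carbon-14 activity, reported as percent modern
# carbon (pMC), is divided by a reference factor correcting for excess
# atmospheric carbon-14. REF = 100.0 is an assumed value for this sketch.
def percent_biobased(pmc: float, ref: float = 100.0) -> float:
    """Convert a pMC measurement into percent biobased content."""
    return min(100.0, 100.0 * pmc / ref)

print(percent_biobased(0.0))    # fossil-only material  -> 0.0
print(percent_biobased(100.0))  # fully plant-based     -> 100.0
print(percent_biobased(62.0))   # mixed-source material -> 62.0
```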

Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials

Procedia PDF Downloads 146
320 Study of the Kinetics of Formation of Carboxylic Acids Using Ion Chromatography during Oxidation Induced by Rancimat of the Oleic Acid, Linoleic Acid, Linolenic Acid, and Biodiesel

Authors: Patrícia T. Souza, Marina Ansolin, Eduardo A. C. Batista, Antonio J. A. Meirelles, Matthieu Tubino

Abstract:

Lipid oxidation is a major cause of deterioration in biodiesel quality, because the degradation products damage engines. Among the main undesirable effects are increased viscosity and acidity, leading to the formation of insoluble gums and sediments which block fuel filters. Auto-oxidation is defined as the spontaneous reaction of atmospheric oxygen with lipids. Unsaturated fatty acids are usually the components affected by such reactions; they are present as free fatty acids, fatty esters and glycerides. To determine the oxidative stability of biodiesels through the induction period (IP), the Rancimat method is used, which allows continuous monitoring of the induced oxidation process of the samples. During lipid oxidation, volatile organic acids are produced as byproducts; in addition, other byproducts, including alcohols and carbonyl compounds, may be further oxidized to carboxylic acids. Using the ion chromatography (IC) methodology developed in this work, organic anions of carboxylic acids were quantified in the water contained in the conductimetric vessel for samples subjected to oxidation induced by Rancimat. The optimized chromatographic conditions were: eluent water:acetone (80:20 v/v) with 0.5 mM sulfuric acid; flow rate 0.4 mL min-1; injection volume 20 µL; eluent suppressor 20 mM LiCl; analytical curve from 1 to 400 ppm. The samples studied were methyl biodiesel from soybean oil and unsaturated fatty acid standards: oleic, linoleic and linolenic. The induced oxidation kinetics curves were constructed by analyzing the water contained in the conductimetric vessels, which were removed one by one from the Rancimat apparatus at predetermined time intervals. About 3 g of sample were used under the conditions of 110 °C and an air flow rate of 10 L h-1. The water of each conductimetric Rancimat measuring vessel, where the volatile compounds were collected, was filtered through a 0.45 µm filter and analyzed by IC. From the kinetic data on the formation of the organic anions of carboxylic acids, their formation rates were calculated. The observed order of the rates of formation of the anions was: formate >>> acetate > hexanoate > valerate for oleic acid; formate > hexanoate > acetate > valerate for linoleic acid; formate >>> valerate > acetate > propionate > butyrate for linolenic acid. It can be supposed that propionate and butyrate are obtained mainly from linolenic acid and that hexanoate originates from oleic and linoleic acids. For the methyl biodiesel the order of formation of anions was: formate >>> acetate > valerate > hexanoate > propionate. According to the total rates of formation of these anions produced during induced degradation, the fatty acids can be assigned the order of reactivity: linolenic acid > linoleic acid >>> oleic acid.
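
A minimal sketch of how formation rates can be extracted from such kinetic curves: anion concentration from IC versus oxidation time, fitted with a line whose slope is the rate. The data points are hypothetical, not the study's measurements.

```python
# Sketch: fit a formation rate from a kinetic curve of anion concentration
# (from IC analysis of the conductimetric-vessel water) versus Rancimat
# oxidation time. The data points below are invented placeholders.
import numpy as np

time_h = np.array([0, 2, 4, 6, 8, 10])              # hours in the Rancimat
formate_ppm = np.array([0, 35, 72, 110, 146, 180])  # hypothetical IC results

slope, intercept = np.polyfit(time_h, formate_ppm, 1)  # linear fit
print(f"Formate formation rate: {slope:.1f} ppm/h")
```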

Keywords: anions of carboxylic acids, biodiesel, ion chromatography, oxidation

Procedia PDF Downloads 450
319 Validation of Global Ratings in Clinical Performance Assessment

Authors: S. J. Yune, S. Y. Lee, S. J. Im, B. S. Kam, S. Y. Baek

Abstract:

This study aimed to determine the reliability of clinical performance assessments, which have been emphasized by ability-based education, and of professors' overall assessment methods. We addressed the following questions. First, is there a difference in what evaluators consider to be the main variables affecting the clinical performance test, according to the evaluator's length of service and number of evaluation experiences? Second, what is the relationship among the overall global rating score (G), the analytic global rating score (Gc), and the sum of the task-specific analytic checklist scores (C)? In particular, how do the analytic global rating - with six components in the OSCE (aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude) and four components in the CPX sub-domains - and the task-specific analytic checklist score sum (C) affect the professor's overall global rating score (G)? We studied 75 professors who attended the 2016 Bugyeoung Consortium clinical skills performance test evaluating third- and fourth-year medical students at Pusan National University Medical School in South Korea (39 professors in the OSCE, 36 in the CPX; all consented to participate in our study). Each evaluator used 3 forms: a task-specific analytic checklist, a subsequent analytic global rating scale with six sub-domains, and an overall global scale. After the evaluation, the professors responded to a questionnaire on the important factors in clinical performance assessment. The data were analyzed by frequency analysis, correlation analysis, and hierarchical regression analysis using SPSS 21.0. Their understanding of overall assessment was analyzed by dividing the subjects into groups based on experience. The evaluators considered 'precision' most important in the overall OSCE assessment, and 'precise, accurate physical examination', 'systemic approach to taking patient history', and 'diagnostic skill capability' in the overall CPX assessment. For the OSCE, there was no clear difference of opinion about the main factors, but for the CPX there was. The analytic global rating scale score, overall rating scale score, and analytic checklist score showed meaningful mutual correlations. According to the regression analysis results, the task-specific checklist score sum had the greatest effect on the overall global rating. Professors regarded the task-specific analytic checklist score sum as best reflecting the overall OSCE test score, followed by aseptic practice, precision, systemic approach, proficiency, successfulness, and attitude on the subsequent analytic global rating scale. For the CPX, the subsequent analytic global rating scale score, overall global rating scale score, and task-specific checklist score showed meaningful mutual correlations. These findings support the validity of professors' global ratings in clinical performance assessment.
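
The hierarchical regression logic can be sketched as follows, assuming invented scores; the study itself used SPSS 21.0. The question encoded is whether the checklist sum (C) adds explanatory power for the overall rating (G) beyond the analytic global rating (Gc).

```python
# Minimal sketch of hierarchical (blockwise) regression with statsmodels:
# step 1 regresses G on Gc, step 2 adds the checklist sum ("chk" here,
# standing in for C). All scores below are simulated, not the study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 75
gc = rng.normal(4.0, 0.8, n)
chk = 0.7 * gc + rng.normal(0, 0.5, n)
g = 0.3 * gc + 0.6 * chk + rng.normal(0, 0.4, n)
df = pd.DataFrame({"G": g, "Gc": gc, "chk": chk})

step1 = smf.ols("G ~ Gc", data=df).fit()
step2 = smf.ols("G ~ Gc + chk", data=df).fit()
print(f"R2 step 1: {step1.rsquared:.3f}")
print(f"R2 step 2: {step2.rsquared:.3f} "
      f"(change: {step2.rsquared - step1.rsquared:.3f})")
```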

Keywords: global rating, clinical performance assessment, medical education, analytic checklist

Procedia PDF Downloads 217
318 The Photovoltaic Panel at End of Life: Experimental Study of Metals Release

Authors: M. Tammaro, S. Manzo, J. Rimauro, A. Salluzzo, S. Schiavo

Abstract:

Solar photovoltaic (PV) modules are considered to have a negligible environmental impact compared with fossil energy; nevertheless, their waste management and the corresponding potential environmental hazard need to be considered. The case of the photovoltaic panel is unique because the time lag from manufacturing to decommissioning as waste is usually 25-30 years. The environmental hazard associated with the end of life of PV panels has been largely related to their metal content. The principal concern regards the presence of heavy metals such as Cd in thin film (TF) modules or Pb and Cr in crystalline silicon (c-Si) panels. At the end of life of PV panels, these dangerous substances could be released into the environment if special requirements for their disposal are not adopted. Nevertheless, only a few experimental studies of metal emissions from crystalline silicon and thin film panels, and of the corresponding environmental effects, are present in the literature. As part of a study funded by the Italian national consortium for waste collection and recycling (COBAT), the present work aimed to analyze experimentally the potential release into the environment of hazardous elements, particularly metals, from PV waste. In this paper, for the first time, eighteen releasable metals were investigated in a large number of photovoltaic panels, both c-Si and TF, manufactured in the last 30 years, together with the environmental effects assessed by a battery of ecotoxicological tests. Leaching tests were conducted on crushed samples of PV modules, according to the Italian and European standard procedures for hazard assessment of granular waste and of sludge. The sample material was shaken for 24 hours in HDPE bottles with a Rotax 6.8 VELP overhead mixer at indoor temperature, using pure water (18 MΩ·cm resistivity) as the leaching solution. The liquid-to-solid ratio was 10 (L/S=10, i.e. 10 liters of water per kg of solid). The ecotoxicological tests were performed in the subsequent 24 hours. A battery of toxicity tests with bacteria (Vibrio fischeri), algae (Pseudokirchneriella subcapitata) and crustacea (Daphnia magna) was carried out on PV panel leachates obtained as previously described and immediately stored in the dark at 4°C until testing (within the next 24 hours). To understand the actual pollution load, a comparison with the current European and Italian benchmark limits was performed. The trend of the leachable metal amounts from panels in relation to manufacturing year was then highlighted, in order to assess the environmental sustainability of PV technology over time. The experimental results were very heterogeneous and show that photovoltaic panels could represent an environmental hazard. The amounts of some hazardous metals (Pb, Cr, Cd, Ni), for c-Si and TF, exceed the legal limits and are a clear indication of the potential environmental risk of photovoltaic panels 'as a waste' without proper management.

Keywords: photovoltaic panel, environment, ecotoxicity, metals emission

Procedia PDF Downloads 251
317 Inhibition of the Activity of Polyphenol Oxidase Enzyme Present in Annona muricata and Musa acuminata by the Experimentally Identified Natural Anti-Browning Agents

Authors: Michelle Belinda S. Weerawardana, Gobika Thiripuranathar, Priyani A. Paranagama

Abstract:

Most fresh vegetables and fruits available in retail markets undergo a physiological disorder in appearance and coloration, which discourages consumer purchase. This pronounced color reaction, called enzymatic browning, costs the food industry millions of dollars yearly and is driven by the catalytic activity of an oxidoreductase enzyme, polyphenol oxidase (PPO). The enzyme oxidizes phenolic compounds, which are abundant in fruits and vegetables, into quinones; these can react with proteins in their surroundings to generate black pigments called melanins, which are highly UV-active compounds. Annona muricata (Katu anoda) and Musa acuminata (Ash plantains) are a fruit and a vegetable widely consumed by Sri Lankans due to their high nutritional value, medicinal properties and economic importance. The objective of the present study was to evaluate and determine effective natural anti-browning inhibitors that could prevent PPO activity in the selected fruit and vegetable. Enzyme extracts from Annona muricata and Musa acuminata were prepared by homogenizing with analytical grade acetone, and the pH of each enzyme extract was maintained at 7.0 using a phosphate buffer. The inhibitor extracts were prepared using powdered ginger rhizomes and essential oil from the bark of Cinnamomum zeylanicum. Water extracts of ginger were prepared, and the essential oil from Ceylon cinnamon bark was extracted using steam distillation. Since the essential oil is not soluble in water, 0.1 µl of cinnamon bark oil was mixed with 0.1 µl of Triton X-100 emulsifier and 5.00 ml of water. The effect of each inhibitor on PPO activity was investigated using catechol (0.1 mol dm-3) as the substrate and the two enzyme extracts prepared. The dosages of the prepared cinnamon bark oil and the two ginger samples used to measure the activity were 0.0035 g/ml, 0.091 g/ml and 0.087 g/ml respectively. Measurements of the inhibitory activity were obtained at a wavelength of 525 nm using a UV-visible spectrophotometer. The results revealed that the % inhibition observed with cinnamon bark oil and ginger for Annona muricata was 51.97% and 60.90% respectively. The effects of cinnamon bark oil and ginger extract on the PPO activity of Musa acuminata were 49.51% and 48.10%. The experimental findings thus revealed that Cinnamomum zeylanicum bark oil was the more effective inhibitor of the PPO enzyme present in Musa acuminata, and ginger was more effective for the PPO enzyme present in Annona muricata. Overall, both inhibitors proved effective against the activity of the PPO enzymes present in both samples. These inhibitors can thus be corroborated as effective, natural, non-toxic anti-browning extracts which, when added to the above fruit and vegetable, will increase shelf life and consumer acceptance of the product.
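
The abstract does not spell out its inhibition formula, so the sketch below assumes the percent-inhibition calculation commonly used in PPO assays, comparing absorbance changes at 525 nm with and without the inhibitor; the absorbance values are hypothetical, chosen only to reproduce one reported figure.

```python
# Assumed percent-inhibition formula for a PPO assay: relative reduction
# of the absorbance change at 525 nm caused by the inhibitor extract.
def percent_inhibition(delta_a_control: float, delta_a_inhibited: float) -> float:
    """Percent inhibition from absorbance changes at 525 nm."""
    return 100.0 * (delta_a_control - delta_a_inhibited) / delta_a_control

# Hypothetical absorbances chosen to reproduce the reported 60.90% for
# ginger on Annona muricata PPO; the actual readings are not given.
print(f"{percent_inhibition(1.000, 0.391):.2f}%")  # -> 60.90%
```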

Keywords: anti-browning agent, enzymatic browning, inhibitory activity, polyphenol oxidase

Procedia PDF Downloads 252
316 Chemical Analysis of Particulate Matter (PM₂.₅) and Volatile Organic Compound Contaminants

Authors: S. Ebadzadsahraei, H. Kazemian

Abstract:

The main objective of this research was to measure particulate matter (PM₂.₅) and volatile organic compounds (VOCs), two classes of air pollutants, in Prince George (PG) neighborhoods in warm and cold seasons. To fulfill this objective, analytical protocols were developed for accurate sampling and measurement of the targeted air pollutants. PM₂.₅ samples were analyzed for their chemical composition (i.e., toxic trace elements) in order to assess their potential emission sources. The City of Prince George, widely known as the capital of northern British Columbia (BC), Canada, has been dealing with air pollution challenges for a long time. The city has several local industries, including pulp mills, a refinery, and a couple of asphalt plants, that are the primary contributors of industrial VOCs. This research project, the first study of its kind in the region, measured the physical and chemical properties of particulate air pollutants (PM₂.₅) in city neighborhoods and quantified the VOC content of city air samples. One of the outcomes of this project is updated data on the PM₂.₅ and VOC inventory in the selected neighborhoods. For examining the PM₂.₅ chemical composition, an elemental analysis methodology was developed to measure major trace elements including, but not limited to, mercury and lead. The toxicity of inhaled particulates depends on both their physical and chemical properties; thus, an understanding of aerosol properties is essential for the evaluation of such hazards and the treatment of respiratory and other related diseases. Mixed cellulose ester (MCE) filters were selected as suitable filters for PM₂.₅ air sampling. Chemical analyses were conducted using inductively coupled plasma mass spectrometry (ICP-MS) for elemental analysis. VOC measurement of the air samples was performed using gas chromatography with flame ionization detection (GC-FID) and gas chromatography-mass spectrometry (GC-MS), allowing quantitative measurement of VOC molecules at sub-ppb levels. A sorbent tube (Anasorb CSC, coconut charcoal; 6 x 70 mm, 2 sections, 50/100 mg sorbent, 20/40 mesh) was used for VOC air sampling, followed by solvent extraction and solid-phase microextraction (SPME) techniques to prepare samples for measurement by GC-MS/FID. Air sampling for both PM₂.₅ and VOCs was conducted in summer and winter for comparison. Average PM₂.₅ concentrations differed greatly between wildfire and ordinary days: 83.0 μg/m³ during the wildfire period versus 23.7 μg/m³ in daily samples. Higher concentrations of iron, nickel and manganese were found in all samples, and mercury was found in some samples; at sufficiently high doses, such elements can have negative health effects.

Keywords: air pollutants, chemical analysis, particulate matter (PM₂.₅), volatile organic compound, VOCs

Procedia PDF Downloads 124
315 The Influence of Argumentation Strategy on Student’s Web-Based Argumentation in Different Scientific Concepts

Authors: Xinyue Jiao, Yu-Ren Lin

Abstract:

Argumentation is an essential aspect of scientific thinking which has received wide attention in recent reforms of science education. The purpose of the present study was to explore the influence of two variables, 'argumentation strategy' and 'kind of science concept', on students' web-based argumentation. The first variable was either monological (referring to an individual's internal discourse and inner chain reasoning) or dialectical (referring to dialogue interaction between/among people). The second was either descriptive (i.e., macro-level concepts, such as phenomena that can be observed and tested directly) or theoretical (i.e., micro-level concepts, which are abstract and cannot be tested directly in nature). The study applied a quasi-experimental design in which 138 7th-grade students were invited and randomly assigned to either a monological group (N=70) or a dialectical group (N=68). An argumentation learning program called the PWAL was developed to improve their scientific argumentation abilities, such as arguing from multiple perspectives and based on scientific evidence. Two versions of the PWAL were created. In the individual version, students could propose arguments only through knowledge recall and a self-reflection process; in the collaborative version, students were allowed to construct arguments through peer communication. The PWAL involved three descriptive concept-based topics (units 1, 3 and 5) and three theoretical concept-based topics (units 2, 4 and 6). Three kinds of scaffolding were embedded in the PWAL: a) an argument template, used for constructing evidence-based arguments; b) a model of Toulmin's TAP, showing the structure and elements of a sound argument; c) a discussion block, enabling students to review what had been proposed during the argumentation. Both quantitative and qualitative data were collected and analyzed, and an analytical framework for coding students' arguments proposed in the PWAL was constructed. The results showed that the argumentation approach had a significant effect on argumentation only in theoretical topics (f(1, 136)=48.2, p < .001, η2=2.62). Post-hoc analysis showed that students in the collaborative group performed significantly better than students in the individual group (mean difference=2.27). However, there was no significant difference between the two groups regarding their argumentation in descriptive topics. Secondly, the students made significant progress in the PWAL from the earlier descriptive or theoretical topics to the later ones. The results enable us to conclude that the PWAL was effective for students' argumentation, and that peer interaction was essential for students to argue scientifically, especially on theoretical topics. The follow-up qualitative analysis showed students tended to generate arguments through critical dialogue interactions on theoretical topics, which prompted them to use more critiques and to evaluate and co-construct each other's arguments. Further explanations of the students' web-based argumentation and suggestions for the development of web-based science learning are proposed in our discussion.

Keywords: argumentation, collaborative learning, scientific concepts, web-based learning

Procedia PDF Downloads 85
314 Applying Miniaturized near Infrared Technology for Commingled and Microplastic Waste Analysis

Authors: Monika Rani, Claudio Marchesi, Stefania Federici, Laura E. Depero

Abstract:

Degradation of the aquatic environment by plastic litter, especially microplastics (MPs) - i.e., any water-insoluble solid plastic particle with its longest dimension in the range 1 µm to 1000 µm (= 1 mm) - is an unfortunate indication of the advancement of the Anthropocene age on Earth. Microplastics formed by natural weathering processes are termed secondary microplastics, while those synthesized industrially are called primary microplastics. Their presence from the highest peaks to the deepest explored points of the oceans, and their resistance to biological and chemical decay, have adversely affected the environment, especially marine life. Even though the presence of MPs in the marine environment is well reported, validated analytical techniques to sample, analyze, and quantify MPs are still at the development and testing stages. Among characterization techniques, vibrational spectroscopic techniques are widely adopted in the field of polymers, and the ongoing miniaturization of these methods is on the way to revolutionizing the plastic recycling industry. In this scenario, the capability and feasibility of miniaturized near-infrared (MicroNIR) spectroscopy combined with chemometric tools were investigated for qualitative and quantitative analysis of urban plastic waste collected from a recycling plant and of microplastic mixtures fragmented in the lab. Based on the Resin Identification Code, 250 plastic samples were used for macroplastic analysis and to set up a library of polymers. Subsequently, MicroNIR spectra were analysed through the application of multivariate modelling. Principal Component Analysis (PCA) was used as an unsupervised tool to find trends within the data. After the exploratory PCA analysis, a supervised classification tool was applied in order to distinguish the different plastic classes, and a database containing the NIR spectra of the polymers was built. For the microplastic analysis, the three most abundant polymers in plastic litter - PE, PP and PS - were mechanically fragmented in the laboratory to micron size. Distinct blends of these three microplastics were prepared in line with a designed ternary composition plot. After exploratory PCA analysis, a quantitative Partial Least Squares Regression (PLSR) model allowed prediction of the percentage of microplastics in the mixtures. From a complete dataset of 63 compositions, the PLS model was calibrated with 42 data points and used to predict the composition of the 21 unknown mixtures of the test set. The advantage of the consolidated NIR chemometric approach lies in the quick evaluation of whether a sample is macro or micro, contaminated or not, coloured or not, with no sample pre-treatment. The technique can be applied to larger sample volumes, even allows on-site evaluation, and thus satisfies the need for a high-throughput strategy.
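
A minimal chemometrics sketch mirroring the described workflow, PCA for exploration followed by PLS regression on a 42/21 calibration/test split; the spectra are synthetic placeholders, not MicroNIR data.

```python
# Sketch of the PCA + PLSR workflow described in the abstract, applied to
# synthetic stand-in spectra: mixtures of three pure-polymer signatures
# (PE/PP/PS) plus noise. Only the 42/21 split mirrors the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 63, 125           # channel count is an assumption
Y = rng.dirichlet(np.ones(3), n_samples)     # PE/PP/PS fractions, sum to 1
pure = rng.normal(size=(3, n_wavelengths))   # stand-in pure-polymer spectra
X = Y @ pure + rng.normal(0, 0.01, (n_samples, n_wavelengths))

scores = PCA(n_components=2).fit_transform(X)   # exploratory view of trends

X_cal, X_test, Y_cal, Y_test = train_test_split(
    X, Y, train_size=42, random_state=0)        # 42 calibration, 21 test
pls = PLSRegression(n_components=3).fit(X_cal, Y_cal)
print("R2 on the 21-sample test set:", pls.score(X_test, Y_test))
```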

Keywords: chemometrics, microNIR, microplastics, urban plastic waste

Procedia PDF Downloads 139
313 Human Immuno-Deficiency Virus Co-Infection with Hepatitis B Virus and Baseline Cd4+ T Cell Count among Patients Attending a Tertiary Care Hospital, Nepal

Authors: Soma Kanta Baral

Abstract:

Background: Since 1981, when the first AIDS case was reported, more than 34 million people worldwide have been infected with HIV. Almost 95 percent of people infected with HIV live in developing countries. As HBV and HIV share similar routes of transmission - sexual intercourse or parenteral drug injection - co-infection is common. Because of limited access to healthcare and HIV treatment in developing countries, HIV-infected individuals present late for care. Enumeration of the CD4+ T cell count at the time of diagnosis is used to initiate therapy in HIV-infected individuals; baseline CD4+ T cell counts show high immunological variability among patients. Methods: This prospective study was done in the serology section of the Department of Microbiology over a period of one year, from August 2012 to July 2013. A total of 13037 individuals tested for HIV were included in the study, comprising 4982 males and 8055 females. Blood samples were collected aseptically by venipuncture into clean, dry test tubes following standard operating procedures. All blood samples were screened for HIV with immunochromatography rapid kits, as described by the WHO algorithm, and confirmed by the Biokit ELISA method as per the manufacturer's guidelines. After informed consent, HIV-positive individuals were screened for HBsAg by immunochromatography rapid kits (Hepacard), with confirmation by the Biokit ELISA method as per the manufacturer's guidelines. EDTA blood samples were collected from the HIV-seropositive individuals for baseline CD4+ T cell counts, which were determined using a FACSCalibur flow cytometer (BD). Results: Among the 13037 individuals screened for HIV, 104 (0.8%) were found to be infected, comprising 69 (66.34%) males and 35 (33.65%) females. The study showed that infection was highest among housewives (28.7%), the active age group (30.76%), rural areas (56.7%) and the heterosexual route of transmission (80.9%). Among the HIV-infected individuals, the prevalence of HBV co-infection was 6 (5.7%). All co-infected individuals were married males above the age of 25 years with a heterosexual route of transmission. The baseline CD4+ T cell count of HIV-infected patients was higher (mean 283 cells/cu.mm) than that of HBV co-infected patients (mean 91 cells/cu.mm). The majority (77.2%) of HIV-infected individuals, and all co-infected individuals, presented to our center late for diagnosis and care (CD4+ T cell count < 350/cu.mm). The majority of co-infected individuals, 4 (80%), presented late with advanced AIDS stage (CD4+ count < 200/cu.mm). Conclusions: The study showed a high percentage of HIV-seropositive and co-infected individuals. The baseline CD4+ T cell count of the majority of HIV-infected individuals was low. Hence, more sustained and vigorous awareness campaigns and counseling still need to be done to promote early diagnosis and management.

Keywords: HIV/AIDS, HBsAg, co-infection, CD4+

Procedia PDF Downloads 195
312 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories

Authors: Rene Hellmuth

Abstract:

The requirements for factory planning and the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of products and production technology, as well as a VUCA (volatility, uncertainty, complexity and ambiguity) world, cause more frequent rebuilding measures within a factory. Restructuring of factories is the most common planning case today, more common than new construction, revitalization or dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was, and is, a promising concept for companies reacting to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked, and often time-consuming planning of the adaptation must take place to define and finally evaluate the relevant components to be adapted. The different requirements of the planning participants from the disciplines of factory planning (production planners, logistics planners, automation planners) and industrial construction planning (architects, civil engineers) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? Because of the high adaptation rate of factories and their buildings described above, a methodology for restructuring factories, based on the requirements engineering method from software development, is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied; explorative research is suitable when the practical usability of the research results has priority. Furthermore, it is shown how best to use a digital factory model in practice. The focus is on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application is designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and to model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of existing approaches. Time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements of those involved in the planning process for a digital factory model, in the case of restructuring within a factory, are thus first determined in a structured manner; the results are then applied and transferred to a construction-site solution based on augmented reality.

Keywords: augmented reality, digital factory model, factory planning, restructuring

Procedia PDF Downloads 115
311 Utilization of Fly Ash Amended Sewage Sludge as Sustainable Building Material

Authors: Kaling Taki, Rohit Gahlot, Manish Kumar

Abstract:

Disposal of sewage sludge (SS) is a big issue, especially in developing nations like India, where the amount of SS produced is not controlled. The present research work demonstrates the potential application of SS amended with varying percentages (0-100%) of fly ash (FA) for brick manufacturing, as an alternative for SS management. SS samples were collected from the Jaspur sewage treatment plant (Ahmedabad, India) and subjected to different preconditioning treatments: (i) atmospheric drying, (ii) pulverization, (iii) heat treatment in an oven (110°C, moisture removal) and in a muffle furnace (440°C, organic content removal). The geotechnical parameters of the SS were: liquid limit (52%), plastic limit (24%), shrinkage limit (10%), plasticity index (28%), differential free swell index (DFSI, 47%), silt (68%), clay (27%), organic content (5%), optimum moisture content (OMC, 20%), maximum dry density (MDD, 1.55 gm/cc), specific gravity (2.66), swell pressure (57 kPa) and unconfined compressive strength (UCS, 207 kPa). For FA, the liquid limit, plastic limit and specific gravity were 44%, 0% and 2.2 respectively. For brick casting, the pulverized SS sample was first heat treated in a muffle furnace at around 440℃ (5 hours) to remove organic matter. SS, FA and water were then mixed by weight at OMC. A 7×7×7 cm³ sample mold was used for casting bricks at MDD. Brick samples were first dried at room temperature for 24 hours, then in an oven at 100℃ (24 hours), and finally fired in a muffle furnace at 1000℃ (10 hours). The fired brick samples were then cured for 3 days according to the Indian Standard (IS) specification for common burnt clay building bricks (5th revision). The compressive strengths of brick samples with 0, 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100% FA were 0.45, 0.76, 1.89, 1.83, 4.02, 3.74, 3.42, 3.19, 2.87, 0.78 and 4.95 MPa, evaluated with a compression testing machine (CTM) at a stress rate of 14 MPa/min. The highest strength among the mixtures was obtained at 40% FA, i.e. 4.02 MPa, much higher than that of the pure SS brick sample. According to IS 1077: 1992, this combination gives a strength of more than 3.5 MPa and can be utilized for common building bricks. The loss in weight after firing was much higher than after the oven treatment; this might be due to degradation at temperatures higher than 100℃. The thermal conductivity of the fired brick was 0.44 Wm-1K-1, indicating better insulation properties than reported in other studies. TCLP (toxicity characteristic leaching procedure) tests of Cr, Cu, Co, Fe and Ni in raw SS gave 69, 70, 21, 39502 and 47 mg/kg respectively. The study positively concludes that SS and FA at an optimum ratio can be utilized for common building bricks, such as partition walls and other low-strength applications. The uniqueness of the work is its emphasis on utilizing FA to stabilize SS as a construction material, replacing the natural clay reported in existing studies.
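
As a quick arithmetic sketch, a CTM failure load converts to compressive strength over the 7 cm × 7 cm loading face as below; the load shown is a hypothetical value chosen to reproduce the reported 4.02 MPa.

```python
# Compressive strength = failure load / loading-face area.
# The 19.7 kN load is hypothetical, chosen to give ~4.02 MPa (40% FA mix).
face_area_mm2 = 70.0 * 70.0      # 7 cm x 7 cm face = 4900 mm^2
failure_load_n = 19_700.0        # hypothetical failure load in newtons
strength_mpa = failure_load_n / face_area_mm2  # N/mm^2 == MPa
print(f"Compressive strength: {strength_mpa:.2f} MPa")
```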

Keywords: compressive strength, curing, fly ash, sewage sludge

Procedia PDF Downloads 90
310 Preliminary Design, Production and Characterization of a Coral and Alginate Composite for Bone Engineering

Authors: Sthephanie A. Colmenares, Fabio A. Rojas, Pablo A. Arbeláez, Johann F. Osma, Diana Narvaez

Abstract:

The loss of functional tissue is a ubiquitous and expensive health care problem, with very limited treatment options for patients. The gold standard for large bone damage is cadaveric bone as an allograft with stainless steel support; however, this solution only applies to bones with simple morphologies (long bones), has a limited material supply, and presents long-term problems regarding mechanical strength, integration, differentiation and induction of native bone tissue. Therefore, the focus of this investigation is the fabrication of a scaffold with biological, physical and chemical properties similar to human bone, using a fabrication method that allows morphology manipulation. Towards this goal, a coral and alginate matrix was created using two production techniques; the coral was chosen because of its chemical composition and the alginate for its compatibility and mechanical properties. The coral-alginate scaffold was constructed with the following methodology: cleaning of the coral, its pulverization, scaffold fabrication, and finally mechanical and biological characterization. The experimental design had two factors, mill method and proportion of alginate and coral, with two and three levels respectively, using 5 replicates. The coral was cleaned with sodium hypochlorite and hydrogen peroxide in an ultrasonic bath. It was then milled with both a horizontal mill and a ball mill in order to evaluate the morphology of the particles obtained. After this, using a combination of alginate, coral powder and water as a binder, scaffolds of 1 cm³ were printed with a Spectrum Z510 3D printer. This resulted in solid cubes that were resistant to small compression stresses. Then, using an ESQUIM DP-143 silicone mold, the constructs used for the mechanical and biological assays were made. An INSTRON 2267® was used for the compression tests; the density and porosity were calculated with an analytical balance, and the biological tests were performed using cell cultures of VERO fibroblasts, with scanning electron microscopy (SEM) as the visualization tool. The Young's moduli were dependent on the pulverization method, the proportion of coral and alginate, and the interaction between these factors. The maximum value was 5.4 MPa for the 50/50 proportion of alginate and horizontally milled coral. The biological assay showed more extracellular matrix in the scaffolds consisting of more alginate and less coral. The density and porosity were proportional to the amount of coral in the powder mix. These results showed that this composite has potential as a biomaterial, but its behavior is elastic with a small Young's modulus, which leads to the conclusion that the application may not be for long bones but for tissues similar to cartilage.

Keywords: alginate, biomaterial, bone engineering, coral, Porites asteroids, SEM

Procedia PDF Downloads 240
309 Intercultural Strategies of Chinese Composers in the Organizational Structure of Their Works

Authors: Bingqing Chen

Abstract:

The Opium War unlocked the gate of China. Since then, modern Western culture has been imported forcefully and spread throughout this Asian country, and the former monologue of traditional Chinese culture has been replaced by the hustle and bustle of multiculturalism. In the field of music, starting from school music, China - a country without a prior concept of professional composition - was deeply influenced by Western culture and entered the era of professional music composition. Recognizing the importance of national culture, a group of insightful artists began to try to add 'China' to musical composition. However, due to the special historical origins of Chinese professional musical composition and three periods of cultural nihilism in China, professional musical composition at this time failed to engage the deep language structure within Chinese traditional culture, and instead regarded Chinese traditional music merely as a 'melody material library.' Such cross-cultural composition still took Western music as its 'norm,' while China's own music culture existed only as a contrasting sound against Western music. However, after reading scores extensively, watching video performances, and interviewing several active composers, we found that, at least in the past 30 years, China has created works that can be called intercultural music. In such music, composers put Chinese and Western, traditional and modern, in an almost equal position for dialogue, based on a deep understanding of and respect for the two cultures. This kind of music connects two musical worlds, links the two cultural and ideological worlds behind them, and lets them communicate and grow together. This paper chose the works of three composers with different educational backgrounds and attends to how composers create dialogue at the level of the organizational structure of their works. Based on the strategies the composers adopt in structuring their works, this paper expounds how their musical procedures express interculturality in terms of overall sound effect and cultural symbols. By actively participating in this intercultural practice, composers resort to various musical and extra-musical procedures to arrive at so-called 'innovation within tradition.' Through dialogue, they activate the space of creative thinking and explore the potential contained in culture. This interdisciplinary research promotes a rethinking of the possibility of innovation in contemporary Chinese intercultural music composition, spanning the fields of sound studies, dialogue theory, cultural research, and music theory. Recently, China has been calling for the active promotion of 'the construction of Chinese music canonization,' expecting a particular musical style to emerge that shows national-cultural identity. In the era of globalization, a brand-new Chinese music style may form through intercultural composition, but this is a question of talent, and the key lies in how composers do it. There is no recipe for the formation of a Chinese music style, only composers constantly trying to solve problems in their works.

Keywords: dialogism, intercultural music, national-cultural identity, organization/structure, sound

Procedia PDF Downloads 95
308 Religiosity and Involvement in Purchasing Convenience Foods: Using Two-Step Cluster Analysis to Identify Heterogenous Muslim Consumers in the UK

Authors: Aisha Ijaz

Abstract:

The paper focuses on the impact of Muslim religiosity on convenience food purchases and the involvement experienced in a non-Muslim culture. There is a scarcity of research on the purchasing patterns of Muslim diaspora communities residing in risk societies, particularly in contexts where there is an increasing inclination toward industrialized food items alongside a renewed interest in the concept of natural foods. The United Kingdom serves as an appropriate setting for this study due to the country's growing Muslim population, paralleled by the expanding halal food market. A multi-dimensional framework is proposed, testing five forms of involvement: purchase decision involvement, product involvement, behavioural involvement, intrinsic risk and extrinsic risk. Quantitative cross-sectional consumer data were collected through a face-to-face survey of 141 Muslims during the summer of 2020 in Liverpool, in the Northwest of England. A proportion formula was utilised, and the population of interest was stratified by gender and age before recruitment took place through local mosques and community centers. Six input variables were used (intrinsic religiosity and the involvement dimensions), dividing the sample into 4 clusters using the Two-Step Cluster Analysis procedure in SPSS. Nuanced variations were observed in the type of involvement experienced by religiosity group, which influences behaviour when purchasing convenience food. Four distinct market segments were identified: highly religious ego-involving (39.7%), less religious active (26.2%), highly religious unaware (16.3%), and less religious concerned (17.7%). These segments differ significantly with respect to their involvement, behavioural variables (place of purchase and information sources used), socio-cultural characteristics (acculturation and social class), and individual characteristics. Choosing the appropriate convenience food is centrally related to the value system of highly religious ego-involving first-generation Muslims, which explains their preference for shopping at ethnic food stores. Less religious active consumers are older and highly alert in information processing to make optimal food choices, relying heavily on product label sources. Highly religious unaware Muslims are less acculturated to the UK diet and tend to rely on digital and expert advice sources. The less religious concerned segment, typified by younger age and third generation, is engaged with the purchase process because its members worry about making unsuitable food choices. Research implications are outlined, and potential avenues for further exploration are identified.
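
SPSS TwoStep combines a BIRCH-style pre-clustering with hierarchical agglomeration; the sketch below reproduces that two-step structure with scikit-learn on simulated data, and is an analogue rather than the SPSS procedure itself.

```python
# Hedged sketch of two-step-style clustering outside SPSS: BIRCH
# pre-clustering followed by hierarchical agglomeration into 4 segments,
# similar in spirit to the SPSS TwoStep procedure used in the study.
# The 141 x 6 data matrix below is simulated, not the survey data.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, Birch
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(141, 6))        # 141 respondents x 6 input variables
X_std = StandardScaler().fit_transform(X)

two_step = Birch(threshold=0.5,
                 n_clusters=AgglomerativeClustering(n_clusters=4))
labels = two_step.fit_predict(X_std)
sizes = np.bincount(labels)
print("Segment shares (%):", np.round(sizes / sizes.sum() * 100, 1))
```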

Keywords: consumer behaviour, consumption, convenience food, religion, muslims, UK

Procedia PDF Downloads 34
307 Exploring Artistic Creation and Autoethnography in the Spatial Context of Geography

Authors: Sinem Tas

Abstract:

This research paper studies the perspective of personal experience in relation to spatial dynamics and artistic outcomes within the realm of cultural identity. The article serves as a partial analysis within a broader PhD investigation that focuses on the cultural dynamics and political structures behind cultural identity through an autoethnography of narrative, while presenting its correlation with artistic creation in the context of space and people. Focusing on the artistic/creative practice project AUTRUI, the primary goal is to analyse and understand the influence of personal experiences and culturally constructed identity as an artist on the compositional modality of the final image, considering self-reflective experience. The works of Joyce Davidson and Christine Milligan - scholars who emphasise the importance of emotion and spatial experience in geographical studies - contribute to this research, as they highlight the significance of emotion across various spatial scales in Embodying Emotion Sensing Space: Introducing Emotional Geographies (2004). Their perspective suggests that understanding emotions within different spatial contexts is crucial for comprehending human experiences and interactions with space. Incorporating the insights of scholars like Yi-Fu Tuan, particularly his seminal work Space and Place: The Perspective of Experience (1979), is important for creating an in-depth frame of geographical experience; Tuan's humanistic perspective on space and place provides a valuable theoretical framework for understanding the interplay between personal experiences and spatial contexts. A substantial contextualisation of the geopolitics of Turkey - and its implications for national identity and cohesion - is addressed by outlining the political and geographical frame as a methodological strategy for understanding the dynamics behind this research. Besides bibliographical reading, the methods used to study this relation are participatory observation, memory work along with memoir analysis, personal interviews, and discussion of photographs and news. The utilisation of the self as data requires analysing written sources with personal engagement: delving into written sources such as personal communications, diaries and memoirs gives the research a firsthand perspective, enriching its analytical depth. Furthermore, the examination of photography and news articles serves as a valuable means of contextualising experiences from a journalist's background within specific geographical settings. The inclusion of interviews with close family members provides firsthand perspectives and intimate insights rooted in shared experiences within similar geographical contexts, offering complementary and diversified viewpoints that enhance the comprehensiveness of the investigation.

Keywords: art, autoethnography, place and space, Turkey

Procedia PDF Downloads 32
306 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The integration of urban design and power network planning has been gaining momentum in recent years. Integrating renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with urban building energy modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for establishing solar power in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess solar power and inject it into the urban network during peak periods. The simulations and analyses were performed in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder. Additionally, 3D models were made in Revit; they are a key component of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce strain on the power grid. This study also highlights the influence of ancient Persian architecture on Iran's urban planning system, the potential for reducing pollutants in building construction, advances in eco-city planning and development, and emerging practices and strategies for integrating sustainability goals.
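
A toy dispatch sketch for one connection point follows: excess PV charges the battery, which is discharged toward the evening peak. The 500 kW PV and 1 kWh BES figures come from the abstract; the hourly load and PV profiles are invented for illustration.

```python
# Toy peak-shaving dispatch for one urban connection point: surplus PV
# charges the BES; deficits are met first from the BES, then the grid.
# PV capacity and BES size come from the abstract; profiles are invented.
PV_KW = 500.0    # installed PV capacity per connection point (abstract)
BES_KWH = 1.0    # battery energy storage per connection point (abstract)

load_kw = [180, 160, 150, 150, 170, 220, 300, 380, 420, 430, 420, 410,
           400, 400, 410, 430, 470, 520, 560, 540, 480, 380, 280, 210]
pv_kw = [0, 0, 0, 0, 0, 20, 90, 200, 320, 420, 480, 500,
         490, 440, 350, 230, 110, 30, 0, 0, 0, 0, 0, 0]

soc = 0.0          # battery state of charge in kWh
peak_import = 0.0
for load, pv in zip(load_kw, pv_kw):
    surplus = pv - load
    if surplus > 0:
        soc += min(surplus, BES_KWH - soc)   # charge from excess PV (1 h step)
        grid = 0.0
    else:
        discharge = min(-surplus, soc)       # cover part of the deficit
        soc -= discharge
        grid = -surplus - discharge          # remainder drawn from the grid
    peak_import = max(peak_import, grid)
print(f"Peak grid import with BES: {peak_import:.0f} kW")
```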

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 55
305 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, ultimately, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, complete CT to ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT to ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal travel of the loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the CT image quality used for radiation treatment planning influences the delineation of a tumor, the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
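
The numeric tolerances quoted above lend themselves to simple scripted checks. The sketch below is a hypothetical illustration, not the institution's actual QC software: the baseline and measured values are invented, and only the limits (+/- 5 HU, +/- 10 HU, 5%, 20%) are taken from the abstract.

```python
# Hypothetical daily QC check against commissioning baselines, encoding the
# tolerances quoted above. All numeric inputs below are invented examples.

BASELINE = {"ct_number_water_hu": 0.0, "noise_sd_hu": 5.0}

def check_ct_number(measured_hu, baseline_hu):
    # CT number accuracy: within +/- 5 HU of the commissioning value
    return abs(measured_hu - baseline_hu) <= 5.0

def check_uniformity(roi_values_hu, centre_hu):
    # Field uniformity: selected ROIs within +/- 10 HU of the centre ROI
    return all(abs(v - centre_hu) <= 10.0 for v in roi_values_hu)

def check_noise(measured_sd, baseline_sd):
    # Image noise: within 20% of the baseline value
    return abs(measured_sd - baseline_sd) / baseline_sd <= 0.20

def check_ct_to_ed(measured_curve, baseline_curve):
    # CT-to-ED conversion: each point within 5% of the commissioning curve
    return all(abs(m - b) / abs(b) <= 0.05
               for m, b in zip(measured_curve, baseline_curve) if b != 0)

print(check_ct_number(3.2, BASELINE["ct_number_water_hu"]))   # True
print(check_uniformity([4.0, -6.0, 8.5], centre_hu=0.0))      # True
print(check_noise(6.3, BASELINE["noise_sd_hu"]))              # False (26% off)
```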

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 510
304 Hampering The 'Right to Know': Consequences of the Excessive Interpretation of the Notion of Exemption from the Right to Information

Authors: Tomasz Lewinski

Abstract:

The right to know is gradually becoming recognised as an increasing number of states adopt national legislation regarding access to state-held information. Laws differ from each other in the scope of the right to information (hereinafter: RTI). In all RTI regimes, there are exceptions to the general notion of the right. States' authorities too often use exceptions to justify refusals of requests for state-held information. This paper sets out how states hamper RTI based on the notion of exception and by not providing an effective procedure that could redress unlawful denials. The paper draws on two selected examples of RTI incorporation into national legal regimes: the United Kingdom and South Africa. It succinctly outlines the international standard given in Article 19 of the International Covenant on Civil and Political Rights (hereinafter: ICCPR) and its influence on RTI in the selected countries. As background to further analysis, it briefly presents the Human Rights Committee's jurisprudence and the standards articulated by successive Special Rapporteurs on freedom of opinion and expression. Subsequently, it presents a brief comparison of these standards with the regional standards, namely the African Charter on Human and Peoples' Rights and the European Convention on Human Rights. It critically discusses the regimes of exceptions in the RTI legislation of the respective national laws, showing how excessive these regimes are and what implications they have for transparency in general. A further objective is to classify the exceptions enumerated in the legislation of the selected states in relation to the exceptions provided in Article 19 of the ICCPR. Based on this division of exceptions by their nature, the paper compares both regimes of exceptions related to the principle of national security; that is, it compares the jurisprudence of domestic courts and reviews the practices that states' authorities apply to RTI requests. The paper evaluates the remedies available in the legislation, including the length and costs of the subsequent proceedings. This provides a general assessment of the given mechanisms and presents the potential risks of their ineffectiveness. The paper relies on an examination of the national legislation, commentary by credible non-governmental organisations (e.g., The Public's Right to Know: Principles on Freedom of Information Legislation by Article 19, and The Tshwane Principles on National Security and the Right to Information) and academics, and also on research into relevant judgements delivered by domestic and international courts. The conclusion assesses whether the selected countries' legislation is in line with international law and trends, and whether the jurisprudence of the regional courts provides appropriate benchmarks for national courts to address RTI issues effectively. Furthermore, it identifies the largest disadvantages of the current legislation and the outcomes these lead to in domestic courts' jurisprudence. In the end, it provides recommendations and policy arguments for states to improve transparency and to support local organisations in their endeavours to establish more transparent states and societies.

Keywords: access to information, freedom of information, national security, right to know, transparency

Procedia PDF Downloads 194
303 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder

Authors: Kseniya Gladun

Abstract:

Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, producing pleasant or unpleasant impressions. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) during 20 min using an EEG amplifier "Encephalan" (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10-20 System. The electrodes placed on the left and right mastoids served as joint references under unipolar montage. EEG was registered from 19 sites: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched by 4 types of tactile stimuli on the left wrist. Stimuli were presented at a velocity of about 3-5 cm per second. The stimulus materials and procedure were chosen for being the most "pleasant," "rough," "prickly" and "recognizable". The types of tactile stimulation were: a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and a cognitive tactile stimulation consisting of letters traced by finger, mostly the patient's name ("recognizable"). To designate stimulus onset and offset, we marked the moments when the touch began and ended; the stimulation was manual, and synchronization was not precise enough for event-related measures. EEG epochs were cleaned from eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (Mathwork Inc.). Muscle artifacts were cut out by manual data inspection. The response to tactile stimuli was significantly different between the children with ASD and the healthy children, and also depended on the type of tactile stimulus and the severity of ASD. The amplitude of the alpha rhythm increased in the parietal region in response only to the pleasant stimulus; for the other stimulus types ("rough," "prickly", "recognizable") no amplitude difference was observed. Correlation dimension D2 was higher in healthy children compared to children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform changes in the frequency of the theta rhythm were found only for rough tactile stimulation, compared with healthy participants, in the right parietal area. Children with autism spectrum disorder and healthy children thus responded to tactile stimulation differently, with specific frequency distributions of the alpha and theta bands in the right parietal area. Our data support the hypothesis that rsEEG may serve as a sensitive index of altered neural activity caused by ASD. Children with autism have difficulty distinguishing emotional stimuli ("pleasant," "rough," "prickly" and "recognizable").
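
The Hilbert-transform step can be sketched as follows. The authors' pipeline ran in EEGLAB under MATLAB; this is a minimal Python analogue, with a synthetic signal standing in for a right-parietal channel, showing how a theta-band (4-8 Hz, an assumed band definition) instantaneous frequency would be extracted.

```python
# Minimal sketch of band-limited Hilbert analysis (theta, 4-8 Hz), analogous
# in spirit to the analysis described above; the signal here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                       # sampling rate used in the study (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)  # stand-in channel

# Band-pass filter to the theta band
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, eeg)

# Analytic signal: instantaneous amplitude and frequency
analytic = hilbert(theta)
amplitude = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz

print(f"mean theta amplitude: {amplitude.mean():.3f}")
print(f"mean instantaneous frequency: {inst_freq.mean():.2f} Hz")
```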

Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography

Procedia PDF Downloads 231
302 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, the Catchment Baseline Survey or the Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system, and understanding them is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms designed to effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is specific to the river studied. The variables used in the classification are specific stream power and mean grain size; a discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with high naturalness; each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was produced by analyzing 122 control points (sites) distributed over eight river basins. In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool built on the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
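
The per-type discriminant equations can be illustrated with a short sketch. The sample values and type labels below are invented placeholders, not the Galician control-point data; the point is only the mechanics of deriving a discriminant rule from specific stream power and mean grain size.

```python
# Sketch of a discriminant classification on the two variables named above;
# the sample values and geomorphological type labels are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Columns: specific stream power (W/m2), mean grain size (mm)
X = np.array([[15.0, 2.0], [22.0, 4.5], [80.0, 45.0], [95.0, 60.0],
              [300.0, 120.0], [260.0, 150.0]])
y = np.array(["meandering", "meandering", "wandering", "wandering",
              "braided", "braided"])     # hypothetical geomorphological types

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Each class gets its own discriminant (decision) function, mirroring the
# per-type discriminant equations described in the abstract.
site = np.array([[110.0, 70.0]])         # a new control point
print(lda.predict(site))                  # e.g. ['wandering']
print(lda.decision_function(site))        # one score per geomorphological type
```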

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 352
301 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the data regarding their applicability, validity, and reliability remain sparse, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, Cinahl, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English studies dating back to the year 2000 were included. Two blind independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low risk of bias and three as high risk of bias; only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P=0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implications in evidence-based practice.
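
For readers unfamiliar with the I² statistic quoted above, it measures the proportion of variation across studies due to heterogeneity rather than chance and can be computed from Cochran's Q as I² = max(0, (Q - df)/Q) x 100. A worked illustration with invented study effects (not the review's data) follows:

```python
# Worked illustration of Cochran's Q and I^2 with invented study effects;
# I^2 = max(0, (Q - df) / Q) * 100.
import numpy as np

effects = np.array([0.66, 0.61, 0.70])      # hypothetical per-study estimates
variances = np.array([0.004, 0.003, 0.006]) # hypothetical within-study variances

weights = 1.0 / variances
pooled = np.sum(weights * effects) / np.sum(weights)   # fixed-effect pooled estimate
Q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
df = effects.size - 1
I2 = max(0.0, (Q - df) / Q) * 100.0

print(f"pooled estimate = {pooled:.3f}, Q = {Q:.3f}, I^2 = {I2:.1f}%")
```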

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 157
300 Recycling Service Strategy by Considering Demand-Supply Interaction

Authors: Hui-Chieh Li

Abstract:

The circular economy promotes greater resource productivity and avoids pollution through greater recycling and re-use, bringing benefits for both the environment and the economy. The concept contrasts with a linear economy, which follows a 'take, make, dispose' model of production. A well-designed reverse logistics service strategy can enhance users' willingness to recycle and reduce the related logistics cost as well as carbon emissions. Moreover, recycling brings manufacturers the most advantages, as it targets components for closed-loop reuse, essentially converting materials and components from worn-out products into inputs for new ones at the right time and place. This study considers demand-supply interaction, time-dependent recycle demand, and the time-dependent surplus value of the recycled product, and constructs models of the recycle service strategy for the recyclable waste collector, who is responsible for collecting waste products for the manufacturer. A crucial factor in optimizing a recycle service strategy is consumer demand; the study considers the relationships between consumer demand for recycling and product characteristics, surplus value, and user behavior. The proposed recycle service strategy differs significantly from the conventional uniform service strategy: periods with considerable demand and large surplus product value call for frequent, short service cycles. The study explores how to determine a recycle service strategy, in terms of service cycle frequency and duration and the vehicle type for each service cycle, by considering the surplus value of the recycled product, time-dependent demand, transportation economies, and demand-supply interaction. It also examines the impact of the utilization rate on cost and profit for different vehicle sizes. The study applies a binary logit model, an analytical model, and mathematical programming methods to the problem, attempting to minimize the total logistics cost of the recycler and maximize the recycling benefits of the manufacturer during the study period. The constant demand assumption is relaxed to examine how the service strategy affects consumer demand for waste recycling. The results not only help in understanding how user demand for the recycle service and product surplus value affect the logistics cost and the manufacturer's benefits, but also provide guidance, such as award bonuses and carbon emission regulations, for the government.
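
The binary logit component can be sketched as a logistic probability of a user choosing to recycle, driven by service attributes and product surplus value. The utility specification and coefficients below are illustrative assumptions, not values estimated in the study:

```python
# Minimal binary logit sketch: probability of a user recycling as a function
# of service attributes. Coefficients are illustrative, not estimated values.
import math

def recycle_probability(surplus_value, service_frequency, access_distance,
                        beta=(-1.0, 0.8, 0.6, -0.5)):
    """P(recycle) = 1 / (1 + exp(-U)) with a linear utility U."""
    b0, b_value, b_freq, b_dist = beta
    utility = (b0 + b_value * surplus_value
                  + b_freq * service_frequency
                  + b_dist * access_distance)
    return 1.0 / (1.0 + math.exp(-utility))

# Example: high surplus value and frequent service raise recycling demand
print(recycle_probability(surplus_value=2.0, service_frequency=3.0, access_distance=1.0))
print(recycle_probability(surplus_value=0.2, service_frequency=1.0, access_distance=2.0))
```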

Keywords: circular economy, consumer demand, product surplus value, recycle service strategy

Procedia PDF Downloads 376
299 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation when a transrectal ultrasound (TRUS) guided biopsy is performed. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the existing models do not precisely describe the anatomical behavior of the pelvic organs, mainly of the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only roughly estimate the lesion locations; consequently, multiple biopsy samples are required to target one single lesion. In this study, the whole pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulation results. An isotropic hyperelastic approach (the Signorini model) was used for all the soft tissues except the vesical muscles, which are assumed to have a linear elastic behavior due to the lack of experimental data to determine the constants involved in hyperelastic models. The tissue and organ geometries of the 3D meshes are taken from the existing literature, and the biomechanical parameters were obtained from the different testing techniques described in the literature. The acquired parametric values for uniaxial stress/strain data are used in the Signorini model to simulate the anatomical behavior of the pelvis model. Five mesh nodes, representing small prostate lesions, are selected prior to biopsy, and each lesion's final position is tracked when a TRUS probe force of 30 N is applied to the inner rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall deformation of the pelvic organs when a TRUS-guided biopsy is induced was demonstrated. The deformation of the prostate and the displacement of the neoplasms showed that assigning appropriate material properties to the organs parametrically alters the resulting lesion migration; the distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared and analyzed against our previous study, in which we used linear elastic properties for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are compared visually with our preliminary study.
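
For reference, the Signorini strain-energy density is commonly written as a second-order polynomial in the strain invariants; the isochoric form below is a standard statement of the model (the volumetric term and the exact invariant convention may differ between implementations such as Code_Aster):

```latex
% Signorini hyperelastic strain-energy density (isochoric part);
% C10, C01, C20 are material constants fitted to uniaxial stress/strain data,
% and \bar{I}_1, \bar{I}_2 are invariants of the isochoric Cauchy-Green tensor.
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + C_{20}\,(\bar{I}_1 - 3)^2
```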

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 70
298 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have shown that development often struggles to bridge the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach that mediates between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter, OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 129
297 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, calls for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
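
The prediction step can be sketched with an off-the-shelf multi-label pipeline: test-step text as input, sets of automation components as labels, and subset accuracy as the evaluation criterion. The tiny corpus and component names below are invented; the paper's actual features and model are not specified in this abstract.

```python
# Sketch of multi-label prediction of automation components from test-step
# text. The corpus and component names are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score  # on indicator labels = subset accuracy
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

steps = ["switch ignition on", "send CAN message to ECU",
         "check warning lamp is lit", "switch ignition on and check lamp"]
components = [{"PowerCtrl"}, {"CanBus"}, {"LampCheck"}, {"PowerCtrl", "LampCheck"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)           # binary indicator matrix

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(steps, Y)

pred = model.predict(["switch ignition on"])
print(mlb.inverse_transform(pred))          # e.g. [('PowerCtrl',)]
# Subset accuracy: a prediction counts only if the full label set matches.
print(accuracy_score(Y, model.predict(steps)))
```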

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 109
296 The Involvement of the Homing Receptors CCR7 and CD62L in the Pathogenesis of Graft-Versus-Host Disease

Authors: Federico Herrera, Valle Gomez García de Soria, Itxaso Portero Sainz, Carlos Fernández Arandojo, Mercedes Royg, Ana Marcos Jimenez, Anna Kreutzman, Cecilia MuñozCalleja

Abstract:

Introduction: Graft-versus-host disease (GVHD) still remains the major complication associated with allogeneic stem cell transplantation (SCT). The pathogenesis involves migration of donor naïve T-cells into the recipient's secondary lymphoid organs. Two molecules are important in this process: CD62L and CCR7, which are characteristically expressed on naïve/central memory T-cells. With this background, we aimed to study the influence of CCR7 and CD62L on donor lymphocytes in the development and severity of GVHD. Material and methods: This single-centre study included 98 donor-recipient pairs. Samples were collected prospectively from the apheresis product and phenotyped by flow cytometry. CCR7 and CD62L expression on CD4+ and CD8+ T-cells was compared between patients who developed acute (n=40) or chronic GVHD (n=33) and those who did not (n=38). Results: The patients who developed acute GVHD were transplanted with a higher percentage of CCR7+CD4+ T-cells (p=0.05) compared to the no-GVHD group. These results were confirmed when the patients were stratified by disease severity: the more severe the disease, the higher the percentage of CCR7+CD4+ T-cells. Conversely, chronic GVHD patients received a higher percentage of CCR7+CD8+ T-cells (p=0.02) than those who did not develop the complication; these data were also confirmed when patients were subdivided by disease severity. A multivariable analysis confirmed that the percentage of CCR7+CD4+ T-cells is a predictive factor for acute GVHD, whereas the percentage of CCR7+CD8+ T-cells is a predictive factor for chronic GVHD. In vitro functional assays (migration and activation assays) supported the idea that CCR7+ T-cells are involved in the development of GVHD. As low levels of CD62L expression were detected in all apheresis products, we tested the hypothesis that CD62L is shed during the apheresis procedure. Comparing CD62L surface levels on T-cells from the same donor immediately before collection of the apheresis product and in the final apheresis product, we found that this process down-regulated CD62L on both CD4+ and CD8+ T-cells (p=0.008). Interestingly, when CD62L levels were analysed on days 30 or 60 after engraftment, they recovered to baseline (p=0.008). However, when the relation between CD62L expression and the development of GVHD was investigated in recipient samples after engraftment, no differences were observed between patients with GVHD and those who did not develop the disease. Discussion: Our prospective study indicates that the CCR7+ T-cells from the donor, which include naïve and central memory T-cells, contain the alloreactive cells with a high ability to mediate GVHD (in terms of both migration and activation). We therefore suggest that the proportion and functional properties of CCR7+CD4+ and CCR7+CD8+ T-cells in the apheresis product could act as predictive biomarkers of acute and chronic GVHD, respectively. Importantly, our study shows that CD62L is lost during apheresis and is therefore not a reliable biomarker for the development of GVHD.

Keywords: CCR7, CD62L, GVHD, SCT

Procedia PDF Downloads 266
295 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts

Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira

Abstract:

In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has become the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute, using vehicles with the ability to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within large cities and between cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. In this scenario, many companies have developed their own Vertical Take-Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, finding a feasible way to handle the use of batteries rather than conventional petroleum-based fuels remains a challenge: batteries are heavy and have an energy density still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic-algorithm optimization, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases. For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are imposed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10%, varying the initial airspeed, altitude, flight path angle, and attitude.
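
The genetic-algorithm layer can be shown in miniature: a candidate is a discretized control schedule, fitness is the electric energy consumed along the resulting path, and constraint violations are penalized. The quadratic surrogate below merely stands in for the tilt-rotor dynamic model, which is far more involved; the population size, mutation rate, and touchdown constraint are invented choices.

```python
# Toy genetic algorithm minimizing energy over a discretized control schedule.
# The quadratic 'energy' is a stand-in for the tilt-rotor dynamic model.
import random

HORIZON = 20            # number of control intervals on the landing path
POP, GENS = 60, 200

def energy(schedule):
    # Hypothetical surrogate: penalize high thrust and aggressive changes
    e = sum(u * u for u in schedule)
    e += 5.0 * sum((a - b) ** 2 for a, b in zip(schedule, schedule[1:]))
    if schedule[-1] > 0.2:          # comfort/safety constraint near touchdown
        e += 100.0
    return e

def mutate(s, rate=0.1):
    return [min(1.0, max(0.0, u + random.gauss(0, 0.05))) if random.random() < rate
            else u for u in s]

def crossover(a, b):
    cut = random.randrange(1, HORIZON)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(HORIZON)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=energy)
    elite = pop[:POP // 4]                       # keep the best quarter
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

print(f"best surrogate energy: {energy(min(pop, key=energy)):.2f}")
```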

Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design

Procedia PDF Downloads 99
294 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”

Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani

Abstract:

The usual methods of measuring regional inequalities cannot reflect internal changes in the country in terms of the displacement of regions between different development groups, and standard inequality indicators are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the dynamics of the urban inequality transition in the country during the period 2006-2016 using the CIRD multidimensional index and the stochastic kernel density method. It first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital and public facilities, and develops a two-stage Principal Component Analysis methodology to create a composite index of inequality. Then, in the second stage, using a nonparametric analytical approach to internal distribution dynamics and a stochastic kernel density method, the convergence hypothesis of the CIRD index for the Iranian province centers is tested, and long-run equilibrium is then shown on the basis of the ergodic density. Also at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in both 2006 and 2016, the highest level of development is found in Tehran, while Zahedan is at the lowest level of development. The results show that the central cities of the country are at the highest level of development, owing to the effects of Tehran's knowledge spillover, while the country's peripheral cities are at the lowest level; the main reason for this may be the lack of access to markets in the border provinces. Based on the results of the second stage, which examines the dynamics of regional inequality transmission in the country during 2006-2016, the distribution in the first year (2006) is not multimodal: according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, and the rest of the distribution on the right lies above -0.1. In the kernel distribution, a convergence process is observed and the graph points to a single peak; there is a small peak at about 3, but the main peak lies at about -0.6. In the final year (2016), a multimodal pattern emerges: there is no mobility in the lower-level groups, but at the higher level the CIRD index of about 45% of the provinces clusters at about -0.4. This year clearly shows a twin-peak density pattern, which indicates that the cities tend to separate into groups in terms of development. Also, according to the distribution dynamics results, the province centers of Iran follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016, at low and moderate levels of the inequality index as well as in the development index; the country thus diverged during the years 2006 to 2016.
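
The kernel-density comparison underlying the distribution-dynamics analysis can be sketched as follows; a shift from one mode to two is the twin-peak pattern referred to above. The CIRD index samples below are synthetic placeholders, not the study's data.

```python
# Sketch of the kernel-density comparison used in distribution-dynamics
# analysis; the CIRD index samples below are synthetic, not the study's data.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
cird_2006 = rng.normal(-0.6, 0.4, 31)                      # single-peak shape
cird_2016 = np.concatenate([rng.normal(-0.8, 0.2, 17),     # twin-peak shape
                            rng.normal(-0.1, 0.2, 14)])

grid = np.linspace(-2.0, 1.0, 200)
d06 = gaussian_kde(cird_2006)(grid)
d16 = gaussian_kde(cird_2016)(grid)

# Count local maxima as a crude mode check (one peak vs two)
def modes(density):
    return int(np.sum((density[1:-1] > density[:-2]) & (density[1:-1] > density[2:])))

print("modes 2006:", modes(d06))
print("modes 2016:", modes(d16))
```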

Keywords: urban disparity, CIRD index, convergence, distribution dynamics, random kernel density

Procedia PDF Downloads 110