Search results for: split air conditioner

89 Design-Based Elements to Sustain Participant Activity in Massive Open Online Courses: A Case Study

Authors: C. Zimmermann, E. Lackner, M. Ebner

Abstract:

Massive Open Online Courses (MOOCs) are increasingly popular learning hubs that boast considerable participant numbers, innovative technical features, and a multitude of instructional resources. Still, there is strong evidence that almost all MOOCs suffer from a declining frequency of participant activity and fairly low completion rates. In this paper, we would like to share the lessons learned in implementing several design patterns that have been suggested in order to foster participant activity. Our conclusions are based on experiences with the ‘Dr. Internet’ MOOC, which was created as an xMOOC to raise awareness for a more critical approach to online health information: participants had to diagnose medical case studies. There is a growing body of recommendations (based on Learning Analytics results from earlier xMOOCs) as to how the decline in participant activity can be alleviated. One promising focus in this regard is instructional design patterns, since they have a tremendous influence on the learner’s motivation, which in turn is a crucial trigger of learning processes. Ever since medieval storytelling, micro-learning units and comprehensible narrative structures have been used to keep an audience following the narration. Hence, MOOC participants are not likely to abandon a course or information channel when their curiosity is kept at a continuously high level. Critical aspects that warrant consideration in this regard include shorter course duration, a narrative structure with suspense peaks (according to the ‘storytelling’ approach), and a course schedule that is diversified and stimulating, yet easy to follow. All of these criteria were observed in the design of the Dr. Internet MOOC: 1) the standard eight-week course duration was shortened to six weeks, 2) all six case studies had a special quiz format and a corresponding resolution video which was made available in the subsequent week, 3) two out of six case studies were split into serial video sequences presented over the span of two weeks, and 4) the videos were generally scheduled in a less predictable sequence. However, the statistical results from the first run of the MOOC do not indicate any strong influence on the retention rate, so we conclude with some suggestions as to why this might be and what aspects need further consideration.

Keywords: case study, Dr. Internet, experience, MOOCs, design patterns

Procedia PDF Downloads 237
88 Evaluation of Buckwheat Genotypes to Different Planting Geometries and Fertility Levels in Northern Transition Zone of Karnataka

Authors: U. K. Hulihalli, Shantveerayya

Abstract:

Buckwheat (Fagopyrum esculentum Moench) is an annual crop belonging to the family Polygonaceae. The cultivated buckwheat species are notable for their exceptional nutritive values. It is an important source of carbohydrates, fibre, and macro- and microelements such as K, Ca, Mg, Na and Mn, Zn, Se, and Cu. It also contains rutin, flavonoids, riboflavin, pyridoxine and many amino acids which have beneficial effects on human health, including lowering both blood lipid and sugar levels. Rutin, quercetin and some other polyphenols have potent anticarcinogenic activity against colon and other cancers. Buckwheat has significant nutritive value and plenty of uses. Cultivation of buckwheat in the southern part of India is very meager. Hence, a study was planned with the objective of evaluating the performance of buckwheat genotypes under different planting geometries and fertility levels. The field experiment was conducted at the Main Agriculture Research Station, University of Agricultural Sciences, Dharwad, India, during the 2017 Kharif season. The experiment was laid out in a split-plot design with three replications, having three planting geometries as main plots, two genotypes as sub-plots and three fertility levels as sub-sub-plot treatments. The soil of the experimental site was a Vertisol. Standard procedures were followed to record the observations. The planting geometry of 30 × 10 cm recorded significantly higher seed yield (893 kg ha⁻¹), stover yield (1507 kg ha⁻¹), clusters plant⁻¹ (7.4), seeds cluster⁻¹ (7.9) and 1000-seed weight (26.1 g) as compared to the 40 × 10 cm and 20 × 10 cm planting geometries. Between the genotypes, significantly higher seed yield (943 kg ha⁻¹) and harvest index (45.1) were observed with genotype IC-79147 as compared to the PRB-1 genotype (687 kg ha⁻¹ and 34.2, respectively). However, the genotype PRB-1 recorded significantly higher stover yield (1344 kg ha⁻¹) as compared to genotype IC-79147 (1173 kg ha⁻¹). The genotype IC-79147 recorded significantly higher clusters plant⁻¹ (7.1), seeds cluster⁻¹ (7.9) and 1000-seed weight (24.5 g) as compared to PRB-1 (5.4, 5.8 and 22.3 g, respectively). Among the fertility levels tried, the fertility level of 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (845 kg ha⁻¹) and stover yield (1359 kg ha⁻¹) as compared to 40:20 NP kg ha⁻¹ (808 and 1259 kg ha⁻¹, respectively) and 20:10 NP kg ha⁻¹ (793 and 1144 kg ha⁻¹, respectively). Among the treatment combinations, the IC-79147 genotype at 30 × 10 cm planting geometry with 60:30 NP kg ha⁻¹ recorded significantly higher seed yield (1070 kg ha⁻¹), clusters plant⁻¹ (10.3), seeds cluster⁻¹ (9.9) and 1000-seed weight (27.3 g) compared to the other treatment combinations.

Keywords: buckwheat, planting geometry, genotypes, fertility levels

Procedia PDF Downloads 148
87 Relatively High Heart-Rate Variability Predicts Greater Survival Chances in Patients with Covid-19

Authors: Yori Gidron, Maartje Mol, Norbert Foudraine, Frits Van Osch, Joop Van Den Bergh, Moshe Farchi, Maud Straus

Abstract:

Background: The worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which began in 2019 and is also known as Covid-19, has infected over 136 million people and tragically taken the lives of over 2.9 million people worldwide. Many of the complications and deaths are predicted by the inflammatory “cytokine storm.” One way to progress in the prevention of death is by finding a predictive and protective factor that, on the one hand, inhibits inflammation and, on the other hand, increases anti-viral immunity. The vagal nerve does precisely both. This study examined whether vagal nerve activity, indexed by heart-rate variability (HRV), predicts survival in patients with Covid-19. Method: We performed a pseudo-prospective study, where we retroactively obtained ECGs of 271 Covid-19 patients arriving at a large regional hospital in The Netherlands. HRV was indexed by the standard deviation of the intervals between normal heartbeats (SDNN). We examined patients’ survival at 3 weeks and took into account multiple confounders and known prognostic factors (e.g., age, heart disease, diabetes, hypertension). Results: Patients’ mean age was 68 (range: 25-95), and nearly 22% of the patients had died by 3 weeks. Their mean SDNN (17.47 ms) was far below the norm (50 ms). Importantly, relatively higher HRV significantly predicted a higher chance of survival after statistically controlling for patients’ age, cardiac diseases, hypertension and diabetes (hazard ratio (HR) and 95% confidence interval (95% CI): HR = 0.49, 95% CI: 0.26–0.95, p < 0.05). However, since HRV declines rapidly with age and since age is a profound predictor in Covid-19, we split the sample by the median age (70). Subsequently, we found that higher HRV significantly predicted greater survival in patients older than 70 (HR = 0.35, 95% CI: 0.16–0.78, p = 0.01), but HRV did not predict survival in patients below age 70 years (HR = 1.11, 95% CI: 0.37–3.28, p > 0.05). Conclusions: To the best of our knowledge, this is the first study showing that higher vagal nerve activity, as indexed by HRV, is an independent predictor of higher chances of survival in Covid-19. The results are in line with the protective role of the vagal nerve in diseases and extend this to a severe infectious illness. Studies should replicate these findings and then test in controlled trials whether activating the vagus nerve may prevent mortality in Covid-19.
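
For readers who want to reproduce this type of analysis, the sketch below shows a Cox proportional-hazards fit of the kind described above using the Python lifelines library; the file name and column names (sdnn, age, followup_days, died, etc.) are hypothetical placeholders, not the study's actual data or code.

```python
# Illustrative sketch (not the authors' code): Cox proportional-hazards model of
# 3-week survival with SDNN and the listed confounders as covariates.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("covid_hrv_cohort.csv")  # hypothetical file, one row per patient

cph = CoxPHFitter()
cph.fit(
    df[["sdnn", "age", "cardiac_disease", "hypertension", "diabetes",
        "followup_days", "died"]],
    duration_col="followup_days",   # time to death or censoring (max 21 days)
    event_col="died",               # 1 = died within 3 weeks, 0 = survived/censored
)
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs, as reported above

# Age-stratified analysis analogous to the median-age split described above
for older, subset in df.groupby(df["age"] >= 70):
    print("Age >= 70" if older else "Age < 70")
    CoxPHFitter().fit(subset[["sdnn", "followup_days", "died"]],
                      duration_col="followup_days",
                      event_col="died").print_summary()
```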

Keywords: Covid-19, heart-rate variability, prognosis, survival, vagal nerve

Procedia PDF Downloads 153
86 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the Mars surface has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to gain some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images in order to generate a more accurate and trustworthy representation of the Mars surface. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D to 2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. A set of tie-points was digitized from both datasets. These points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs didn’t improve the results significantly. Using the 3D to 2D transformation parameters provided two to three meters accuracy. Best results were reported using the DLT transformation model; however, increasing the number of GCPs didn’t have a substantial effect. The results support the use of the DLT model as it provides the required accuracy for ASPRS large scale mapping standards. However, a well-distributed set of GCPs is key to providing such accuracy. The model is simple to apply and doesn’t need substantial computations.
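
As an illustration of the DLT model mentioned above (a sketch under stated assumptions, not the project's code), the 11 DLT parameters can be estimated from GCPs by linear least squares and then evaluated on check points via RMSE:

```python
# Minimal DLT sketch: estimate 11 parameters from 3D ground coordinates and 2D image
# coordinates of GCPs, then project check points and compute the RMSE.
import numpy as np

def fit_dlt(xyz, uv):
    """Least-squares estimate of the 11 DLT parameters from paired 3D/2D points."""
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); rhs.append(u)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); rhs.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(rows, float), np.asarray(rhs, float), rcond=None)
    return L

def apply_dlt(L, xyz):
    """Project 3D points to 2D image coordinates with the estimated DLT parameters."""
    X, Y, Z = np.asarray(xyz, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.column_stack([u, v])

# Hypothetical arrays: gcp_xyz/gcp_uv from MOLA-derived DEM and HiRISE tie points,
# chk_xyz/chk_uv held out as check points.
# L = fit_dlt(gcp_xyz, gcp_uv)
# rmse = np.sqrt(np.mean((apply_dlt(L, chk_xyz) - chk_uv) ** 2, axis=0))
```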

Keywords: mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 44
85 Use of Shipping Containers as Office Buildings in Brazil: Thermal and Energy Performance for Different Constructive Options and Climate Zones

Authors: Lucas Caldas, Pablo Paulse, Karla Hora

Abstract:

Shipping containers are present in different Brazilian cities, first used for transportation purposes but becoming waste materials and an environmental burden at the end of their life cycle. In the last decade, some buildings made partly or totally from shipping containers started to appear in Brazil, most of them for commercial and office uses. Although reusing containers for buildings seems a sustainable solution, it is very important to assess the thermal and energy aspects of such use. In this context, this study aims to evaluate the thermal and energy performance of an office building totally made from a 12-meter-long High Cube 40’ shipping container in different Brazilian bioclimatic zones. Four different constructive solutions commonly used in Brazil were chosen: (1) container without any covering; (2) with internally insulated drywall; (3) with external fiber cement boards; (4) with both drywall and fiber cement boards. For this, DesignBuilder with EnergyPlus was used for the computational simulation over all 8760 hours of the year. The EnergyPlus Weather File (EPW) data of six Brazilian capital cities were considered: Curitiba, Sao Paulo, Brasilia, Campo Grande, Teresina and Rio de Janeiro. A split air conditioning appliance was adopted for the conditioned area, and the cooling setpoint was fixed at 25°C. The coefficient of performance (CoP) of the air conditioning equipment was set to 3.3. Three solar absorptances of the exterior layer were tested: 0.3, 0.6 and 0.9. The building in Teresina presented the highest level of energy consumption, while the one in Curitiba presented the lowest, with a wide range of differences in results. The constructive option of external fiber cement and drywall presented the best results, although the differences were not significant compared to the solution using just drywall. The choice of absorptance had a great impact on energy consumption, mainly for containers without any covering and for the hottest cities: Teresina, Rio de Janeiro, and Campo Grande. The main contribution of this study is the discussion of constructive aspects for design guidelines for more energy-efficient container buildings, considering local climate differences, which helps disseminate this cleaner constructive practice in the Brazilian building sector.
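
For orientation, the CoP of 3.3 enters the energy results as a simple division of the cooling load by the CoP; the sketch below is a back-of-envelope illustration with a hypothetical load, not a result of the simulations.

```python
# Back-of-envelope CoP relation: electricity for cooling = cooling load / CoP.
# The cooling_load_kwh value is a hypothetical placeholder, not a simulation output.
COP = 3.3                      # coefficient of performance of the split unit
cooling_load_kwh = 4200.0      # hypothetical annual cooling load removed by the unit (kWh)

electricity_kwh = cooling_load_kwh / COP
print(f"Annual cooling electricity: {electricity_kwh:.0f} kWh")  # about 1273 kWh
```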

Keywords: bioclimatic zones, Brazil, shipping containers, thermal and energy performance

Procedia PDF Downloads 144
84 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. Both models are also evaluated using an external dataset for validation. The models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
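
A minimal sketch of the DenseNet201 transfer-learning classifier head for the three classes is shown below; this is an assumption-laden illustration only, as the autoencoder branch and exact training configuration described in the abstract are omitted and the dataset names are placeholders.

```python
# Minimal sketch (not the author's code) of a DenseNet201 transfer-learning classifier
# for three classes: COVID-19, normal, pneumonia.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False  # reuse the pre-trained convolutional features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),   # dense head finalising feature extraction
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data datasets of cropped chest X-rays (hypothetical names)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```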

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 107
83 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs

Authors: Osamede Asowata, Christo Pienaar, Johan Bekker

Abstract:

Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation due to their nature, and higher system-level energy harnessing capabilities under shaded or PV manufacturing mismatch conditions, as compared to single or multi-string inverters. Novel residential scale PV arrays are commonly connected to the grid by a single DC–AC inverter connected to a series, parallel or series-parallel string of PV panels, or by many small DC–AC inverters which connect one or two panels directly to the AC grid. With an increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using Metal Oxide Semiconductor Field-Effect Transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current and low output AC-current distortion with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that MOSFETs are efficient in this system, but also that the ground leakage current issues are alleviated in the proposed inverter and that a maximum efficiency of 98% is achieved for the associated driver circuit. This, in turn, motivates a possible photovoltaic panel switching technique, which will help to reduce the effect of cloud movements as well as improve the overall efficiency of the system.

Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter

Procedia PDF Downloads 101
82 Alveolar Ridge Preservation in Post-extraction Sockets Using Concentrated Growth Factors: A Split-Mouth, Randomized, Controlled Clinical Trial

Authors: Sadam Elayah

Abstract:

Background: One of the most critical competencies in advanced dentistry is alveolar ridge preservation after exodontia. The aim of this clinical trial was to assess the impact of autologous concentrated growth factor (CGF) as a socket-filling material and its ridge preservation properties following lower third molar extraction. Materials and Methods: A total of 60 sides from 30 participants who had completely symmetrical bilateral impacted lower third molars were enrolled. The short-term outcome variables were wound healing, swelling and pain, clinically assessed at different time intervals (1st, 3rd and 7th days), while the long-term outcome variables were bone height and width, bone density and socket surface area in the coronal section. Cone beam computed tomography images were obtained immediately after surgery and three months after surgery as a temporal measure. Randomization was achieved by opaque, sealed envelopes. Follow-up data were compared to baseline using paired and unpaired t-tests. Results: The wound healing index was significantly better on the test sides (P = 0.001). Regarding facial swelling, the test sides had significantly lower values than the control sides, particularly on the 1st (1.01±.57 vs 1.55±.56) and 3rd days (1.42±0.8 vs 2.63±1.2) postoperatively. Nonetheless, the swelling disappeared by the 7th day on both sides. The visual analog scale pain scores did not differ significantly between the two sides on the 1st day; meanwhile, the pain scores were significantly lower on the test sides compared with the control sides, especially on the 3rd (P=0.001) and 7th days (P˂0.001) postoperatively. Regarding long-term outcomes, CGF sites had higher values in height and width when compared to control sites (buccal wall 32.9±3.5 vs 29.4±4.3 mm, lingual wall 25.4±3.5 vs 23.1±4 mm, and alveolar bone width 21.07±1.55 vs 19.53±1.90 mm, respectively). Bone density showed significantly higher values in CGF sites than in control sites (coronal half 200±127.3 vs -84.1±121.3, apical half 406.5±103 vs 64.2±158.6, respectively). There was a significant difference between the two sites in reducing periodontal pockets. Conclusion: CGF application following surgical extraction provides an easy, low-cost, and efficient option for alveolar ridge preservation. Thus, dentists may be encouraged to use CGF during dental extractions, particularly when alveolar ridge preservation is required.
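
Because the split-mouth design yields paired test/control measurements per patient, comparisons such as the day-3 swelling scores can be made with a paired t-test; the sketch below illustrates this with SciPy on placeholder values, not the trial data.

```python
# Illustrative paired (split-mouth) comparison; swelling_cgf / swelling_ctrl are
# hypothetical per-patient day-3 measurements on the test (CGF) and control sides.
import numpy as np
from scipy import stats

swelling_cgf = np.array([1.3, 1.5, 1.6, 1.2, 1.4])   # placeholder values
swelling_ctrl = np.array([2.5, 2.7, 2.4, 2.8, 2.6])  # placeholder values

t_stat, p_value = stats.ttest_rel(swelling_cgf, swelling_ctrl)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```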

Keywords: platelet, extraction, impacted teeth, alveolar ridge, regeneration, CGF

Procedia PDF Downloads 46
81 Modelling High Strain Rate Tear Open Behavior of a Bilaminate Consisting of Foam and Plastic Skin Considering Tensile Failure and Compression

Authors: Laura Pytel, Georg Baumann, Gregor Gstrein, Corina Klug

Abstract:

Premium cars often coat the instrument panels with a bilaminate consisting of a soft foam and a plastic skin. The coating is torn open during passenger airbag deployment under high strain rates. Characterizing and simulating the top coat layer is crucial for predicting the attenuation that delays the airbag deployment, affecting the design of the restraint system, and for reducing the demand for simulation adjustments through expensive physical component testing. Up to now, bilaminates used within cars have either been modelled using a two-dimensional shell formulation for the whole coating system as one, which misses the interaction of the two layers, or by combining a three-dimensional foam layer with a two-dimensional skin layer but omitting the foam in significant parts like the expected tear line area and the hinge, where high compression is expected. In both cases, the properties of the coating causing the attenuation are not considered. Further, the currently available material information, such as the failure dependencies of the two layers and data at strain rates of up to 200 1/s, is insufficient. The velocity of the passenger airbag flap during an airbag shot has been measured at about 11.5 m/s during first ripping; digital image correlation evaluation showed resulting strain rates above 1500 1/s. This paper provides a high strain rate material characterization of a bilaminate consisting of a thin polypropylene foam and a thermoplastic olefin (TPO) skin and the creation of validated material models. With the help of a Split Hopkinson tension bar, strain rates of 1500 1/s were within reach. The experimental data was used to calibrate and validate a more physical modelling approach for the forced ripping of the bilaminate. In the presented model, the three-dimensional foam layer is continuously tied to the two-dimensional skin layer, allowing failure in both layers at any possible position. The simulation results show higher agreement in terms of the trajectory of the flaps and their velocity during ripping. The resulting attenuation of the airbag deployment, measured by the contact force between airbag and flaps, increases and provides usable data for dimensioning modules of an airbag system.

Keywords: bilaminate ripping behavior, high strain rate material characterization and modelling, induced material failure, TPO and foam

Procedia PDF Downloads 51
80 Effect of Planting Date on Quantitative and Qualitative Characteristics of Different Bread Wheat and Durum Cultivars

Authors: Mahdi Nasiri Tabrizi, A. Dadkhah, M. Khirkhah

Abstract:

In order to study the effect of planting date on yield, yield components and quality traits in bread and durum wheat varieties, a field split-plot experiment based on a completely randomized design with three replications was conducted at the Agricultural and Natural Resources Research Center of Razavi Khorasan, located in the city of Mashhad, during 2013-2014. The main factor consisted of five sowing dates (first October, fifteenth December, first March, tenth March, twentieth March), and the sub-factor consisted of different bread wheat cultivars (Bahar, Pishgam, Pishtaz, Mihan, Falat and Karim) and two durum wheat cultivars (Dena and Dehdasht). According to the results of the analysis of variance, the effect of planting date was significant on all examined traits (grain yield, biological yield, harvest index, number of grains per spike, thousand-kernel weight, number of spikes per square meter, plant height, number of days to heading, number of days to maturity, duration of the grain filling period, percentage of wet gluten, percentage of dry gluten, gluten index, and percentage of protein). With delayed planting, the majority of traits significantly decreased, except the quality traits (percentage of wet gluten, percentage of dry gluten and percentage of protein). Results of the means comparison showed that, among planting dates, the highest grain yield and biological yield were related to the first planting date (October), with mean production of 5/6 and 1/17 tons per hectare, respectively, while the highest bread quality (gluten index, mean of 85) and protein percentage (mean of 13%) were related to the fifth planting date. The effect of genotype was also significant on all traits. The highest grain yield among the studied wheat genotypes was related to the Dehdasht cultivar, with an average production of 4.4 tons per hectare. The highest protein percentage and bread quality (gluten index) were related to the Dehdasht cultivar (13.4%) and the Falat cultivar (gluten index of 90), respectively. The interaction between cultivar and planting date was significant for all traits, and different varieties showed different trends for these traits. The highest grain yield was related to the first planting date (October) and the Falat cultivar, with an average production of 6/7 tons per hectare, although it did not differ significantly from the Pishtaz and Mihan cultivars; the highest gluten index (bread quality index) and protein percentage belonged to the third planting date, with the Karim cultivar at 7.98 and the Dena cultivar at 7.14%, respectively.
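
One common way to analyse a split-plot layout of this kind (a hedged sketch, not the authors' analysis-of-variance workflow) is a linear mixed model with sowing date as the main-plot factor, cultivar as the sub-plot factor, and the replication-by-date main plots as a random grouping; the column and file names below are hypothetical.

```python
# Sketch of a split-plot analysis via a linear mixed model (an approximation of the
# classical split-plot ANOVA). Columns assumed: rep, sowing_date, cultivar, grain_yield.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wheat_trial.csv")  # hypothetical trial data
df["main_plot"] = df["rep"].astype(str) + "_" + df["sowing_date"].astype(str)

model = smf.mixedlm("grain_yield ~ C(sowing_date) * C(cultivar)",
                    data=df, groups=df["main_plot"])
result = model.fit()
print(result.summary())   # fixed effects of date, cultivar and their interaction
```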

Keywords: yield component, yield, planting date, cultivar, quality traits, wheat

Procedia PDF Downloads 404
79 Reconstructing the Segmental System of Proto-Graeco-Phrygian: a Bottom-Up Approach

Authors: Aljoša Šorgo

Abstract:

Recent scholarship on Phrygian has begun to more closely examine the long-held belief that Greek and Phrygian are two very closely related languages. It is now clear that Graeco-Phrygian can be firmly postulated as a subclade of the Indo-European languages. The present paper will focus on the reconstruction of the phonological and phonetic segments of Proto-Graeco-Phrygian (= PGPh.) by providing relevant correspondence sets and reconstructing the classes of segments. The PGPh. basic vowel system consisted of ten phonemic oral vowels: */a e o ā ē ī ō ū/. The correspondences of the vowels are clear and leave little open to ambiguity. There were four resonants and two semi-vowels in PGPh.: */r l m n i̯ u̯/, which could appear in both a consonantal and a syllabic function, with the distribution between the two still being phonotactically predictable. Of note is the fact that the segments *m and *n seem to have merged when their phonotactic position would see them used in a syllabic function. Whether the segment resulting from this merger was a nasalized vowel (most likely *[ã]) or a syllabic nasal *[N̥] (underspecified for place of articulation) cannot be determined at this stage. There were three fricatives in PGPh.: */s h ç/. *s and *h are easily identifiable. The existence of *ç, which may seem unexpected, is postulated on the basis of the correspondence Gr. ὄς ~ Phr. yos/ιος. It is of note that Bozzone has previously proposed the existence of *ç ( < PIE *h₁i̯-) in an early stage of Greek even without taking into account Phrygian data. Finally, the system of stops in PGPh. distinguished four places of articulation (labial, dental, velar, and labiovelar) and three phonation types. The question of which three phonation types were actually present in PGPh. is one of great importance for the ongoing debate on the realization of the three series in PIE. Since the matter is still very much in dispute, we ought to, at this stage, endeavour to reconstruct the PGPh. system without recourse to the other IE languages. The three series of correspondences are: 1. Gr. T (= tenuis) ~ Phr. T; 2. Gr. D (= media) ~ Phr. T; 3. Gr. TA (= tenuis aspirata) ~ Phr. M. The first series must clearly be reconstructed as composed of voiceless stops. The second and third series are more problematic. With a bottom-up approach, neither the second nor the third series of correspondences are compatible with simple modal voicing, and the reflexes differ greatly in voice onset time. Rather, the defining feature distinguishing the two series was [±spread glottis], with ancillary vibration of the vocal cords. In PGPh. the second series was undergoing further spreading of the glottis. As the two languages split, this process would continue, but be affected by dissimilar changes in VOT, which was ultimately phonemicized in both languages as the defining feature distinguishing between their series of stops.

Keywords: bottom-up reconstruction, Proto-Graeco-Phrygian, spread glottis, syllabic resonant

Procedia PDF Downloads 22
78 Walking Cadence to Attain a Minimum of Moderate Aerobic Intensity in People at Risk of Cardiovascular Diseases

Authors: Fagner O. Serrano, Danielle R. Bouchard, Todd A. Duhame

Abstract:

Walking cadence (steps/min) is an effective way to prescribe exercise so an individual can reach a moderate intensity, which is recommended to optimize health benefits. To our knowledge, there is no study on the walking cadence required to reach a moderate intensity for people who present chronic conditions or risk factors for chronic conditions such as cardiovascular disease (CVD). The objectives of this study were: 1- to identify the walking cadence needed for people at risk of CVD to reach a moderate intensity, and 2- to develop and test an equation using clinical variables to help professionals working with individuals at risk of CVD to estimate the walking cadence needed to reach moderate intensity. Ninety-one people presenting a minimum of two risk factors for CVD completed a medically supervised graded exercise test to assess maximum oxygen consumption at the first visit. The last visit consisted of recording walking cadence using a foot pod Garmin FR-60 and a Polar heart rate monitor, aiming to get participants to reach 40% of their maximal oxygen consumption using a portable metabolic cart on an indoor flat surface. The equation to predict the walking cadence needed to reach moderate intensity in this sample was developed as follows: the sample was randomly split in half, the equation was developed with one half of the participants, and it was validated using the other half. Body mass index, height, stride length, leg height, body weight, fitness level (VO2max), and self-selected cadence (over 200 meters) were measured using objective measures. Mean walking cadence to reach moderate intensity for people at risk of CVD aged 64.3 ± 10.3 years was 115.8 ± 10.3 steps per minute. Body mass index, height, body weight, fitness level, and self-selected cadence were associated with walking cadence at moderate intensity when evaluated in bivariate analyses (r ranging from 0.22 to 0.52; all P values ≤0.05). Using linear regression analysis including all clinical variables associated in the bivariate analyses, body weight was the significant predictor of walking cadence for reaching a moderate intensity (ß=0.24; P=.018), explaining 13% of the variance in walking cadence to reach moderate intensity. The regression model created was Y = 134.4 − 0.24 × body weight (kg). Our findings suggest that people presenting two or more risk factors for CVD reach moderate intensity while walking at a cadence above the one officially recommended for healthy adults (116 steps per minute vs. 100 steps per minute).
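
The reported prediction equation can be applied directly; the helper below simply wraps it, with the 134.4 intercept and −0.24 slope taken from the abstract and everything else illustrative.

```python
# The prediction equation reported above, wrapped in a small helper.
def predicted_cadence(body_weight_kg: float) -> float:
    """Walking cadence (steps/min) estimated to reach moderate intensity."""
    return 134.4 - 0.24 * body_weight_kg

print(f"{predicted_cadence(80):.1f}")   # about 115.2 steps/min for an 80 kg person
```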

Keywords: cardiovascular disease, moderate intensity, older adults, walking cadence

Procedia PDF Downloads 421
77 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: Gaelle Candel, David Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It serves two main tasks: displaying results by coloring items according to their class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high to low dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly as the complexity of t-SNE is quadratic and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable. This type of model would be unable to adapt to new outliers or concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the support embedding’s match. The embedding-with-support process can be repeated more than once with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics.
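
The sketch below is not the authors' Index t-SNE implementation; it only approximates the warm-start idea with scikit-learn by re-embedding a later snapshot of the same items, initialised from the previous embedding, so that cluster positions stay comparable across snapshots.

```python
# Warm-start approximation of coherent successive embeddings (not the paper's algorithm).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X_t0 = rng.normal(size=(500, 50))                        # snapshot of the dataset at t0
X_t1 = X_t0 + rng.normal(scale=0.05, size=X_t0.shape)    # drifted snapshot at t1

emb_t0 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_t0)
# Reuse the previous layout as the initialisation for the next snapshot, so clusters
# tend to stay where they were and changes over time remain visually comparable.
emb_t1 = TSNE(n_components=2, init=emb_t0, random_state=0).fit_transform(X_t1)
```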

Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning

Procedia PDF Downloads 122
76 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from those used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. Both models are also evaluated using an external dataset for validation. The models’ accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 35
75 Building on Previous Microvalving Approaches for Highly Reliable Actuation in Centrifugal Microfluidic Platforms

Authors: Ivan Maguire, Ciprian Briciu, Alan Barrett, Dara Kervick, Jens Ducrèe, Fiona Regan

Abstract:

With the ever-increasing myriad of applications of which microfluidic devices are capable, reliable fluidic actuation development has remained fundamental to the success of these microfluidic platforms. There are a number of approaches which can be taken in order to integrate liquid actuation on microfluidic platforms, which can usually be split into two primary categories: active microvalves and passive microvalves. Active microvalves are microfluidic valves which require a physical parameter change, by external or separate interaction, for actuation to occur. Passive microvalves are microfluidic valves which don’t require external interaction for actuation due to the valve’s natural physical parameters, which can be overcome through sample interaction. The purpose of this paper is to illustrate how further improvements to past microvalve solutions can largely enhance system reliability and performance, with both novel active and passive microvalves demonstrated. Covered within this scope will be two alternative and novel microvalve solutions for centrifugal microfluidic platforms: a revamped pneumatic-dissolvable film active microvalve (PAM) strategy and a spray-on sol-gel based hydrophobic passive microvalve (HPM) approach. Both the PAM and the HPM mechanisms were demonstrated on a centrifugal microfluidic platform consisting of alternating layers of 1.5 mm poly(methyl methacrylate) (PMMA) sheets (for reagent storage) and ~150 μm pressure sensitive adhesive (PSA) sheets (for microchannel fabrication). The PAM approach differs from previous SOLUBON™ dissolvable film methods by introducing a more reliable and predictable liquid delivery mechanism to the microvalve site, thus significantly reducing premature activation. This approach has also shown excellent synchronicity when performed in a multiplexed form. The HPM method utilises a new spray-on and low curing temperature (70°C) sol-gel material. The resultant double-layer coating comprises a PMMA-adherent sol-gel as the bottom layer and an ultra-hydrophobic silica nanoparticle (SNP) film as the top layer. The optimal coating was integrated into microfluidic channels with varying cross-sectional area for assessing the consistency of microvalve burst frequencies. It is hoped that these microvalving solutions, which can be easily added to centrifugal microfluidic platforms, will significantly improve automation reliability.

Keywords: centrifugal microfluidics, hydrophobic microvalves, lab-on-a-disc, pneumatic microvalves

Procedia PDF Downloads 170
74 A User-Directed Approach to Optimization via Metaprogramming

Authors: Eashan Hatti

Abstract:

In software development, programmers often must make a choice between high-level programming and high-performance programs. High-level programming encourages the use of complex, pervasive abstractions. However, the use of these abstractions degrades performance; high performance demands that programs be low-level. In a compiler, the optimizer attempts to let the user have both. The optimizer takes high-level, abstract code as an input and produces low-level, performant code as an output. However, there is a problem with having the optimizer be a built-in part of the compiler. Domain-specific abstractions implemented as libraries are common in high-level languages. As a language’s library ecosystem grows, so does the number of abstractions that programmers will use. If these abstractions are to be performant, the optimizer must be extended with new optimizations to target them, or these abstractions must rely on existing general-purpose optimizations. The latter is often not as effective as needed. The former presents too significant an effort for the compiler developers, as they are the only ones who can extend the language with new optimizations. Thus, the language becomes more high-level, yet the optimizer – and, in turn, program performance – falls behind. Programmers are again confronted with a choice between high-level programming and high-performance programs. To investigate a potential solution to this problem, we developed Peridot, a prototype programming language. Peridot’s main contribution is that it enables library developers to easily extend the language with new optimizations themselves. This allows the optimization workload to be taken off the compiler developers’ hands and given to a much larger set of people who can specialize in each problem domain. Because of this, optimizations can be much more effective while also being much more numerous. To enable this, Peridot supports metaprogramming designed for implementing program transformations. The language is split into two fragments or “levels”, one for metaprogramming, the other for high-level general-purpose programming. The metaprogramming level supports logic programming. Peridot’s key idea is that optimizations are simply implemented as metaprograms. The meta level supports several specific features which make it particularly suited to implementing optimizers. For instance, metaprograms can automatically deduce equalities between the programs they are optimizing via unification, deal with variable binding declaratively via higher-order abstract syntax, and avoid the phase-ordering problem via non-determinism. We have found that this design centered around logic programming makes optimizers concise and easy to write compared to their equivalents in functional or imperative languages. Overall, implementing Peridot has shown that its design is a viable solution to the problem of writing code which is both high-level and performant.
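
Peridot code is not shown in this abstract, so the toy Python sketch below only illustrates the underlying idea of "optimizations as metaprograms": a library author writes an optimization as a rewrite rule over a small expression AST (constant folding), rather than relying on a pass hard-wired into the compiler. It does not reflect Peridot's logic-programming syntax or semantics.

```python
# Toy illustration of a user-defined optimization expressed as a rewrite rule.
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

def fold_constants(expr):
    """Rewrite rule: Add(Lit a, Lit b) -> Lit(a + b), applied bottom-up."""
    if isinstance(expr, Add):
        left, right = fold_constants(expr.left), fold_constants(expr.right)
        if isinstance(left, Lit) and isinstance(right, Lit):
            return Lit(left.value + right.value)
        return Add(left, right)
    return expr

print(fold_constants(Add(Lit(1), Add(Lit(2), Lit(3)))))   # Lit(value=6)
```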

Keywords: optimization, metaprogramming, logic programming, abstraction

Procedia PDF Downloads 63
73 Useful Lessons from the Success of Physics Outreach in Jamaica

Authors: M. J. Ponnambalam

Abstract:

Physics Outreach in Jamaica has nearly tripled the number of students doing Introductory Calculus-based Physics at the University of the West Indies (UWI, Mona) within 5 years, and thus has shown the importance of Physics Teaching & Learning in Informal Settings. In 1899, the first president of the American Physical Society called Physics “the science above all sciences.” Sure enough, exactly one hundred years later, Time magazine proclaimed Albert Einstein “Person of the Century.” Unfortunately, Physics seems to be losing that glow in this century. Many countries, big and small, are finding it difficult to attract bright young minds to pursue Physics. At UWI, Mona, the number of students in first year Physics dropped to an all-time low of 81 in 2006, from more than 200 in the nineteen eighties, spelling disaster for the Physics Department! The author of this paper launched an aggressive Physics Outreach programme that same year, aimed at conveying to students and the general public the following messages: i) Physics is an exciting intellectual enterprise, full of fun and delight. ii) Physics is very helpful in understanding how things like TV, CD player, car, computer, X-ray, CT scan, MRI, etc. work. iii) The critical and analytical thinking developed in the study of Physics is of inestimable value in almost any field. iv) Physics is the core subject for Science and Technology, and hence for national development. Science Literacy is a ‘must’ for any nation in the 21st century. Hence, the Physics Outreach aims at reaching out to every person, through every possible means. The Outreach work is split into the following target groups: i) Universities, ii) High Schools, iii) Middle Schools, iv) Primary Schools, v) General Public, and vi) Physics teachers in High Schools. The programmes, tools and best practices are adjusted to suit each target group. The feedback from each group is highly positive. For example, in February 2014, the author conducted the Interactive Show on ‘Science Is Fun’ in 3 Primary Schools to stimulate 290 students’ interest in Science – with lively and interesting demonstrations and experiments in a highly interactive way, using dramatization, story-telling and dancing. The feedback: 47% found the Show ‘Exciting’ and 51% found it ‘Interesting’ – totaling an impressive 98%. When asked to describe the Show in their own words, the leading 4 responses were: ‘Fun’ (26%), ‘Interesting’ (20%), ‘Exciting’ (14%) and ‘Educational’ (10%) – confirming that ‘fun’ & ‘education’ can go together. The success of Physics Outreach in Jamaica verifies the following words of Chodos, Associate Executive Officer of the American Physical Society: “If we could get members to go to K-12 schools and levitate a magnet or something, we really think these efforts would bring great rewards.”

Keywords: physics education, physics popularization, UWI, Jamaica

Procedia PDF Downloads 374
72 Control of Helminthosporiosis in Oryza sativa Varieties Treated with 24-Epibrassinolide

Authors: Kuate Tueguem William Norbert, Ngoh Dooh Jules Patrice, Kone Sangou Abdou Nourou, Mboussi Serge Bertrand, Chewachang Godwill Mih, Essome Sale Charles, Djuissi Tohoto Doriane, Ambang Zachee

Abstract:

The objectives of this study were to evaluate the effects of foliar application of 24-epibrassinolide (EBR) on the development of rice helminthosporiosis caused by Bipolaris oryzae and its influence on the improvement of growth parameters and the induction of the synthesis of defense substances in rice plants. The experimental setup involved a multifactorial split-plot design with two varieties (NERICA 3 and the local variety KAMKOU) and five treatments (T0: control, T1: EBR, T2: BANKO PLUS (fungicide), T3: NPK (chemical fertilizer), T4: mixture: NPK + BANKO PLUS + EBR) with three replications. Agro-morphological and epidemiological parameters, as well as substances for plant resistance, were evaluated over two growing seasons. The application of EBR induced significant growth of the rice plants in the 2015 and 2016 growing seasons for the two varieties tested compared to the T0 treatment. At 74 days after sowing (DAS), NERICA 3 showed plant heights of 58.9 ± 5.4, 83.1 ± 10.4, 86.01 ± 9.4, 69.4 ± 11.1 and 87.12 ± 7.4 cm at T0, T1, T2, T3 and T4, respectively. Plant height for the variety KAMKOU varied from 87.12 ± 8.1, 88.1 ± 8.1 and 92.02 ± 6.3 cm in T1, T2 and T4 to 74.1 ± 8.6 and 74.21 ± 11.4 cm in T0 and T3. Consistent with the low rate of expansion of helminthosporiosis in the experimental plots, EBR (T1) significantly reduced the development of the disease, with severities of 0.0, 1.29 and 2.04% at 78, 92 and 111 DAS, respectively, on the variety NERICA 3, compared with 1, 3.15 and 3.79% in the control T0. The reduction of disease development/severity as a result of the application of EBR is due to the induction of acquired resistance of the rice varieties through increased phenol (13.73 eqAG/mg/PMF) and total protein (117.89 eqBSA/mg/PMF) in the T1 treatment against 5.37 eqAG/mg/PMF and 104.97 eqBSA/mg/PMF in T0 for the NERICA 3 variety. Similarly, for the KAMKOU variety, total protein reached 148.53 eqBSA/mg/PMF and phenol 6.10 eqAG/mg/PMF in T1. In summary, the results show the significant effect of EBR on plant growth, yield, synthesis of secondary metabolites and defense proteins, and disease resistance. EBR significantly reduced rice grain losses, giving an average gain of about 1.55 t/ha compared to the control and 1.00 t/ha compared to the NPK-based treatment for the two varieties studied. Further, the enzymatic activities of PPOs, POXs, and PR2s were higher in leaves from EBR-treated plants. These results show that 24-epibrassinolide can be used in the control of helminthosporiosis of rice to reduce disease and increase yields.

Keywords: Oryza sativa, 24-epibrassinolide, helminthosporiosis, secondary metabolites, PR proteins, acquired resistance

Procedia PDF Downloads 169
71 Analysis of the Homogeneous Turbulence Structure in Uniformly Sheared Bubbly Flow Using First and Second Order Turbulence Closures

Authors: Hela Ayeb Mrabtini, Ghazi Bellakhal, Jamel Chahed

Abstract:

The presence of the dispersed phase in gas-liquid bubbly flow considerably alters the liquid turbulence. The bubbles induce turbulent fluctuations that enhance the global liquid turbulence level and alter the mechanisms of turbulence. RANS modeling of uniformly sheared flows on an isolated sphere centered in a control volume is performed using first and second order turbulence closures. The sphere is placed in the production-dissipation equilibrium zone where the liquid velocity is set equal to the relative velocity of the bubbles. The void fraction is determined by the ratio between the sphere volume and the control volume. The analysis of the turbulence statistics on the control volume provides numerical results that are interpreted with regard to the effect of the bubble wakes on the turbulence structure in uniformly sheared bubbly flow. We assumed for this purpose that, at low void fraction where there is no hydrodynamic interaction between the bubbles, the single-phase flow simulation on an isolated sphere is representative, on statistical average, of a sphere network. The numerical simulations were first validated against the experimental data of bubbly homogeneous turbulence with constant shear and then extended to produce numerical results for a wide range of shear rates from 0 to 10 s⁻¹. These results are compared with our turbulence closure proposed for gas-liquid bubbly flows. In this closure, the turbulent stress tensor in the liquid is split into a turbulent dissipative part produced by the gradient of the mean velocity, which also contains the turbulence generated in the bubble wakes, and a pseudo-turbulent non-dissipative part induced by the bubble displacements. Each part is determined by a specific transport equation. The simulations of uniformly sheared flows on an isolated sphere reproduce the mechanisms related to the turbulent part, and the numerical results are in perfect accordance with the modeling of the transport equation of the turbulent part. The reduction of the second order turbulence closure provides a description of the modification of the turbulence structure by the bubbles' presence using a dimensionless number expressed in terms of two time scales characterizing the turbulence induced by the shear and that induced by bubble displacements. The numerical simulations carried out in the framework of a comprehensive analysis reproduce particularly the attenuation of the turbulent friction shown in the experimental results of bubbly homogeneous turbulence subjected to a constant shear.
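
In schematic notation (ours, not the authors'), the closure described above splits the liquid Reynolds stress into a shear-produced dissipative turbulent part and a non-dissipative pseudo-turbulent part induced by bubble displacements, each governed by its own transport equation:

```latex
% Schematic decomposition of the liquid Reynolds stress described in the abstract
% (notation is illustrative, not taken from the paper).
\[
  \overline{u'_i u'_j}
  \;=\;
  \underbrace{\overline{u'_i u'_j}^{\,T}}_{\text{turbulent, dissipative (shear + wakes)}}
  \;+\;
  \underbrace{\overline{u'_i u'_j}^{\,PT}}_{\text{pseudo-turbulent, non-dissipative (bubble displacements)}}
\]
```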

Keywords: gas-liquid bubbly flows, homogeneous turbulence, turbulence closure, uniform shear

Procedia PDF Downloads 439
70 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), in Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences in the high and very high landslide susceptibility classes in each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
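
The success- and prediction-rate validation reduces to computing an ROC AUC from susceptibility scores at landslide and non-landslide cells; the sketch below illustrates this with scikit-learn on placeholder values, not the study's data.

```python
# Illustrative prediction-rate AUC from cell labels and susceptibility scores.
import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = landslide cell, 0 = non-landslide cell, paired with the susceptibility score
# of that cell from one of the models (EWM/EBF/ICM); values are placeholders.
y_verification = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
s_verification = np.array([0.81, 0.44, 0.55, 0.28, 0.40, 0.22, 0.47, 0.60, 0.18, 0.35])

auc_prediction = roc_auc_score(y_verification, s_verification)
print(f"prediction-rate AUC: {auc_prediction:.3f}")  # ~0.708, cf. the 0.68-0.71 range above
```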

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 111
69 Development of Taiwanese Sign Language Receptive Skills Test for Deaf Children

Authors: Hsiu Tan Liu, Chun Jung Liu

Abstract:

Developing a sign language receptive skills test serves multiple purposes. For example, such a test can be an important tool for education and for understanding the sign language ability of deaf children. No test is currently available for these purposes in Taiwan. Through expert discussions and with reference to the standardized Taiwanese Sign Language Receptive Test for adults and adolescents, the framework of the Taiwanese Sign Language Receptive Skills Test (TSL-RST) for deaf children was developed, and the items were further designed. After multiple rounds of pre-trials, discussions and corrections, TSL-RST was finally developed; it can be conducted and scored online. There were 33 deaf children from all three deaf schools in Taiwan who agreed to be tested. Through item analysis, items with a good discrimination index and a fair difficulty index were selected. Moreover, psychometric indices of reliability and validity were established. Then, a regression formula was derived which can predict the sign language receptive skills of deaf children. The main results of this study are as follows. (1) TSL-RST includes three sub-tests: vocabulary comprehension, syntax comprehension and paragraph comprehension. There are 21, 20, and 9 items in vocabulary comprehension, syntax comprehension, and paragraph comprehension, respectively. (2) TSL-RST can be conducted individually online. The sign language ability of deaf students can be calculated quickly and objectively, so that they can get feedback and results immediately. This can also contribute to both teaching and research. Most subjects can complete the test within 25 minutes. During the test procedure, they can answer the test questions without relying on their reading ability or memory capacity. (3) The vocabulary comprehension sub-test is the easiest, syntax comprehension is harder than vocabulary comprehension, and paragraph comprehension is the hardest. Each of the three sub-tests and the whole test are good in item discrimination index. (4) The psychometric indices are good, including internal consistency reliability (Cronbach’s α coefficient), test-retest reliability, split-half reliability, and content validity. Sign language ability is significantly related to non-verbal IQ, the teachers’ rating of the students’ sign language ability and the students’ self-rating of their own sign language ability. The results showed that higher grade students perform better than lower grade students, and students with deaf parents perform better than those with hearing parents. These results indicate that TSL-RST has good discriminant validity. (5) The predictors of the sign language ability of primary school deaf students are age and years since starting to learn sign language. The results of this study suggest that TSL-RST can effectively assess deaf students’ sign language ability. This study also proposed a model for developing sign language tests.
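
As a small illustration of the internal-consistency index mentioned above (not the study's code), Cronbach's alpha can be computed from an items-by-respondents score matrix:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = test items (0/1 or graded)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_vars = scores.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Placeholder 5-respondent x 4-item matrix, purely for illustration
demo = np.array([[1, 1, 1, 0],
                 [1, 0, 1, 1],
                 [0, 0, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 0]])
print(round(cronbach_alpha(demo), 3))   # 0.79 for this toy matrix
```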

Keywords: comprehension test, elementary school, sign language, Taiwan sign language

Procedia PDF Downloads 166
68 Probiotics as an Alternative to Antibiotic Use in Pig Production

Authors: Z. C. Dlamini, R. L. S. Langa, A. I. Okoh, O. A. Aiyegoro

Abstract:

The indiscriminate use of antibiotics in swine production has consequential outcomes, such as the development of bacterial resistance to prophylactic antibiotics and the possibility of antibiotic residues in animal products. The use of probiotics appears to be the most effective alternative, with positive metabolic and nutritional implications. The aim of this study was to investigate the efficacy of probiotic bacteria (Lactobacillus reuteri ZJ625, Lactobacillus reuteri VB4, Lactobacillus salivarius ZJ614 and Streptococcus salivarius NBRC13956) administered as direct-fed microorganisms in weaned piglets. Forty-five weaned piglets, blocked by weight, were divided into five treatment groups: a diet with antibiotic, a diet with no antibiotic and no probiotic, a diet with a probiotic, and a diet with a combination of probiotics. Piglet performance was monitored during the trials. Faecal and ileum samples were collected for microbial count analysis. Blood samples were collected from the pigs at the end of the trial for haematological, biochemical and IgG stimulation analysis. The data were analysed by split-plot ANOVA using SAS statistical software (SAS 9.3, 2003). Differences were observed between treatments for daily weight gain and feed conversion ratio. No difference was observed in the faecal samples with regard to bacterial counts, whereas a difference was observed in the ileum samples, with enteric bacteria colony-forming units being lower in the P2 treatment group compared with lactic acid bacteria and total bacteria. With the exception of globulin and albumin, blood biochemistry parameters were not affected; likewise for haematology, only basophils and segmented neutrophils differed, with higher concentrations in the NC treatment group compared with the other treatment groups. Moreover, in the IgG stimulation analysis a difference was also observed, with the P2 treatment group having a higher concentration of IgG than the other groups. The results of this study suggest that probiotics have a beneficial effect on the growth performance, blood parameters and IgG stimulation of pigs, and are most effective when administered in combination. This means it is likely that these probiotics will offer a significant benefit in pig farming by reducing the risk of morbidity and mortality and producing quality meat that is more affordable to poorer communities, thereby enhancing the South African pig industry's economy. In addition, these results indicate that more research still needs to be done on probiotics with regard to dosage, shelf life and mechanism of action.
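
The authors analysed their data with a split-plot ANOVA in SAS 9.3; purely as an illustration of that kind of analysis, the sketch below approximates the whole-plot part of the model with a linear mixed model in Python's statsmodels. The file name and the columns 'block', 'treatment' and 'adg' are assumptions, not the authors' actual variables.

```python
# Minimal sketch (the authors used SAS 9.3): approximating a split-plot style analysis
# with a linear mixed model, assuming a hypothetical long-format table with columns
# 'block' (weight block), 'treatment' (diet) and 'adg' (average daily gain).
import pandas as pd
import statsmodels.formula.api as smf

piglets = pd.read_csv("piglet_performance.csv")   # hypothetical file and column names

# Random intercept for block; treatment as the fixed whole-plot factor
model = smf.mixedlm("adg ~ C(treatment)", data=piglets, groups=piglets["block"])
result = model.fit()
print(result.summary())
```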

Keywords: antibiotics, biochemistry, haematology, IgG-stimulation, microbial count, probiotics

Procedia PDF Downloads 266
67 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

Determining soil elemental content and distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consists of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated. These data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours was needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity showed similar patterns, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural soil elemental field mapping.
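
The mapping workflow described above combines per-scan elemental estimates with GPS coordinates in ArcGIS; the sketch below is only an illustration of that idea, gridding hypothetical GPS-tagged carbon estimates into a simple contour map. The file and column names are assumptions, not part of the authors' software.

```python
# Minimal sketch (the authors use their own software and ArcGIS): gridding per-scan
# carbon estimates onto a field map from GPS-tagged records, assuming a hypothetical
# CSV with columns 'lon', 'lat', 'carbon_pct' derived from the gamma spectra.
import numpy as np
import pandas as pd
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

scans = pd.read_csv("field_scans.csv")            # hypothetical file and column names

# Regular grid over the field extent
lon_g, lat_g = np.meshgrid(
    np.linspace(scans["lon"].min(), scans["lon"].max(), 200),
    np.linspace(scans["lat"].min(), scans["lat"].max(), 200),
)
carbon_grid = griddata(
    scans[["lon", "lat"]].values, scans["carbon_pct"].values,
    (lon_g, lat_g), method="linear",
)

plt.contourf(lon_g, lat_g, carbon_grid, levels=15)
plt.colorbar(label="Soil carbon (%)")
plt.xlabel("Longitude"); plt.ylabel("Latitude")
plt.title("Carbon distribution map (illustrative)")
plt.show()
```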

Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy

Procedia PDF Downloads 118
66 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study is focused on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemmatic choice of transport mode, since logistics cost reduction and emissions reduction conflict with each other. Another important aspect of transportation decisions is lead time variability, which is rarely considered in transport mode selection problems. Thus, in this study, we provide a comprehensive mathematically based analytical model for deciding transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. In order to account for lead time variability in the model, two identically and normally distributed random variables are incorporated in this study: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed that includes unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and penalty cost for stock outs due to lead time variability. A set of modes is available to transport each node; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static in this paper. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions transport mode in order to reduce penalty cost. We provide a numerical analysis in order to study the effectiveness of the mathematical model. We found that the chance of a stock out during lead time is higher under higher variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, which means the chance of a stock out is zero, but it has higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order requiring the penalty cost to be reduced; otherwise, rail and road transport are the most preferred modes of transportation. Thus, this paper contributes to the literature with a novel approach to deciding transport mode under emissions cost and lead time variability. This model can be extended by studying the effect of lead time variability under other strategic transportation issues such as modal split options, full truck load strategies, and demand consolidation strategies.
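
To illustrate the structure of the cost function described above, the sketch below (not the authors' model) evaluates a per-mode total cost that combines purchasing, transportation, emissions, lead-time holding, and an expected-shortage penalty under normally distributed lead-time demand; every parameter value and the holding-cost form are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): comparing transport modes on a total cost
# that combines purchasing, transport, emissions, holding during lead time, and an
# expected-shortage penalty for normally distributed lead-time demand.
from dataclasses import dataclass
from scipy.stats import norm

@dataclass
class Mode:
    name: str
    transport_cost: float   # per unit (illustrative)
    emission_cost: float    # per unit, emissions priced into cost (illustrative)
    lead_time_mean: float   # days
    lead_time_sd: float     # days

def total_cost(mode: Mode, demand_per_day=100, demand_sd=20, unit_cost=50,
               holding_rate=0.002, penalty_per_unit=8, reorder_point=None):
    # Lead-time demand mean and standard deviation for stochastic lead time and demand
    mu_L = demand_per_day * mode.lead_time_mean
    sd_L = (mode.lead_time_mean * demand_sd**2 +
            demand_per_day**2 * mode.lead_time_sd**2) ** 0.5
    R = reorder_point if reorder_point is not None else mu_L          # no safety stock
    z = (R - mu_L) / sd_L
    expected_shortage = sd_L * (norm.pdf(z) - z * (1 - norm.cdf(z)))  # standard loss function
    holding = holding_rate * unit_cost * mu_L * mode.lead_time_mean   # holding during lead time
    per_unit = unit_cost + mode.transport_cost + mode.emission_cost
    return per_unit * demand_per_day + holding + penalty_per_unit * expected_shortage

modes = [Mode("air", 12.0, 3.0, 1, 0.1), Mode("road", 4.0, 1.2, 4, 1.0),
         Mode("rail", 2.5, 0.6, 6, 1.5), Mode("water", 1.5, 0.3, 12, 3.0)]
for m in modes:
    print(f"{m.name:>5}: total cost = {total_cost(m):,.0f}")
```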

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 403
65 Analysis of Metamaterial Permeability on the Performance of Loosely Coupled Coils

Authors: Icaro V. Soares, Guilherme L. F. Brandao, Ursula D. C. Resende, Glaucio L. Siqueira

Abstract:

Electrical energy can be wirelessly transmitted through resonantly coupled coils that operate in the near-field region. Since the field in this region has an evanescent character, the efficiency of Resonant Wireless Power Transfer (RWPT) systems decreases proportionally with the inverse cube of the distance between the transmitter and receiver coils. Commercially available RWPT systems are restricted to short- and mid-range applications in which the distance between coils is less than or equal to the coil size. An alternative to overcome this limitation is applying metamaterial structures to enhance the coupling between coils, thus reducing the field decay with distance. Metamaterials can be conceived as composite materials with a periodic or non-periodic structure whose unconventional electromagnetic behaviour is due to their unit cell arrangement and chemical composition. This kind of material has been used in frequency selective surfaces, invisibility cloaks, and leaky-wave antennas, among other applications. For RWPT, however, it is mainly applied in superlenses, which are lenses that can overcome the diffraction limit and are made of left-handed media, that is, a medium with negative magnetic permeability and electric permittivity. As RWPT systems usually operate at wavelengths of hundreds of meters, the metamaterial unit cell size is much smaller than the wavelength. In this case, the electric and magnetic fields are decoupled; therefore, the double-negative condition for superlenses is not required, and a negative magnetic permeability is enough to produce an artificial magnetic medium. In this work, the influence of the magnetic permeability of a metamaterial slab inserted between two loosely coupled coils is studied in order to find the condition that leads to the maximum transmission efficiency. The metamaterial used is formed by a subwavelength unit cell that consists of a capacitor-loaded split ring with an inner spiral, designed and optimized using the software Computer Simulation Technology. The unit cell permeability is experimentally characterized by the ratio of the transmission parameters between the coils measured with and without the presence of the metamaterial slab. Early measurement results show that the transmission coefficient at the resonant frequency after the inclusion of the metamaterial is about three times higher than with just the two coils, which confirms the enhancement that this structure brings to RWPT systems.
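
The characterization described above is a ratio of transmission parameters measured with and without the slab; the sketch below (not the authors' processing) shows how such a ratio could be computed from two hypothetical |S21|-versus-frequency files, whose names and format are assumptions.

```python
# Minimal sketch (not the authors' processing): enhancement factor as the ratio of
# transmission coefficients measured with and without the metamaterial slab, assuming
# hypothetical two-column files of frequency (MHz) and |S21| in dB.
import numpy as np

freq, s21_ref_db = np.loadtxt("s21_coils_only.txt", unpack=True)     # hypothetical files
_, s21_mm_db = np.loadtxt("s21_with_metamaterial.txt", unpack=True)

# Convert from dB to linear magnitude and take the ratio
ratio = 10 ** (s21_mm_db / 20) / 10 ** (s21_ref_db / 20)

i_res = np.argmax(10 ** (s21_mm_db / 20))                            # resonant frequency index
print(f"Enhancement at {freq[i_res]:.2f} MHz: {ratio[i_res]:.2f}x")
```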

Keywords: electromagnetic lens, loosely coupled coils, magnetic permeability, metamaterials, resonant wireless power transfer, subwavelength unit cells

Procedia PDF Downloads 129
64 Molecular Dynamics Study of Ferrocene in Low and Room Temperatures

Authors: Feng Wang, Vladislav Vasilyev

Abstract:

Ferrocene (Fe(C5H5)2, i.e., di-cyclopentadienyl iron (FeCp2) or Fc) is a unique example of 'wrong but seminal' in the history of chemistry. It has significant applications in a number of areas such as homogeneous catalysis, polymer chemistry, molecular sensing, and nonlinear optical materials. However, the 'molecular carousel' has been a 'notoriously difficult example' and subject to long debate over its conformation and properties. Ferrocene is a dynamic molecule; as a result, understanding its dynamical properties is very important for understanding its conformational properties. In the present study, molecular dynamics (MD) simulations are performed. In the simulations, we use five geometrical parameters to define the overall conformation of Fc, and all the rest is thermal noise. The five parameters are: d, the distance between the two Cp planes; α and δ, which define the relative positions of the Cp planes, where α is the Cp tilt angle and δ is the carousel-like rotation angle between the two Cp planes; and two parameters that position the Fe atom between the two Cps, i.e., d1 for the Fe-Cp1 distance and d2 for the Fe-Cp2 distance. Our preliminary MD simulations reveal that the five parameters behave differently. The distances of Fe to the two Cp planes are independent and practically identical, without correlation. The relative position of the two Cp rings, α, indicates that the two Cp planes are most likely not parallel; rather, they tilt at a small angle α ≠ 0°. The mean plane dihedral angle δ ≠ 0°; moreover, δ is neither 0° nor 36°, indicating that under these conditions Fc is neither in a perfect eclipsed structure nor a perfect staggered structure. The simulations show that when the temperature is above 80 K, the conformers are virtually in free rotation. A very interesting result from the MD simulations concerns the five C-Fe bond distances within the same Cp ring: they are surprisingly not identical but fall into three groups of 2, 2, and 1. We describe the motion of the pentagon formed by the five carbon atoms of a Cp ring as 'turtle swimming', as shown in their dynamical animation video: Fe-C(1) and Fe-C(2), which are identical, act as 'the turtle back legs'; Fe-C(3) and Fe-C(4), which are also identical, act as 'the turtle front paws'; and Fe-C(5) is 'the turtle head'. Such a 'turtle swimming' analogy may be able to explain the singly substituted derivatives of Fc. Again, the mean Fe-C distance obtained from the MD simulations is larger than the quantum mechanically calculated Fe-C distances for eclipsed and staggered Fc, with a larger deviation with respect to the eclipsed Fc than the staggered Fc. The same trend is obtained for the five Fe-C-H angles within the same Cp ring of Fc. The simulated mean IR spectrum at 7 K shows split spectral peaks at approximately 470 cm-1 and 488 cm-1, in excellent agreement with the quantum mechanically calculated gas-phase IR spectrum for eclipsed Fc. As the temperature increases beyond 80 K, the clearly split IR peaks become a single, very broad peak. Preliminary MD results will be presented.
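
As a minimal illustration of the five conformational parameters defined above, the following sketch (not the authors' analysis code) extracts d, α, δ, d1 and d2 from Cartesian coordinates of the two Cp rings and the Fe atom; the geometry used is an idealized placeholder, not an actual simulation frame.

```python
# Minimal sketch (not the authors' analysis code): the five conformational parameters
# (d, alpha, delta, d1, d2) from coordinates of the two Cp rings and the Fe atom.
import numpy as np

def ring_plane(ring):
    """Return the centroid and unit normal of a 5-carbon ring (5x3 array)."""
    centroid = ring.mean(axis=0)
    # Normal of the best-fit plane = smallest singular vector of centred coordinates
    _, _, vt = np.linalg.svd(ring - centroid)
    return centroid, vt[2]

def conformation(cp1, cp2, fe):
    c1, n1 = ring_plane(cp1)
    c2, n2 = ring_plane(cp2)
    d = np.linalg.norm(c2 - c1)                                     # Cp-Cp separation
    alpha = np.degrees(np.arccos(np.clip(abs(n1 @ n2), -1, 1)))     # tilt between planes
    axis = (c2 - c1) / d
    # delta: carousel rotation of C1 of ring 2 relative to C1 of ring 1 about the Cp-Cp axis
    v1 = cp1[0] - c1 - ((cp1[0] - c1) @ axis) * axis
    v2 = cp2[0] - c2 - ((cp2[0] - c2) @ axis) * axis
    delta = np.degrees(np.arctan2(np.cross(v1, v2) @ axis, v1 @ v2))
    d1, d2 = np.linalg.norm(fe - c1), np.linalg.norm(fe - c2)
    return d, alpha, delta, d1, d2

# Illustrative eclipsed-like geometry: two parallel pentagons 3.3 A apart, Fe midway
theta = 2 * np.pi * np.arange(5) / 5
cp1 = np.column_stack([1.2 * np.cos(theta), 1.2 * np.sin(theta), np.zeros(5)])
cp2 = cp1 + np.array([0.0, 0.0, 3.3])
fe = np.array([0.0, 0.0, 1.65])
print(conformation(cp1, cp2, fe))
```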

Keywords: ferrocene conformation, molecular dynamics simulation, conformer orientation, eclipsed and staggered ferrocene

Procedia PDF Downloads 191
63 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University

Authors: Pongthep Bunrueng

Abstract:

This action research presents the outcome of a development in English pronunciation, using principles of phonetics, for English major students at Loei Rajabhat University. The research was split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a 4-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd year students majoring in English Education at Loei Rajabhat University during the academic year 2011. A mixed methodology employing both quantitative and qualitative research was used, putting theory into action and proceeding from segmental features up to suprasegmental features. Multiple tools were employed, including the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form, and a participant self-reflection form. All 5 modules for the target group showed that results from the post-tests were higher than those of the pre-tests, at the 0.01 level of statistical significance. The participants attained results ranging from low to moderate and from moderate to high performance, and those who attained low to moderate results had to re-sit the second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and sharing of responsibility. Analytic induction of the strong points of this operation illustrated that learner cognition, comprehension, application, and group practice were all present, whereas the weak results of some participants could be attributed to biological differences, differences in life and learning, or individual differences in responsiveness and self-discipline. Participants who required re-treatment in Spiral 2 received the same treatment again. Test results from the 5 modules after the 2nd treatment showed that participants attained higher scores than in the pre-test. Their assessment and development stages also showed improved results: they showed greater confidence in participating in activities, produced higher quality work, and correctly followed the instructions for each activity. Analytic induction of the strong and weak points of this operation remains the same as for Spiral 1, though there were improvements to problems which existed prior to the second treatment.
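
The pre-test/post-test comparisons reported above rest on a paired significance test at the 0.01 level; the sketch below (not the authors' analysis) shows such a comparison for one module with purely illustrative scores.

```python
# Minimal sketch (not the authors' analysis): paired pre-test/post-test comparison
# for one module at the 0.01 significance level. Scores below are illustrative only.
from scipy.stats import ttest_rel

pre = [12, 10, 15, 9, 14, 11, 13, 10, 12, 16]    # hypothetical pre-test scores
post = [16, 14, 18, 13, 17, 15, 16, 14, 15, 19]  # hypothetical post-test scores

t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant at 0.01: {p_value < 0.01}")
```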

Keywords: action research, English pronunciation, phonetics, segmental features, suprasegmental features

Procedia PDF Downloads 274
62 Management of Postoperative Pain, Intercultural Differences Among Registered Nurses: Czech Republic and Kingdom of Saudi Arabia

Authors: Denisa Mackova, Andrea Pokorna

Abstract:

The management of postoperative pain is a meaningful part of quality care. The experience and knowledge of registered nurses in postoperative pain management can be influenced by local know-how; therefore, this research helps to understand the cultural differences between two countries, with the aim of evaluating postoperative pain management among nurses from the Czech Republic and the Kingdom of Saudi Arabia. The two countries have different procedures for managing postoperative pain, and the research will provide an understanding of the advantages and disadvantages of these procedures and also highlight the knowledge and experience of registered nurses in both countries. Between the Czech Republic and the Kingdom of Saudi Arabia, differing results are expected in the postoperative use of opioid analgesia and in the experience of registered nurses with Patient Controlled Analgesia. The aim is to evaluate the knowledge and awareness of registered nurses and to merge these data with postoperative pain management practice in the early postoperative period in the Czech Republic and the Kingdom of Saudi Arabia. A further aim is to assess the knowledge and experience of registered nurses with Patient Controlled Analgesia and epidural analgesia treatment in the early postoperative period. The inclusion criteria are registered nurses working in surgical settings (standard departments, post-anaesthesia care units, day care surgery or ICUs) who care for patients in the postoperative period. Method: The research is being conducted by questionnaire. It is quantitative research, a comparative study of registered nurses in the Czech Republic and the Kingdom of Saudi Arabia. Questionnaire surveys were distributed electronically through the Bristol online survey tool. Results: The collection of data in the Kingdom of Saudi Arabia has been completed successfully, with 550 respondents, of whom 77 were excluded and 473 were included for statistical data analysis. The outcome of the research is expected to highlight differences in treatment through Patient Controlled Analgesia, with more frequent use in the Kingdom of Saudi Arabia; a similar assumption is made for treatment conducted by epidural analgesia. We predict that opioids will be used more regularly in the Kingdom of Saudi Arabia, whilst therapy with NSAIDs will be the most common approach in the Czech Republic. Discussion/Conclusion: The majority of respondents from the Kingdom of Saudi Arabia were female registered nurses from a multitude of nations. We expect a similar gender split among the Czech Republic respondents; however, there will be a smaller number of nationalities. Relevance for research and practice: Output from the research will assess the knowledge, experience and practice of Patient Controlled Analgesia and epidural analgesia treatment. Acknowledgement: This research was accepted and affiliated to the project: Postoperative pain management, knowledge and experience of registered nurses (Czech Republic and Kingdom of Saudi Arabia) – SGS05/2019-2020.

Keywords: acute postoperative pain, epidural analgesia, nursing care, patient controlled analgesia

Procedia PDF Downloads 155
61 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

Working with a dielectric photonic crystal (PC) structure that does not include surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by the embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken due to the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bi-directional splitting, which are associated with asymmetric transmission, arise due to the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows while preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission remains valid even if the zeroth order is not coupled; this transmission is due to evanescent waves which reach the deeply embedded defect layer and couple to higher order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both the upper and lower surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface, with the outgoing beams having equal intensity for both illumination cases. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures. Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to the permittivity values of GaAs or Si, and of alumina) reveal unidirectional splitting over a wider band of operation in comparison to the bandwidth obtained in the case of a single embedded defect layer. Since the dielectric materials that are utilized are low-loss and weakly dispersive in a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to the mentioned ranges.
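
The splitting described above is attributed to the ±1 diffraction orders of the defect layer, whose period is twice that of the regular PC. As a rough illustration (not the authors' simulation), the grating equation under normal incidence gives the deflection angles of those orders; the lattice period and operating frequency below are assumed values chosen only to make the example run.

```python
# Minimal sketch (not the authors' simulation): grating-equation estimate of the
# deflection angles of the +/-1 diffraction orders of a defect layer with period 2a
# under normal incidence. The values of a and the frequency are illustrative.
import numpy as np

c = 3e8                      # speed of light, m/s
a = 12e-3                    # regular PC lattice period (assumed), m
period = 2 * a               # defect layer has twice the period of the regular PC
freq = 18e9                  # operating frequency (assumed), Hz
wavelength = c / freq

for m in (-1, 1):
    s = m * wavelength / period
    if abs(s) <= 1:          # an order propagates only if |sin(theta_m)| <= 1
        print(f"order {m:+d}: deflected at {np.degrees(np.arcsin(s)):+.1f} degrees")
    else:
        print(f"order {m:+d}: evanescent at this frequency")
```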

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 165
60 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000. Overall, the deep belief net performs best. We also interpret the hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables, characterized by the product categories to which they are related, differ strongly between these three models. To derive managerial implications, we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications, as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research with appropriate extensions. Including predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
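
As a small illustration of the estimation/holdout protocol described above, the sketch below (not the author's models) fits a single restricted Boltzmann machine to a binary basket matrix and scores holdout baskets. Note that scikit-learn's BernoulliRBM reports only a pseudo-likelihood, a proxy for the exact holdout log likelihood used in the paper, and the data here are random stand-ins rather than the grocery data set.

```python
# Minimal sketch (not the author's models): RBM on a binary basket matrix with a
# 50/50 estimation/holdout split, scored by scikit-learn's pseudo-likelihood.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
baskets = (rng.random((9835, 169)) < 0.03).astype(float)   # illustrative stand-in data

train, holdout = train_test_split(baskets, test_size=0.5, random_state=1)

rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=1)
rbm.fit(train)

print(f"mean holdout pseudo-log-likelihood: {rbm.score_samples(holdout).mean():.2f}")
```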

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 172