Search results for: fitted
21 Direct Contact Ultrasound Assisted Drying of Mango Slices
Authors: E. K. Mendez, N. A. Salazar, C. E. Orrego
Abstract:
There is undoubted proof that increasing the intake of fruit lessens the risk of hypertension, coronary heart disease, and stroke, and probable evidence that it lowers the risk of cancer. Proper fruit drying is an excellent alternative for extending shelf life, easing commercialization, and producing ready-to-eat healthy products or ingredients. The conventional way of drying is by hot-air forced convection. However, this process step often requires a very long residence time; furthermore, it is highly energy consuming and detrimental to product quality. Nowadays, the power ultrasound (US) technique is considered an emerging and promising technology for industrial food processing. Most published works on US-assisted food drying have studied the effect of ultrasonic pre-treatment prior to air-drying and the airborne US conditions during dehydration. In this work, a new approach was tested, taking into account drying time and two quality parameters of mango slices dehydrated by convection assisted by 20 kHz power US applied directly, using a holed plate as both product support and sound-transmitting surface. During the drying of mango (Mangifera indica L.) slices (ca. 6.5 g, 0.006 m height and 0.040 m diameter), their weight was recorded every hour until the final moisture content (10.0±1.0% wet basis) was reached. After preliminary tests, optimization of three drying parameters - sonication frequency (2, 5 and 8 minutes each half-hour), air temperature (50-55-60°C) and power (45-70-95 W) - was attempted using a Box-Behnken design under the response surface methodology, with drying time, color parameters and rehydration rate of dried samples as responses. Assays involved 17 experiments, including a quintuplicate of the central point. Dried samples with and without US application were packed in individual high-barrier plastic bags under vacuum and then stored in the dark at 8°C until their analysis. All drying assays and sample analyses were performed in triplicate.
US drying experimental data were fitted with nine models, among which the Verma model gave the best fit, with R² > 0.9999 and reduced χ² ≤ 0.000001. Significant reductions in drying time were observed for the assays that used lower sonication frequency and high US power. At 55°C, 95 W and 2 min/30 min of sonication, 10% moisture content was reached in 211 min, compared with 320 min for the same test without the use of US (blank). Rehydration rates (RR), defined as the ratio of rehydrated sample weight to dry sample weight, were also larger than those of the blanks; in general, the higher the US power, the greater the RR. The direct-contact, intermittent US treatment of mango slices used in this work improves drying rates and the rehydration ability of the dried fruit. This technique can thus be used to reduce energy processing costs and the greenhouse gas emissions of fruit dehydration.
Keywords: ultrasonic assisted drying, fruit drying, mango slices, contact ultrasonic drying
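As an illustration of the model-fitting step described above, the sketch below fits the Verma thin-layer drying model, MR(t) = a·exp(-kt) + (1-a)·exp(-gt), to moisture-ratio data using SciPy. The time points, moisture ratios, and initial guesses are invented for demonstration and are not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Verma et al. thin-layer drying model: MR(t) = a*exp(-k*t) + (1-a)*exp(-g*t)
def verma(t, a, k, g):
    return a * np.exp(-k * t) + (1 - a) * np.exp(-g * t)

# Hypothetical moisture-ratio observations from hourly weighings (illustrative only)
t = np.array([0, 60, 120, 180, 240, 320], dtype=float)   # time, min
mr = np.array([1.00, 0.62, 0.38, 0.23, 0.14, 0.09])      # dimensionless moisture ratio

params, _ = curve_fit(verma, t, mr, p0=[0.5, 0.01, 0.005], maxfev=10000)
mr_hat = verma(t, *params)

# Goodness-of-fit statistics of the kind reported in the abstract
ss_res = np.sum((mr - mr_hat) ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                       # coefficient of determination
red_chi2 = ss_res / (len(t) - len(params))     # reduced chi-square
```

The same R² and reduced χ² statistics can then be compared across all nine candidate models to select the best fit.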
Procedia PDF Downloads 345
20 Pedagogy of the Oppressed: Fifty Years Later. Implications for Policy and Reforms
Authors: Mohammad Ibrahim Alladin
Abstract:
The Pedagogy of the Oppressed by Paulo Freire was first published in 1970 (New York: Herder and Herder). Since its publication, it has become one of the most cited books in the social sciences, and over a million copies have been sold worldwide. The book has caused a “revolution” in the education world, and its theory has been examined and analysed. It has influenced educational policy, curriculum development and teacher education. The revolution started half a century ago. Pedagogy of the Oppressed develops a theory of education fitted to the needs of the disenfranchised and marginalized members of capitalist societies. Combining educational and political philosophy, the book offers an analysis of oppression and a theory of liberation. Freire believes that traditional education serves to support the dominance of the powerful within society and thereby maintain the powerful’s social, political, and economic status quo. To overcome the oppression endemic to an exploitative society, education must be remade to inspire and enable the oppressed in their struggle for liberation. This new approach to education focuses on consciousness-raising, dialogue, and collaboration between teacher and student in the effort to achieve greater humanization for all. For Freire, education is political and functions either to preserve the current social order or to transform it. The theories of education and revolutionary action he offers in Pedagogy of the Oppressed are addressed to educators committed to the struggle for liberation from oppression. Freire’s own commitment to this struggle developed through years of teaching literacy to Brazilian and Chilean peasants and laborers. His efforts at educational and political reform resulted in a brief period of imprisonment followed by exile from his native Brazil for fifteen years.
Pedagogy of the Oppressed opens with Freire asserting the importance of consciousness-raising, or conscientização, as the means of enabling the oppressed to recognize their oppression and commit to the effort to overcome it, taking full responsibility for themselves in the struggle for liberation. He addresses the “fear of freedom,” which inhibits the oppressed from assuming this responsibility. He also cautions against the dangers of sectarianism, which can undermine the revolutionary purpose as well as serve as a refuge for the committed conservative. Freire provides an alternative view of education by attacking traditional education and knowledge. He is highly critical of how knowledge is imparted and of how it is structured in ways that limit the learner’s thinking. Hence, education becomes oppressive, and the school functions as an institution of social control. Since the book's publication, education has gone through a series of reforms and, in some areas, total transformation. This paper addresses the following: the role of education in social transformation; the teacher/learner relationship; critical thinking. The paper essentially examines what has happened in the fifty years since Freire’s book. It seeks to explain what happened to Freire’s education revolution and what the status is of the movement that started almost fifty years ago.
Keywords: pedagogy, reform, curriculum, teacher education
Procedia PDF Downloads 93
19 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications
Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky
Abstract:
InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using molecular beam epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, allowing the scintillator film to be separated and transferred to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the required matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, photoluminescence (PL) was measured over temperatures from 77 to 450 K and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened, with a FWHM of 28 meV (33 nm), and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor
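The measured attenuation of about 3 cm⁻¹ translates into an exponential fall-off of guided light with propagation distance. The following minimal sketch applies the Beer-Lambert relation with that coefficient; the propagation distances are illustrative, not taken from the study.

```python
import math

# Attenuation coefficient reported in the abstract
alpha = 3.0  # cm^-1

# Beer-Lambert fall-off of guided scintillation light: I(x) = I0 * exp(-alpha * x)
def transmitted_fraction(x_cm, alpha=alpha):
    return math.exp(-alpha * x_cm)

# Fraction of light surviving after illustrative propagation distances
f_1mm = transmitted_fraction(0.1)  # after 1 mm
f_5mm = transmitted_fraction(0.5)  # after 5 mm
```

Such a relation lets one estimate how far from the photodiode a scintillation event can occur before the guided signal drops below the detection threshold.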
Procedia PDF Downloads 255
18 Evaluation of Different Inoculation Methods of Entomopathogenic Fungi on Their Endophytism and Pathogenicity against Chilo partellus (Swinhoe)
Authors: Mubashar Iqbal, Iqra Anjum, Muhammad Dildar Gogi, Muhammad Jalal Arif
Abstract:
The present study was carried out to screen the effective entomopathogenic fungi (EPF) inoculation method in maize and to evaluate pathogenicity and oviposition choice in C. partellus. Three EPF formulations, Pacer® (Metarhizium anisopliae), Racer® (Beauveria bassiana) and Meailkil® (Verticillium lecanii), were evaluated at three concentrations (5000, 10000 and 20000 ppm) for their endophytism in maize and pathogenicity in C. partellus. A stock solution of the highest concentration (20,000 ppm) was prepared, and the lower concentrations were obtained by dilution from the stock solution. In the first experiment, the three EPF were inoculated into maize plants by four methods, i.e., leaf inoculation (LI), whorl inoculation (WI), shoot inoculation (SI) and root inoculation (RI). Leaf discs and stem cuttings were sampled for all four inoculation methods and placed on fungal growth media in Petri dishes. In the second experiment, pathogenicity, pupal formation, adult emergence, sex ratio, oviposition choice, and growth index of C. partellus were calculated. The leaves and stem of the inoculated plants were offered to a counted number of C. partellus larvae. Larval mortality was recorded on a daily basis until pupation. The results show that the maximum percent mortality (86.67%) was recorded at the high concentration (20000 ppm) of Beauveria bassiana with the leaf inoculation method. For the oviposition-choice bioassay, newly emerged adults were fed on a diet (water, honey and yeast in 9:1:1) for 48 hours. Pairs of C. partellus were aspirated from the rearing cages and detained in large test tubes plugged with diet-soaked cotton. A set of four plants for each treatment was prepared and randomized inside a large oviposition chamber. The test tubes were opened and fitted into holes made in the wall of the oviposition chamber in front of each treatment. The oviposition chamber was placed in a completely dark laboratory to eliminate the effect of light on the moths’ behavior.
The plants were removed from the oviposition chamber after the death of the adults, and the number of eggs deposited on each plant was counted. The results of the second experiment revealed that, for all EPF and inoculation methods, the fecundity, egg fertility and growth index of C. partellus decreased with increasing concentration, being significantly higher at the low concentration (5000 ppm) and lower at the high concentration (20000 ppm). Application of B. bassiana gave the minimum fecundity (126.83), egg fertility (119.52) and growth index (15%) in C. partellus, followed by M. anisopliae with fecundity of 135.93, egg fertility of 132.29 and growth index of 17.50%, while V. lecanii showed higher values of fecundity (137.37), egg fertility (135.42) and growth index (20%). Overall, the leaf inoculation method showed the least fecundity (123.89), egg fertility (115.36) and growth index (14%), followed by the whorl and shoot inoculation methods, while the root inoculation method showed the highest values of fecundity, egg fertility and growth index.
Keywords: Beauveria bassiana, Chilo partellus, entomopathogenic, Metarhizium anisopliae, Verticillium lecanii
Procedia PDF Downloads 138
17 Co-Movement between Financial Assets: An Empirical Study on Effects of the Depreciation of Yen on Asia Markets
Authors: Yih-Wenn Laih
Abstract:
In recent times, the dependence and co-movement among international financial markets have become stronger than in the past, as evidenced by commentaries in the news media and the financial sections of newspapers. Studying the co-movement between returns in financial markets is an important issue for portfolio management and risk management. Understanding co-movement helps investors to identify opportunities for international portfolio management in terms of asset allocation and pricing. Since the election of the new Prime Minister, Shinzo Abe, in November 2012, the yen has weakened against the US dollar from the 80 to the 120 level. The policies, known as “Abenomics,” are intended to encourage private investment through a more aggressive mix of monetary and fiscal policy. Given the close economic relations and competition among Asian markets, it is interesting to discover the co-movement relations, affected by the depreciation of the yen, between the stock market of Japan and five major Asian stock markets: China, Hong Kong, Korea, Singapore, and Taiwan. Specifically, we devote ourselves to measuring the co-movement of stock markets between Japan and each of the five Asian stock markets in terms of rank correlation coefficients. To compute the coefficients, the return series of each stock market is first fitted by a skewed-t GARCH (generalized autoregressive conditional heteroscedasticity) model. Secondly, to measure the dependence structure between matched stock markets, we employ the symmetrized Joe-Clayton (SJC) copula to calculate the probability density function of paired skewed-t distributions. The joint probability density function is then utilized as the scoring scheme to optimize the sequence alignment by a dynamic programming method. Finally, we compute the rank correlation coefficients (Kendall's τ and Spearman's ρ) between matched stock markets based on their aligned sequences. We collect empirical data for six stock indexes from the Taiwan Economic Journal.
The data are sampled at a daily frequency covering the period from January 1, 2013 to July 31, 2015. The empirical distributions of returns indicate fatter tails than the normal distribution; therefore, the skewed-t distribution and SJC copula are appropriate for characterizing the data. According to the computed Kendall’s τ, Korea has the strongest co-movement relation with Japan, followed by Taiwan, China, and Singapore; the weakest is Hong Kong. On the other hand, Spearman’s ρ reveals that the strength of co-movement with Japan, in decreasing order, is Korea, China, Taiwan, Singapore, and Hong Kong. We explore the effects of “Abenomics” on Asian stock markets by measuring the co-movement relation between Japan and five major Asian stock markets in terms of rank correlation coefficients. The matched markets are aligned by a hybrid method consisting of GARCH, copula and sequence alignment. Empirical experiments indicate that Korea has the strongest co-movement relation with Japan; China and Taiwan show stronger co-movement than Singapore, and the Hong Kong market has the weakest co-movement relation with Japan.
Keywords: co-movement, depreciation of Yen, rank correlation, stock market
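A minimal sketch of the final rank-correlation step, using synthetic return series in place of the actual index data; the dependence strengths below are invented for illustration of how Kendall's τ and Spearman's ρ separate strongly and weakly co-moving markets.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

rng = np.random.default_rng(0)

# Hypothetical daily return series for three markets (illustrative only)
japan = rng.normal(0, 0.01, 500)
korea = 0.7 * japan + rng.normal(0, 0.007, 500)  # strongly dependent on Japan
hongkong = rng.normal(0, 0.01, 500)              # independent of Japan

# Rank correlation coefficients between matched series
tau_jk, _ = kendalltau(japan, korea)
rho_jk, _ = spearmanr(japan, korea)
tau_jh, _ = kendalltau(japan, hongkong)

# Stronger co-movement yields larger rank correlation with Japan
```

In the study itself these coefficients are computed on sequences aligned via the GARCH-copula scoring scheme rather than on raw returns.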
Procedia PDF Downloads 231
16 Rotational and Linear Accelerations of an Anthropometric Test Dummy Head from Taekwondo Kicks among Amateur Practitioners
Authors: Gabriel P. Fife, Saeyong Lee, David M. O'Sullivan
Abstract:
Introduction: Although investigations into injury characteristics are well represented in the literature, few have investigated the biomechanical characteristics associated with head impacts in Taekwondo. Therefore, the purpose of this study was to identify the kinematic characteristics of head impacts due to taekwondo kicks among non-elite practitioners. Participants: Male participants (n = 11, 175 ± 5.3 cm, 71 ± 8.3 kg) with 7.5 ± 3.6 years of taekwondo training volunteered for this study. Methods: Participants were asked to perform five repetitions of each technique (i.e., turning kick, spinning hook kick, spinning back kick, front axe kick, and clench axe kick) aimed at the Hybrid III head with their dominant kicking leg. All participants wore a protective foot pad (thickness = 12 mm) that is commonly used in competition and training. To simulate head impact in taekwondo, the target consisted of a Hybrid III 50th Percentile Crash Test Dummy (Hybrid III) head (mass = 5.1 kg) and neck (fitted with taekwondo headgear) secured to an aluminum support frame and positioned at each athlete’s standing height. The Hybrid III head form was instrumented with a 500 g tri-axial accelerometer (PCB Piezotronics) mounted at the head center of gravity to obtain resultant linear accelerations (RLA). Rotational accelerations (ROTA) were collected using three angular rate sensors mounted orthogonally to each other (Diversified Technical Systems ARS-12 K Angular Rate Sensor). The accelerometers were interfaced via a 3-channel, battery-powered integrated circuit piezoelectric sensor signal conditioner (PCB Piezotronics) and connected to a desktop computer for analysis. Acceleration data were captured using LabVIEW Signal Express and processed in accordance with SAE J211-1 channel frequency class 1000. Head injury criterion (HIC) values were calculated using the VSRSoftware.
A one-way analysis of variance was used to determine differences between kicks, while the Tukey HSD test was employed for pairwise comparisons. The level of significance was set to an effect size of 0.20. All statistical analyses were done using R 3.1.0. Results: A statistically significant difference was observed in RLA (p = 0.00075); however, these differences were not clinically meaningful (η² = 0.04, 95% CI: -0.94 to 1.03). No differences were identified in ROTA (p = 0.734, η² = 0.0004, 95% CI: -0.98 to 0.98). A statistically significant difference (p < 0.001) between kicks in HIC was observed, with a medium effect (η² = 0.08, 95% CI: -0.98 to 1.07); however, the confidence interval of this difference indicates uncertainty. The Tukey HSD test identified differences (p < 0.001) between kicking techniques in RLA and HIC. Conclusion: This study observed head impact levels comparable to previous studies of similar objectives and methodology. These data are important, as the impact measures from this study may be more representative of the impact levels experienced by non-elite competitors. Although the clench axe kick elicited a lower RLA, the ROTA of this technique was higher than the levels from the other techniques (although the differences were not large in terms of effect sizes). As the axe kick has been reported to cause severe head injury, future studies may consider further study of this kick important.
Keywords: Taekwondo, head injury, biomechanics, kicking
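The one-way ANOVA and eta-squared effect-size computation described above can be sketched as follows, using invented acceleration data rather than the study's measurements. SciPy's f_oneway supplies the omnibus test; the Tukey HSD pairwise step (done in R in the study) is omitted here.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical resultant linear accelerations (g) for three kick types, 5 trials each
turning = rng.normal(20, 3, 5)
hook = rng.normal(35, 3, 5)
axe = rng.normal(15, 3, 5)

# Omnibus one-way ANOVA across kick types
f_stat, p_value = f_oneway(turning, hook, axe)

# Eta-squared effect size: SS_between / SS_total
groups = [turning, hook, axe]
grand = np.concatenate(groups).mean()
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = sum(((g - grand) ** 2).sum() for g in groups)
eta_sq = ss_between / ss_total
```

Reporting η² alongside the p-value, as the abstract does, separates statistical significance from the practical size of the between-kick differences.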
Procedia PDF Downloads 26
15 Influence of Ride Control Systems on the Motions Response and Passenger Comfort of High-Speed Catamarans in Irregular Waves
Authors: Ehsan Javanmardemamgheisi, Javad Mehr, Jason Ali-Lavroff, Damien Holloway, Michael Davis
Abstract:
During the last decades, a growing interest in faster and more efficient waterborne transportation has led to the development of high-speed vessels for both commercial and military applications. To satisfy this global demand, a wide variety of high-speed craft arrangements have been proposed by designers. Among them, high-speed catamarans have proven to be a suitable Roll-on/Roll-off configuration for carrying passengers and cargo due to their widely spaced demihulls, wide deck zone, and high ratio of deadweight to displacement. To improve passenger comfort and crew workability and to enhance the operability and performance of high-speed catamarans, mitigating the severity of motions and structural loads using Ride Control Systems (RCS) is essential. In this paper, a set of towing tank tests was conducted on a 2.5 m scaled model of a 112 m Incat Tasmania high-speed catamaran in irregular head seas to investigate the effect of different ride control algorithms, including linear and nonlinear versions of heave control, pitch control, and local control, on the motion responses and passenger comfort of the full-scale ship. The RCS included a centre-bow-fitted T-Foil and two transom-mounted stern tabs. All the experiments were conducted at the Australian Maritime College (AMC) towing tank at a model speed of 2.89 m/s (37 knots full scale), a modal period of 1.5 s (10 s full scale) and two significant wave heights of 60 mm and 90 mm, representing full-scale wave heights of 2.7 m and 4 m, respectively. Spectral analyses were performed using Welch’s power spectral density method on the vertical motion time records of the catamaran model to calculate heave and pitch Response Amplitude Operators (RAOs).
Then, noting that passenger discomfort arises from vertical accelerations, and that the vertical accelerations vary at different longitudinal locations within the passenger cabin due to variations in the amplitude and relative phase of the pitch and heave motions, the vertical accelerations were calculated at three longitudinal locations (LCG, T-Foil, and stern tabs). Finally, frequency-weighted Root Mean Square (RMS) vertical accelerations were calculated to estimate the Motion Sickness Dose Value (MSDV) of the ship based on ISO 2631 recommendations. It was demonstrated that in small seas, implementing a nonlinear pitch control algorithm reduces the peak pitch motions by 41%, the vertical accelerations at the forward location by 46%, and motion sickness at the forward position by around 20%, which provides great potential for further improvement in the passenger comfort, crew workability, and operability of high-speed catamarans.
Keywords: high-speed catamarans, ride control system, response amplitude operators, vertical accelerations, motion sickness, irregular waves, towing tank tests
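A simplified sketch of the spectral and motion-sickness calculations: Welch's method estimates the power spectral density of a simulated vertical-acceleration record, and the MSDV follows from the RMS acceleration as MSDV = RMS·√T. The signal, sampling rate, and the omission of the ISO 2631 frequency weighting are simplifying assumptions, not the study's data.

```python
import numpy as np
from scipy.signal import welch

fs = 200.0                    # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)  # one minute of simulated vertical acceleration

# Hypothetical record: a 0.1 Hz wave-encounter component plus measurement noise
a = 0.8 * np.sin(2 * np.pi * 0.1 * t) \
    + 0.05 * np.random.default_rng(2).normal(size=t.size)

# Power spectral density via Welch's method (as used for the RAO analysis)
f, psd = welch(a, fs=fs, nperseg=4096)

# RMS acceleration and Motion Sickness Dose Value: MSDV = rms * sqrt(T)
# (the ISO 2631 frequency weighting of a(t) is omitted here for brevity)
rms = np.sqrt(np.mean(a ** 2))
msdv = rms * np.sqrt(t[-1])
```

The spectral peak recovered by Welch's method sits at the wave-encounter frequency, which is the quantity the RAO calculation normalises against the wave spectrum.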
Procedia PDF Downloads 82
14 Navigating States of Emergency: A Preliminary Comparison of Online Public Reaction to COVID-19 and Monkeypox on Twitter
Authors: Antonia Egli, Theo Lynn, Pierangelo Rosati, Gary Sinclair
Abstract:
The World Health Organization (WHO) defines vaccine hesitancy as the postponement or complete refusal of vaccines and estimates a direct linkage to approximately 1.5 million avoidable deaths annually. This figure is not immune to public health developments, as has become evident since the global spread of COVID-19 from Wuhan, China in early 2020. Since then, the proliferation of influential, but oftentimes inaccurate, outdated, incomplete, or false vaccine-related information on social media has impacted hesitancy levels to a degree described by the WHO as an infodemic. The COVID-19 pandemic and related vaccine hesitancy levels resulted in 2022 in the largest drop in childhood vaccinations of the 21st century, while the prevalence of online stigma towards vaccine-hesitant consumers continues to grow. Simultaneously, a second disease has risen to global importance: Monkeypox is an infection originating from west and central Africa and, due to racially motivated online hate, was in August 2022 set to be renamed by the WHO. To better understand public reactions towards two viral infections that became global threats to public health less than three years apart, this research examines user replies to threads published by the WHO on Twitter. Replies to two Tweets from the @WHO account declaring COVID-19 and Monkeypox ‘public health emergencies of international concern’ on January 30, 2020, and July 23, 2022, are gathered using the Twitter application programming interface and the user mention timeline endpoint. The research methodology is unique in its analysis of stigmatizing, racist, and hateful content shared on social media within the vaccine discourse over the course of two disease outbreaks. Three distinct analyses are conducted to provide insight into (i) the most prevalent topics and sub-topics among user reactions, (ii) changes in sentiment towards the spread of the two diseases, and (iii) the presence of stigma, racism, and online hate.
Findings indicate an increase in hesitancy to accept further vaccines and social distancing measures, the presence of stigmatizing content aimed primarily at anti-vaccine cohorts and of racially motivated abusive messages, and a prevalent fatigue towards disease-related news overall. This research provides value to non-profit organizations and government agencies associated with vaccines and vaccination programs by emphasizing the need for public health communication fitted to consumers' vaccine sentiments, levels of health information literacy, and degrees of trust towards public health institutions. Considering the importance of addressing fears among the vaccine hesitant, the findings also illustrate the risk of alienation through stigmatization, point future research towards the relatively underexamined field of online, vaccine-related stigma, and inform discussion of the potential effects of stigma on vaccine-hesitant Twitter users' decisions to vaccinate.
Keywords: social marketing, social media, public health communication, vaccines
Procedia PDF Downloads 98
13 Inferring Influenza Epidemics in the Presence of Stratified Immunity
Authors: Hsiang-Yu Yuan, Marc Baguelin, Kin O. Kwok, Nimalan Arinaminpathy, Edwin Leeuwen, Steven Riley
Abstract:
Traditional syndromic surveillance for influenza has substantial public health value in characterizing epidemics. Because the relationship between syndromic incidence and true infection events can vary from one population to another and from one year to another, recent studies combine serological test results with syndromic data from traditional surveillance in epidemic models to make inferences about the epidemiological processes of influenza. However, despite the widespread availability of serological data, epidemic models have thus far not explicitly represented antibody titre levels and their correspondence with immunity. Most studies use dichotomized data with a threshold (typically a titre of 1:40) to classify individuals as likely recently infected or likely immune and then estimate the cumulative incidence; this dichotomization can lead to underestimation of the influenza attack rate. In order to improve the use of serosurveillance data, a refinement of the concept of stratified immunity within an epidemic model for influenza transmission is proposed here, such that all individual antibody titre levels are enumerated explicitly and mapped onto a variable scale of susceptibility in different age groups. Haemagglutination inhibition titres were collected from 523 individuals and 465 individuals during the pre- and post-pandemic phases, respectively, of the 2009 pandemic in Hong Kong. The model was fitted to the serological data in an age-structured population using a Bayesian framework and was able to reproduce key features of the epidemic. The effects of age-specific antibody boosting and protection were explored in greater detail. RB was defined to be the effective reproductive number in the presence of stratified immunity, and its temporal dynamics were compared to those of a traditional epidemic model using dichotomized seropositivity data.
The Deviance Information Criterion (DIC) was used to measure the fit of the model to the serological data under different mechanisms of serological response. The results demonstrated that a differential antibody response with age was present (ΔDIC = -7.0). Age-specific mixing patterns with child-specific transmissibility, rather than pre-existing immunity, were most likely to explain the high serological attack rates in children and the low serological attack rates in the elderly (ΔDIC = -38.5). Our results suggest that the disease dynamics and herd immunity of a population can be described more accurately for influenza when the distribution of immunity is explicitly represented, rather than relying only on the dichotomous states 'susceptible' and 'immune' defined by the threshold titre of 1:40 (ΔDIC = -11.5). During the outbreak, RB declined slowly from 1.22 [1.16-1.28] over the first four months after 1 May. RB then dropped rapidly below 1 during September and October, consistent with the observed epidemic peak in late September. One of the most important challenges for infectious disease control is to monitor disease transmissibility in real time with statistics such as the effective reproduction number. Once early estimates of antibody boosting and protection are obtained, the disease dynamics can be reconstructed, which is valuable for infectious disease prevention and control.
Keywords: effective reproductive number, epidemic model, influenza epidemic dynamics, stratified immunity
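The DIC comparisons reported above follow the standard construction DIC = D(θ̄) + 2p_D, where p_D (the effective number of parameters) is the posterior mean deviance minus the deviance at the posterior mean. A minimal sketch, with log-likelihood values invented for illustration:

```python
import numpy as np

def dic(loglik_draws, loglik_at_mean):
    """Deviance Information Criterion from MCMC output.

    loglik_draws   : log-likelihood evaluated at each posterior draw
    loglik_at_mean : log-likelihood evaluated at the posterior mean parameters
    """
    d_bar = -2 * np.mean(loglik_draws)  # posterior mean deviance
    d_hat = -2 * loglik_at_mean         # deviance at posterior mean
    p_d = d_bar - d_hat                 # effective number of parameters
    return d_hat + 2 * p_d              # equivalently d_bar + p_d

# Illustrative comparison of two candidate models: lower DIC is preferred,
# so a negative delta-DIC favours model B over model A
dic_a = dic(np.array([-100.2, -101.0, -99.8]), -99.5)
dic_b = dic(np.array([-95.1, -96.0, -94.7]), -94.2)
delta_dic = dic_b - dic_a
```

The ΔDIC values quoted in the abstract are differences of exactly this kind between competing serological-response mechanisms.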
Procedia PDF Downloads 260
12 Multi-Plane Wrist Movement: Pathomechanics and Design of a 3D-Printed Splint
Authors: Sigal Portnoy, Yael Kaufman-Cohen, Yafa Levanon
Abstract:
Introduction: Rehabilitation following wrist fractures often includes exercising flexion-extension movements with a dynamic splint. However, during daily activities, most wrist movements combine flexion-extension with radial and ulnar deviations. Moreover, the multi-plane wrist motion named the ‘dart throw motion’ (DTM) was found to be a more stable motion in healthy individuals, in terms of the motion of the proximal carpal bones, compared with sagittal wrist motion. The aim of this study was therefore to explore the pathomechanics of the wrist in a common multi-plane movement pattern (DTM) and to design a novel splint for rehabilitation following distal radius fractures. Methods: First, a multi-axis electrogoniometer was used to quantify the plane angle of motion of the dominant and non-dominant wrists during various activities, e.g. drinking from a glass of water and answering a phone, in 43 healthy individuals. The following protocols were then implemented with a population following distal radius fracture. Two dynamic scans were performed bilaterally in a 3T magnetic resonance imaging (MRI) device, one of sagittal wrist motion and one of DTM. The scaphoid and lunate carpal bones, as well as the surface of the distal radius, were manually segmented in SolidWorks, and the angles of motion of the scaphoid and lunate bones were calculated. Subsequently, a patient-specific splint was designed using 3D scans of the hand. The brace design comprises a proximal attachment to the arm and a distal envelope of the palm. An axle with two wheels is attached to the proximal part. Two wires attach the proximal part to the medial-palmar and lateral-ventral aspects of the distal part: when the wrist extends, the first wire is released and the second wire is strained towards the radius; the opposite occurs when the wrist flexes. The splint was attached to the wrist using Velcro and constrained the wrist movement to the desired multi-plane of motion.
Results: No significant differences were found between the multi-plane angles of the dominant and non-dominant wrists. The most common daily activities occurred at a plane angle of approximately 20° to 45° from the sagittal plane, and the MRI studies show individual angles of the plane of motion. The printed splint fitted the wrists of the subjects and constrained movement to the desired multi-plane of motion. Hooks were inserted on each part to allow the addition of springs or rubber bands for resistance training towards muscle strengthening in the rehabilitation setting. Conclusions: It has been hypothesized that activation of the wrist in a multi-plane movement pattern following distal radius fractures will accelerate the recovery of the patient. Our results show that this motion can be determined from either the dominant or the non-dominant wrist. The design of the patient-specific dynamic splint is the first step towards assessing whether splinting to induce combined movement is beneficial to the rehabilitation process, compared to conventional treatment. The evaluation of the clinical benefits of this method, compared to conventional rehabilitation methods following wrist fracture, is part of a PhD project currently conducted by an occupational therapist.
Keywords: distal radius fracture, rehabilitation, dynamic magnetic resonance imaging, dart throw motion
Procedia PDF Downloads 299
11 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluating the goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they reuse the existing MCMC results, avoiding the expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. 
However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
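The importance-weighting step described above can be sketched in a few lines. The following Python/NumPy sketch is an illustrative stand-in, not the study's code: it computes IS-LOO (where the weighted average of predictive densities under reciprocal weights reduces to a harmonic mean) and a truncated variant in which raw weights are capped, here at the square root of the number of draws times the mean weight, one common truncation rule.

```python
import numpy as np

def is_loo(log_lik):
    """IS-LOO from an (S, N) matrix of pointwise log-likelihoods.

    Raw importance weights are the reciprocals of the predictive
    densities, so the weighted average of densities reduces to a
    harmonic mean: elpd_i = log S - logsumexp(-log_lik[:, i])."""
    S = log_lik.shape[0]
    neg = -log_lik
    m = neg.max(axis=0)  # stabilise the log-sum-exp
    return np.log(S) - (m + np.log(np.exp(neg - m).sum(axis=0)))

def tis_loo(log_lik):
    """Truncated variant: large raw weights are capped at sqrt(S) * mean(w)."""
    S = log_lik.shape[0]
    w = np.exp(-log_lik - (-log_lik).max(axis=0))   # stabilised raw weights
    w = np.minimum(w, np.sqrt(S) * w.mean(axis=0))  # truncate the largest weights
    # weighted average of predictive densities under the truncated weights
    return np.log((w * np.exp(log_lik)).sum(axis=0)) - np.log(w.sum(axis=0))
```

With identical posterior draws both estimators simply recover the common pointwise log-density; PSIS would additionally replace the largest weights by fitted Pareto quantiles, which is omitted here.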
Procedia PDF Downloads 392
10 Establishing Ministerial Social Media Handles for Public Grievances Redressal and Reciprocation System
Authors: Ashish Kumar Dwivedi
Abstract:
Uttar Pradesh, the largest state of the Indian federal system, has a population of over two hundred million and huge cultural, economic and religious diversity. The newly elected state leadership of Uttar Pradesh, 18 months in office, has envisaged and initiated various proactive strides for public grievance redressal and inclusive development schemes for all sections of the population from the very day Hon'ble Chief Minister Shri Yogi Adityanath assumed office. These initiatives include departmental responses via social media handles such as Twitter, Facebook pages, and web interaction. In the same course, every department of the state government has been guided on the correct usage of its separately verified social media handle, in coordination with other departments. These guidelines included creating new WhatsApp groups to connect technocrats and politicians on a common platform. The Minister for the Department of Infrastructure and Industrial Development, Shri Satish Mahana, a very popular leader and intuitive statesman, has thousands of followers on social media, and his accounts receive almost three hundred individually mentioned notifications from various parts of Uttar Pradesh. These notifications primarily concern problems related to livelihood and grievances concerning the department. To address these communications, a body of five experts has been set up, which actively responds at various levels and increases bureaucratic engagement with marginalized sections of society. Against this background, this research was conducted to analyze, categorize and derive effective implementation of public policies via social media platforms. This act of responsiveness has brought a positive change in the population's perception of the government, which was missing earlier. 
The Department of Industrial Development is also working to attract investors, as the state aims to become India's first trillion-dollar state economy; accordingly, the department organized two major successful events in the last year. These events were also promoted on social media platforms to update the 2.5 million residents of the state who actively use social media in many ways. To analyze this change scientifically, big data were collected from October 2017 to September 2018 from the departmental social media handles, namely Twitter, Facebook, and email. A statistical study was conducted on these data to analyze sentiments and expectations, specific and common requirements of communities, and the nature of grievances and their effective resolution within government policies. A control sample was also taken from the previous government's activities to analyze the change. The statistical study used tools such as correlation analysis and principal component analysis. This research communication also discusses the modus operandi of grievance redressal, the proliferation of government policies, connections to their beneficiaries, and the quick response procedure.
Keywords: correlation study, principal component analysis, bureaucratic engagements, social media
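The principal component analysis step mentioned above can be illustrated in a few lines. The sketch below (Python/NumPy, on synthetic data, not the departmental dataset) performs PCA on the correlation matrix, the usual choice when variables such as sentiment scores and grievance counts are on different scales.

```python
import numpy as np

def pca_on_correlation(X):
    """Principal components from the correlation matrix of X (n x p).

    Standardising each column first makes the covariance of Z equal to
    the correlation matrix of X, so PCA weights all variables equally
    regardless of their original measurement scales."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = (Z.T @ Z) / len(X)
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]          # largest variance first
    explained = eigval[order] / eigval.sum()  # fraction of variance per component
    return explained, eigvec[:, order]
```

When two columns are strongly correlated, the first component captures their shared variation and its explained-variance fraction dominates; the loadings (columns of the returned eigenvector matrix) indicate which variables move together.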
Procedia PDF Downloads 125
9 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake
Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna
Abstract:
With increased production costs, there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) on average daily gain (ADG) and mid-test metabolic body weight (BW). The animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were then randomly distributed into individual pens, where they were given excess feed twice daily for 90 d, resulting in 5 to 10% orts, on a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used for the calculation of EHP. Electrodes were fitted to the bulls with stretch belts (POLAR RS400; Kempele, Finland). To calculate oxygen pulse (O2P), oxygen consumption was measured using a facemask connected to a gas analyzer (EXHALYZER, ECOMedics, Zurich, Switzerland) while HR was simultaneously recorded for a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained over the 4 days, assuming 4.89 kcal/L of O2, and daily EHP was expressed in kilocalories/day/kilogram metabolic BW (kcal/day/kg BW^0.75). Blood samples were collected between days 45 and 90 of the trial period to measure hemoglobin concentration and hematocrit. 
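The EHP arithmetic described above (oxygen pulse times daily beats, converted to energy at 4.89 kcal per litre of O2 and scaled by metabolic body weight) can be sketched as follows. This is an illustrative reconstruction of the stated calculation; the function name and example values are the author's assumptions, not taken from the study.

```python
def estimated_heat_production(o2_pulse_ml, mean_hr_bpm, body_weight_kg):
    """Estimated heat production in kcal/day/kg BW^0.75.

    o2_pulse_ml: oxygen consumed per heartbeat (mL O2/beat)
    mean_hr_bpm: mean heart rate over the monitoring period (beats/min)
    Uses the abstract's energy equivalent of 4.89 kcal per litre of O2
    and metabolic body weight BW^0.75 as the scaling base."""
    daily_beats = mean_hr_bpm * 60 * 24            # beats per day
    o2_l_per_day = o2_pulse_ml * daily_beats / 1000.0
    return 4.89 * o2_l_per_day / body_weight_kg ** 0.75
```

Note that EHP scales linearly with both oxygen pulse and mean heart rate, which is why the 4-day averaged HR is the key measured input.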
The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle) and portions of visceral fat (kidney, pelvic and inguinal fat) were collected. Samples of liver, muscle and adipose tissues were used to quantify mtDNA copy number per cell. The number of mtDNA copies was determined by normalization of the mtDNA amount against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare mtDNA quantification considering the main effect of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kg BW^0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10) and hematocrit percentage (39.3 vs. 43.6%; P < 0.05) in low compared to high RFI bulls, respectively, which may be useful traits to identify efficient animals. However, no differences were observed in the mtDNA content of liver, muscle and adipose tissue between Nellore bulls with high and low RFI.
Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria
Procedia PDF Downloads 246
8 Synchrotron Based Techniques for the Characterization of Chemical Vapour Deposition Overgrowth Diamond Layers on High Pressure, High Temperature Substrates
Authors: T. N. Tran Thi, J. Morse, C. Detlefs, P. K. Cook, C. Yıldırım, A. C. Jakobsen, T. Zhou, J. Hartwig, V. Zurbig, D. Caliste, B. Fernandez, D. Eon, O. Loto, M. L. Hicks, A. Pakpour-Tabrizi, J. Baruchel
Abstract:
The ability to grow boron-doped diamond epilayers of high crystalline quality is a prerequisite for the fabrication of diamond power electronic devices, in particular high voltage diodes and metal-oxide-semiconductor (MOS) transistors. Boron-doped and intrinsic diamond layers are homoepitaxially overgrown by microwave-assisted chemical vapour deposition (MWCVD) on single crystal high pressure, high temperature (HPHT) grown bulk diamond substrates. Various epilayer thicknesses were grown, with dopant concentrations ranging from 10²¹ atom/cm³ at nanometre thickness in the case of 'delta doping', up to 10¹⁶ atom/cm³ and 50 µm thickness for high electric field drift regions. The crystalline quality of these overgrown layers as regards defects, strain, distortion, etc. is critical for device performance through its relation to the final electrical properties (Hall mobility, breakdown voltage...). In addition to the optimization of the epilayer growth conditions in the MWCVD reactor, other important questions related to the crystalline quality of the overgrown layer(s) are: 1) what is the dependence on the bulk quality and surface preparation methods of the HPHT diamond substrate? 2) how do defects already present in the substrate crystal propagate into the overgrown layer? 3) what types of new defects are created during overgrowth, what are their growth mechanisms, and how can these defects be avoided? 4) how can we relate, in a quantitative manner, parameters describing the measured crystalline quality of the boron-doped layer to the electronic properties of final processed devices? We describe synchrotron-based techniques developed to address these questions. These techniques allow the visualization of local defects and crystal distortion, complementing the data obtained by other well-established analysis methods such as AFM, SIMS, and Hall conductivity measurements. 
We have used grazing incidence X-ray diffraction (GIXRD) at the ID01 beamline of the ESRF to study lattice parameters and damage (strain, tilt and mosaic spread) both in diamond substrate near-surface layers and in thick (10–50 µm) overgrown boron-doped diamond epilayers. Micro- and nano-section topography has been carried out at both the BM05 and ID06 beamlines of the ESRF using rocking curve imaging techniques to study defects that have propagated from the substrate into the overgrown layer(s) and their influence on final electronic device performance. These studies were performed using various commercially sourced HPHT grown diamond substrates, with the MWCVD overgrowth carried out at the Fraunhofer IAF, Germany. The synchrotron results are in good agreement with low-temperature (5 K) cathodoluminescence spectroscopy carried out on the grown samples using an Inspect F50 FESEM fitted with an IHR spectrometer.
Keywords: synchrotron X-ray diffraction, crystalline quality, defects, diamond overgrowth, rocking curve imaging
Procedia PDF Downloads 261
7 High School Gain Analytics From National Assessment Program – Literacy and Numeracy and Australian Tertiary Admission Rank Linkage
Authors: Andrew Laming, John Hattie, Mark Wilson
Abstract:
Nine Queensland independent high schools provided deidentified student-matched ATAR and NAPLAN data for all 1217 ATAR graduates since 2020 who also sat NAPLAN at the school. Graduating cohorts from the nine schools contained a mean of 100 ATAR graduates with previous NAPLAN data from their school. Excluded were vocational students (mean = 27) and any ATAR graduates without NAPLAN data (mean = 20). Based on the Index of Community Socio-Educational Advantage (ICSEA) prediction, all schools had larger than predicted proportions of their students graduating with ATARs. An additional 173 students (14%) did not release their ATARs to their school, requiring this data to be inferred by schools. Gain was established by first converting each student's strongest NAPLAN domain to a statewide percentile, then subtracting this result from the final ATAR. The resulting 'percentile shift' was corrected for plausible ATAR participation at each NAPLAN level. The strongest NAPLAN domain had the highest correlation with ATAR (R² = 0.58). Results: School mean NAPLAN scores fitted ICSEA closely (R² = 0.97). Schools achieved a mean cohort gain of two ATAR rankings, but only 66% of students gained. This ranged from 46% of top-NAPLAN-decile students gaining, rising to 75% achieving gains outside the top decile. The 54% of top-decile students whose ATAR fell short of prediction lost a mean of 4.0 percentiles (or 6.2 percentiles prior to correction for regression to the mean). 71% of students in smaller schools gained, compared to 63% in larger schools. NAPLAN variability in each of the 13 ICSEA-1100 cohorts was 17%, with both intra-school and inter-school variation of these values extremely low (0.3% to 1.8%). The mean ATAR change between years in each school was just 1.1 ATAR ranks. This suggests consecutive school cohorts and ICSEA-similar schools share very similar distributions and outcomes over time. 
Quantile analysis of the NAPLAN/ATAR relationship revealed heteroscedasticity, but splines offered little additional benefit over simple linear regression. The NAPLAN/ATAR R² was 0.33. Discussion: Standardised data like NAPLAN and ATAR offer educators a simple no-cost progression metric to analyse performance in conjunction with their internal test results. Change is expressed in percentiles, or ATAR shift per student, which is intuitive to laypeople. Findings may also reduce ATAR/vocational stream mismatch, reveal the proportions of cohorts meeting or falling short of expectation, and demonstrate by how much. Finally, 'crashed' ATARs well below expectation are revealed, which schools can reasonably work to minimise. The percentile shift method is neither a value-added measure nor a growth percentile. In the absence of exit NAPLAN testing, this metric is unable to discriminate academic gain from legitimate ATAR-maximising strategies. But by controlling for ICSEA, ATAR proportion variation and student mobility, it uncovers progression-to-ATAR metrics which are not currently publicly available. However achieved, ATAR maximisation is a sought-after private good. So long as standardised nationwide data are available, this analysis offers useful analytics for educators and reasonable predictivity when counselling subsequent cohorts about their ATAR prospects.
Keywords: NAPLAN, ATAR, analytics, measurement, gain, performance, data, percentile, value-added, high school, numeracy, reading comprehension, variability, regression to the mean
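The core of the percentile shift metric described above is simple enough to sketch. The function below (Python/NumPy, illustrative only) converts a student's strongest NAPLAN domain score to a statewide percentile and subtracts it from the final ATAR; the study's additional correction for plausible ATAR participation at each NAPLAN level is deliberately omitted.

```python
import numpy as np

def percentile_shift(student_score, state_scores, atar):
    """Gain as ATAR minus the statewide percentile of the student's
    strongest NAPLAN domain.

    student_score: the student's best NAPLAN domain score
    state_scores:  statewide scores for that domain (1-D array)
    atar:          the student's final ATAR (itself a percentile-like rank)
    The participation correction used in the study is not applied here."""
    pct = 100.0 * np.mean(np.asarray(state_scores) <= student_score)
    return atar - pct
```

A positive shift means the student's ATAR exceeded their NAPLAN-based statewide standing; aggregating these shifts over a cohort gives the school-level gain figures quoted in the abstract.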
Procedia PDF Downloads 68
6 Degradation of Diclofenac in Water Using FeO-Based Catalytic Ozonation in a Modified Flotation Cell
Authors: Miguel A. Figueroa, José A. Lara-Ramos, Miguel A. Mueses
Abstract:
Pharmaceutical residues are a class of emerging contaminants of anthropogenic origin that are present in many of the waters with which human beings interact daily and are starting to affect the ecosystem directly. Conventional wastewater treatment systems are not capable of degrading these pharmaceutical effluents because their designs cannot handle the intermediate products and biological effects occurring during treatment. That is why it is necessary to hybridize conventional wastewater systems with non-conventional processes. In the specific case of an ozonation process, its efficiency highly depends on a perfect dispersion of ozone, long interaction times between the gas and liquid phases, and the size of the ozone bubbles formed throughout the reaction system. In order to improve these parameters, the use of a modified flotation cell has recently been proposed as a reactive system; flotation cells are used at an industrial level to facilitate the suspension of particles and spread gas bubbles through the reactor volume at a high rate. The objective of the present work is the development of a mathematical model that can closely predict the kinetic rates of the reactions taking place in the flotation cell at an experimental scale, by identifying proper reaction mechanisms that take into account the modified chemical and hydrodynamic factors in the FeO-catalyzed ozonation of diclofenac aqueous solutions in a flotation cell. The methodology comprises three steps: first, an experimental phase, in which a modified flotation cell reactor is used to analyze the effects of ozone concentration and catalyst loading on the degradation of diclofenac aqueous solutions. Performance is evaluated through an index of utilized ozone, which relates the amount of ozone supplied to the system per milligram of degraded pollutant. 
Next, a theoretical phase, in which the reaction mechanisms taking place during the experiments are identified and proposed, detailing the multiple direct and indirect reactions the system goes through. Finally, a kinetic model is obtained that can mathematically represent the reaction mechanisms, with adjustable parameters that can be fitted to the experimental results to give the model a proper physical meaning. The expected result is a robust reaction rate law that can simulate the improved mineralization of diclofenac in water using the modified flotation cell reactor. By means of this methodology, the following results were obtained: a robust reaction pathway mechanism showcasing the intermediates, free radicals and products of the reaction; optimal values of the reaction rate constants, which yielded simulated Hatta numbers lower than 3 for the modeled system; degradation percentages of 100%; and a TOC (total organic carbon) removal percentage of 69.9%, requiring an optimal FeO catalyst loading of only 0.3 g/L. These results showed that a flotation cell can be used as a reactor in ozonation, catalytic ozonation and photocatalytic ozonation processes, since it produces high reaction rate constants and reduces mass transfer limitations (avoiding Ha > 3) by producing microbubbles and maintaining a good catalyst distribution.
Keywords: advanced oxidation technologies, iron oxide, emergent contaminants, AOTs intensification
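The parameter-fitting step in the kinetic modeling above can be illustrated with a much-reduced stand-in: estimating an apparent first-order rate constant from a pollutant concentration decay. This sketch (Python/NumPy) is not the study's mechanism-based rate law, just the simplest fit of ln(C/C0) = -k·t by least squares.

```python
import numpy as np

def fit_first_order_k(t, c):
    """Apparent first-order rate constant from a concentration decay.

    t: sampling times; c: measured concentrations, with c[0] the initial
    concentration. Fits ln(C/C0) = -k * t by linear least squares and
    returns k. A placeholder for the full mechanism-based model, which
    would include direct ozone and hydroxyl-radical pathways."""
    y = np.log(np.asarray(c, dtype=float) / c[0])
    k = -np.polyfit(np.asarray(t, dtype=float), y, 1)[0]
    return k
```

In practice one would fit each operating condition (ozone dose, catalyst loading) separately and then relate the fitted constants to the proposed mechanism.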
Procedia PDF Downloads 112
5 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium Leprae by qPCR
Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi
Abstract:
Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Although the disease is treatable with multidrug therapy (MDT), resistance of the bacteria to multiple antibiotics has recently been reported. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes from a human skin biopsy and mouse footpads that are involved in base excision repair, direct reversal repair, and the SOS response. Initially, a bioinformatics approach based on sequence similarity and identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of bacterial reference DNA of M. leprae (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC hospital, Vellore, India were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the integrity of the amplification and to rule out primer-dimer formation. 
The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy number. The same procedure was applied to M. leprae extracted from footpads of nude mice infected with drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. For the 16 genes involved in BER, DR, and SOS, a differential expression pattern was observed in terms of Ct values when the mouse samples were compared to the human samples; this was attributed to the different host and its immune response. However, no drastic variation in gene expression levels was observed among the human samples except for the nth gene. The higher expression of the nth gene could be due to mutations that may be associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism that remains to be explored. In both human and mouse samples, the SOS system genes lexA and recA, and the BER genes alkB and ogt, were expressed efficiently to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment, and prognostic judgment in M. leprae pathogenesis.
Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR
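The standard-curve step used above to convert Ct values into transcript copy numbers follows a standard qPCR calculation, sketched below (Python/NumPy, with invented example numbers, not the study's data): Ct is regressed linearly against log10 of the known standard copy numbers, and unknowns are back-calculated from the fitted line. A slope near -3.32 corresponds to roughly 100% amplification efficiency.

```python
import numpy as np

def standard_curve(copies, ct):
    """Fit Ct = slope * log10(copies) + intercept over known standards.

    copies: known template copy numbers of the serial dilutions
    ct:     measured Ct values for those standards
    Returns (slope, intercept); slope close to -3.32 indicates ~100%
    amplification efficiency (a tenfold dilution costs ~3.32 cycles)."""
    slope, intercept = np.polyfit(np.log10(copies), ct, 1)
    return slope, intercept

def ct_to_copies(ct, slope, intercept):
    """Back-calculate template copy number from an observed Ct value."""
    return 10 ** ((ct - intercept) / slope)
```

The same fitted curve also gives the lower detection limit: the highest Ct on the linear portion of the dilution series that still amplifies reproducibly.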
Procedia PDF Downloads 103
4 Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the deadly attacks of January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite empirical research carried out at the country level that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess theories of diffusion at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models that are accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocalised data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation (INLA), a recent fitting approach that computes fast and accurate estimates of posterior marginals. 
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from areas with a high concentration of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
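As a drastically simplified stand-in for the binomial spatio-temporal model above, the sketch below (Python/NumPy, synthetic data) fits a plain logistic regression of attack occurrence on cell-level covariates by iteratively reweighted least squares. The spherical Delaunay mesh, spatial random effects and Bayesian INLA machinery of the study are deliberately omitted; only the binomial likelihood core is illustrated.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25):
    """Logistic regression by iteratively reweighted least squares (IRLS).

    X: (n, p) covariate matrix for grid cells (include a column of ones
       for the intercept); y: 0/1 indicator of a lethal attack per cell.
    Each iteration takes a Newton step using the Fisher information."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted attack probabilities
        W = p * (1 - p)                        # IRLS weights
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve((X * W[:, None]).T @ X, X.T @ (y - p))
    return beta
```

Fitted coefficients indicate whether a covariate (e.g. population density) raises or lowers the per-cell attack probability; a full spatio-temporal treatment would add structured random effects over the mesh.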
Procedia PDF Downloads 350
3 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP
Authors: Yannick Willemin
Abstract:
Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf life and outlife limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are made of metal, due to the high cost of CFRP fabrication in the small-size class. The fact that the CFRP manufacturing processes that produce the highest performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally higher in cost than comparably performing metal parts, which are easier to produce. Fortunately, industry is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance. 
A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes the preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices, currently including PEKK, PEEK, PA12, and PPS, are matched between filaments and tapes to assure excellent bonding, although nearly any high-quality commercial thermoplastic tapes and filaments can be used. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low voids and excellent surface finish on both A and B sides can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be cost-competitively held in production volumes of 100 to 10,000 parts/year on a single set of machines.
Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing
Application of Large Eddy Simulation-Immersed Boundary Volume Penalization Method for Heat and Mass Transfer in Granular Layers
Authors: Artur Tyliszczak, Ewa Szymanek, Maciej Marek
Abstract:
Flow through granular materials is important to a vast array of industries: in construction, granular layers are used for bulkheads and isolators; in chemical engineering and catalytic reactors, the large surfaces of packed granular beds intensify chemical reactions; and in energy production systems, granulates are promising materials for heat storage and heat transfer media. Despite the common usage of granulates and the extensive research performed in this field, the phenomena occurring between granular solid elements, or between solids and fluid, are still not fully understood. In the present work we analyze the heat exchange process between the flowing medium (gas, liquid) and the solid material inside granular layers. We treat a layer as a composite of isolated solid elements and inter-granular spaces through which a gas or liquid can flow. The structure of the layer is controlled by the shapes of the granular elements (e.g., spheres, cylinders, cubes, Raschig rings), their spatial distribution, and their effective characteristic dimension (total volume or surface area). We analyze to what extent altering these parameters influences the flow characteristics (turbulence intensity, mixing efficiency, heat transfer) inside the layer and behind it. Analysis of flow inside granular layers is very complicated because the use of classical experimental techniques (LDA, PIV, fibre probes) inside the layers is practically impossible, whereas the use of probes (e.g., thermocouples, Pitot tubes) requires drilling holes in the solid material. Hence, measurements of the flow inside granular layers are usually performed using, for instance, advanced X-ray tomography. In this respect, theoretical and numerical analyses of flow inside granulates are crucial.
Application of discrete element methods in combination with classical finite volume/finite difference approaches is problematic, as mesh generation for complex granular material can be very arduous. A good alternative for simulating flow in complex domains is the immersed boundary-volume penalization (IB-VP) method, in which the computational meshes have a simple Cartesian structure and the impact of solid objects on the fluid is mimicked by source terms added to the Navier-Stokes and energy equations. The present paper focuses on the application of the IB-VP method combined with large eddy simulation (LES). The flow solver used in this work is a high-order code (SAILOR), which was used previously in various studies, including laminar/turbulent transition in free flows, as well as flows in wavy channels, wavy pipes and over obstacles of various shapes. In these cases the formal order of approximation turned out to be between 1 and 2, depending on the test case. The current research concentrates on analyses of flows in dense granular layers with elements distributed in a deterministic, regular manner, and on validation of the results obtained with the LES-IB method against a body-fitted approach. The comparisons are very promising and show very good agreement. It is found that the size and number of elements and their distribution have a huge impact on the obtained results. Ordering of the granular elements (or lack of it) affects both the pressure drop and the efficiency of the heat transfer, as it significantly changes the mixing process.
Keywords: granular layers, heat transfer, immersed boundary method, numerical simulations
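The volume-penalization idea described in this abstract can be illustrated with a minimal one-dimensional sketch (this is not the SAILOR solver; grid size, penalization parameter and the semi-implicit treatment are all assumptions for illustration): the heat equation is solved on a plain Cartesian grid, and a solid granular element is mimicked by a Brinkman-type source term -(chi/eta)(T - T_solid) that forces the temperature inside the masked region toward the solid value.

```python
import numpy as np

# Illustrative 1D immersed boundary-volume penalization (IB-VP) sketch:
#   dT/dt = alpha * d2T/dx2 - (chi/eta) * (T - T_solid)
# chi is the solid mask (1 inside the obstacle), eta a small penalization
# parameter; no body-fitted mesh is needed for the solid region.
nx, L = 200, 1.0
dx = L / (nx - 1)
x = np.linspace(0.0, L, nx)
alpha, eta = 1e-3, 1e-6                       # diffusivity, penalization
chi = ((x > 0.4) & (x < 0.6)).astype(float)   # solid occupies 0.4 < x < 0.6
T_solid = 1.0                                 # hot granular element

T = np.zeros(nx)                              # fluid initially cold
dt = 0.4 * dx**2 / alpha                      # explicit diffusion limit
for _ in range(5000):
    lap = np.zeros(nx)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    # the stiff penalty term is treated implicitly so it does not limit dt
    T = (T + dt * alpha * lap + (dt / eta) * chi * T_solid) / (1.0 + (dt / eta) * chi)
    T[0] = T[-1] = 0.0                        # cold far-field boundaries

# inside the mask the penalty clamps T to T_solid; outside, heat diffuses out
print(abs(T[chi == 1].mean() - T_solid) < 1e-3)
```

The same source-term construction carries over to the momentum equations, where the penalty drives the velocity toward the (zero) solid velocity instead of a temperature.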
A Multi-Scale Approach to Space Use: Habitat Disturbance Alters Behavior, Movement and Energy Budgets in Sloths (Bradypus variegatus)
Authors: Heather E. Ewart, Keith Jensen, Rebecca N. Cliffe
Abstract:
Fragmentation and changes in the structural composition of tropical forests, as a result of intensifying anthropogenic disturbance, are increasing pressures on local biodiversity. Species with low dispersal abilities have some of the highest extinction risks in response to environmental change, as even small-scale environmental variation can substantially impact their space use and energetic balance. Understanding the implications of forest disturbance is therefore essential, ultimately allowing for more effective and targeted conservation initiatives. Here, the impact of different levels of forest disturbance on the space use, energetics, movement and behavior of 18 brown-throated sloths (Bradypus variegatus) was assessed in the South Caribbean of Costa Rica. A multi-scale framework was used to measure forest disturbance, including large-scale (landscape-level classifications) and fine-scale (within and surrounding individual home ranges) forest composition. Three landscape-level classifications were identified: primary forests (undisturbed), secondary forests (some disturbance, regenerating) and urban forests (high levels of disturbance and fragmentation). Finer-scale forest composition was determined using measurements of habitat structure and quality within and surrounding the individual home range of each sloth (home range estimates were calculated using autocorrelated kernel density estimation [AKDE]). Measurements of forest quality included tree connectivity, density, diameter and height, species richness, and percentage of canopy cover. To determine space use, energetics, movement and behavior, six sloths in urban forests, seven sloths in secondary forests and five sloths in primary forests were tracked using a combination of Very High Frequency (VHF) radio transmitters and Global Positioning System (GPS) technology over an average period of 120 days.
All sloths were also fitted with micro data-loggers (containing tri-axial accelerometers and pressure loggers) for an average of 30 days to allow for behavior-specific movement analyses (data analysis is ongoing for the data-loggers and the primary forest sloths). Data-logger analyses include determination of activity budgets, circadian rhythms of activity and energy expenditure (using the vector of the dynamic body acceleration [VeDBA] as a proxy). Analyses to date indicate that home range size increased significantly with the level of forest disturbance. Female sloths inhabiting secondary forests averaged 0.67-hectare home ranges, while female sloths inhabiting urban forests averaged 1.93-hectare home ranges (estimates are represented by median values to account for the individual variation in home range size in sloths). Likewise, home range estimates for male sloths were 2.35 hectares in secondary forests and 4.83 hectares in urban forests. Sloths in urban forests also used nearly double the number of trees (median = 22.5) compared with sloths in secondary forests (median = 12). These preliminary data indicate that forest disturbance likely heightens the energetic requirements of sloths, a species already critically limited by low dispersal ability and low rates of energy acquisition. Energetic and behavioral analyses from the data-loggers will be considered in the context of the fine-scale forest composition measurements (i.e., habitat quality and structure) and are expected to reflect the observed home range and movement constraints. The implications of these results are far-reaching, presenting an opportunity to define a critical index of habitat connectivity for low-dispersal species such as sloths.
Keywords: biodiversity conservation, forest disturbance, movement ecology, sloths
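The VeDBA proxy mentioned in this abstract has a standard construction that can be sketched briefly (this is not the authors' pipeline; the `vedba` function name, the running-mean window and the synthetic signal are assumptions for illustration): the static, gravitational component of each accelerometer axis is estimated with a running mean and subtracted, and VeDBA is the Euclidean norm of the three dynamic residuals.

```python
import numpy as np

def vedba(ax, ay, az, window=25):
    """VeDBA per sample from tri-axial acceleration.
    The static (gravity) component per axis is estimated with a centered
    running mean of `window` samples (an assumed smoothing choice,
    typically ~1-3 s of data) and subtracted before taking the norm:
        VeDBA = sqrt(DBAx**2 + DBAy**2 + DBAz**2)
    """
    kernel = np.ones(window) / window
    sq = np.zeros(len(ax))
    for a in (ax, ay, az):
        static = np.convolve(a, kernel, mode="same")  # gravity estimate
        sq += (a - static) ** 2                       # dynamic component
    return np.sqrt(sq)

# synthetic check: ~1 g static on z, a small locomotion-like signal on x
rng = np.random.default_rng(0)
n = 1000
ax = 0.2 * np.sin(np.linspace(0.0, 40.0 * np.pi, n))  # movement bursts
ay = np.zeros(n)
az = 1.0 + 0.01 * rng.standard_normal(n)              # gravity + sensor noise
v = vedba(ax, ay, az)
# gravity is removed, so VeDBA reflects only the dynamic movement signal
```

Summing VeDBA over time then gives a relative measure of movement-related energy expenditure, which is how an acceleration proxy of this kind is typically interpreted.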