Search results for: fractional–order proportional integral (FOPI) controller
14545 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons
Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung
Abstract:
A tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy or the radiation-weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressures. A gas pressure of 422 torr was applied to simulate a 1 μm diameter biological site. The calibration of the mini TEPC was performed using two marker points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10–100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks of reactor photons. For the cases of Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B. The low lineal energy peaks were smaller compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies.
Keywords: TEPC, lineal energy, microdosimetry, radiation quality
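For orientation, a minimal sketch of the underlying microdosimetric arithmetic, assuming Cauchy's mean-chord-length formula (MCL = 4V/S for a convex cavity) and a hypothetical single-event energy deposition; the values are illustrative, not measurements from this work:

```python
import math

# Cauchy's formula for a convex cavity: mean chord length (MCL) = 4V / S.
# The simulated site here is a 1 um diameter, 1 um high cylinder (tissue scale).
d = h = 1.0                           # micrometres, at the simulated tissue scale
r = d / 2.0
volume = math.pi * r**2 * h
surface = 2.0 * math.pi * r**2 + 2.0 * math.pi * r * h
mcl = 4.0 * volume / surface          # = 2rh/(r+h) = 0.667 um for this geometry

# Lineal energy: y = (single-event energy deposition) / MCL.
energy_keV = 50.0                     # hypothetical neutron-capture event
y = energy_keV / mcl
print(f"MCL = {mcl:.3f} um, y = {y:.1f} keV/um")  # ~75 keV/um, a high-LET event
```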
Procedia PDF Downloads 470
14544 Resilient Environments vs. Resilient Architects: Creativity, Practice and Education
Authors: Y. Perera, M. Pathiraja
Abstract:
Within the paradigm of 'Resilient Built-Environments,' in order for architecture to be resilient, 'Resilience' should be identified as an essential component of the architect's notion of creativity. In much simpler terms, a 'Resilient Built-Environment' should necessarily be a by-product of the 'Resilient Architect.' The inherent influence of individualistic notions of creativity upon practice has intensified the dichotomy between theory and practice, and it will persist unless the notion of 'Resilience' is identified as an integral component of the architect's notion of creativity. Analysing the architectural position is an ideal way of understanding the architect's notion of creativity; therefore, in exploring the notion of 'Resilience' and the 'Resilient Architect' within the Sri Lankan platform, the architectural positions of two renowned architects, Geoffrey Bawa and Valentine Gunasekara, were explored and analysed. The architectural positions of both architects asserted specific rules and methodologies adopted within the process of problem solving, which subsequently led to a traceable language/pattern within their architecture. The dominance of such rules within practice could be detrimental to the adoption of theories/notions such as 'Resilience' and to the formation of the 'Resilient Architect', unless the methodologies themselves are flexible and robust despite their rigidity, or unless the notion of 'Resilience' exists in the form of a methodological rule.
Keywords: architectural position, creativity, education, practice, resilience, theory
Procedia PDF Downloads 317
14543 Influence of Non-Formal Physical Education Curriculum, Based on Olympic Pedagogy, for 11-13 Years Old Children Physical Development
Authors: Asta Sarkauskiene
Abstract:
The pedagogy of Olympic education is based upon the main idea of P. de Coubertin that physical education can and has to support the education of the perfect person, the one who was an aspiration in archaic Greece, which looked upon the human being as one whole composed of three interconnected functions: physical, psychical and spiritual. The following research question was formulated in the present study: What curriculum of non-formal physical education in school can positively influence the physical development of 11-13 year old children? The aim of this study was to formulate and implement a curriculum of non-formal physical education, based on Olympic pedagogy, and assess its effectiveness for the physical development of 11-13 year old children. The research was conducted in two stages. In the first stage, 51 fifth grade children (mean age = 11.3 years) participated in a quasi-experiment for two years. Children were organized into 2 groups: E and C. Both groups shared the same duration (1 hour) and frequency (twice a week) but differed in their education curriculum. The experimental group (E) worked under the program developed by us. Priorities of the E group were: training of physical powers in unity with psychical and spiritual powers; integral growth of physical development, physical activity, physical health, and physical fitness; integration of children with lower health and physical fitness levels; content that corresponds to children's needs, abilities, and physical and functional powers. The control group (C) worked according to NFPE programs prepared by teachers and approved by the school principal and school methodical group. Priorities of the C group were: teaching and development of motion actions; training of physical qualities; training of the most physically capable children. In the second stage (after four years), 72 sixth graders (mean age = 13.0 years) from the same comprehensive schools participated in the research. Children were organized into first and second groups. The curriculum of the first group was modified, and that of the second was the same as group C. In the study groups, anthropometric (height, weight, BMI) and physiometric (vital capacity (VC), right and left handgrip strength) measurements were conducted. The dependent t-test indicated that over two years, the height, weight, and right and left handgrip strength indices of E and C group girls and boys increased significantly, p < 0.05. The BMI indices of E group girls and boys did not change significantly, p > 0.05, i.e. the height and weight ratio of girls who participated in NFPE in school became more proportional. C group girls' VC indices did not differ significantly, p > 0.05. The independent t-test indicated that in the first and second research stages, the differences in anthropometric and physiometric measurements between the groups were not significant, p > 0.05. The formulated and implemented curriculum of non-formal education in school, based on Olympic pedagogy, had the biggest positive influence on decreasing the BMI and increasing the VC of 11-13 year old children.
Keywords: non-formal physical education, Olympic pedagogy, physical development, health sciences
Procedia PDF Downloads 564
14542 A Continuous Boundary Value Method of Order 8 for Solving the General Second Order Multipoint Boundary Value Problems
Authors: T. A. Biala
Abstract:
This paper deals with the numerical integration of the general second order multipoint boundary value problems. This has been achieved by the development of a continuous linear multistep method (LMM). The continuous LMM is used to construct a main discrete method to be used with some initial and final methods (also obtained from the continuous LMM) so that they form a discrete analogue of the continuous second order boundary value problems. These methods are used as boundary value methods and adapted to cope with the integration of the general second order multipoint boundary value problems. The convergence, the use and the region of absolute stability of the methods are discussed. Several numerical examples are implemented to elucidate our solution process.
Keywords: linear multistep methods, boundary value methods, second order multipoint boundary value problems, convergence
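To make the idea of a "discrete analogue" of a continuous BVP concrete, here is a minimal sketch (a simple second-order central-difference scheme, not the order-8 boundary value method of the paper):

```python
import numpy as np

# Discretize u'' = f(x) with Dirichlet data u(a)=alpha, u(b)=beta using
# central differences, producing the tridiagonal discrete analogue of the BVP.
def solve_bvp(f, a, b, alpha, beta, n=100):
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)          # grid including boundary points
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0)
    np.fill_diagonal(A[1:], 1.0)          # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)       # super-diagonal
    rhs = h**2 * f(x[1:-1])
    rhs[0] -= alpha                        # fold boundary values into the RHS
    rhs[-1] -= beta
    u = np.zeros(n + 2)
    u[0], u[-1] = alpha, beta
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# u'' = -pi^2 sin(pi x), u(0)=u(1)=0 has exact solution u = sin(pi x)
x, u = solve_bvp(lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 1.0, 0.0, 0.0)
print(abs(u - np.sin(np.pi * x)).max())   # discretization error, O(h^2)
```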
Procedia PDF Downloads 377
14541 Speech Anxiety in Higher Education Students-Retention of an Ancestral Trait: A Study into the Students' Perspective of Communication Anxiety with Suggestions on How to Minimise Student Distress
Authors: Paul D. Facey, Claire Morgan
Abstract:
Speech anxiety is thought to be deep-seated within the human evolutionary lineage. As a result, almost all people display high levels of anxiety when asked to communicate in front of an audience. However, proficiency in oral communication is considered an essential skill for a graduate career, and significant emphasis is placed on developing these skills in many degree programs. Because of this, many degree schemes incorporate some form of assessed dialogic presentation. Yet a student's anxiety over public speaking, especially if severe, can be so great that at worst it can cause the student to withdraw from their study. This study investigated how students perceive their own levels of anxiety when faced with public speaking, using the Personal Report of Public Speaking Anxiety (PRPSA) questionnaire developed by McCroskey. Additionally, students were asked to provide examples of adjustments that could be implemented that they felt would alleviate some or all of their anxiety. The results of the study indicated that the majority of the students experienced a moderate level of anxiety. However, further analysis showed that of those in the 'moderate anxiety' group, 43% fell into the higher range, suggesting that overall more students experience high levels of anxiety when faced with public speaking than might first be envisaged. Thus, it is essential that steps are taken to address student anxiety so that students engage with presentations, remain motivated and encouraged, and do not avoid such assignments. The feedback from our students indicated a need to implement systematic desensitization programs in which students learn to overcome their anxiety through a series of sessions that gradually increase their exposure. Furthermore, these sessions should run in parallel with skills sessions in order for students to be better prepared and to allow self-reflection and self-analysis. This study highlights the paucity of such sessions on many degree schemes and suggests that they should form an integral part of a student's early academic learning.
Keywords: student anxiety, communication anxiety, public speaking, higher education, desensitisation
Procedia PDF Downloads 253
14540 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient [μ(E)] of any substance, for energy (E), is a sum of the contributions from Compton scattering [μCom(E)] and the photoelectric effect [μPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), μCom(E) is proportional to ρe·fKN(E), while μPh(E) is proportional to (ρe·Zeff^x)/E^y, with fKN(E) being the Klein-Nishina formula and x and y the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since μCom(E) and μPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <μ(V)> = <μw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of μ(E) with respect to X and Y implies that (a) <μ(V)> is a linear combination of X and Y, and (b) for inversion, X and Y can be written as linear combinations of two independent observations <μ(V1)> and <μ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100, 140 kVp, as are used for cardiological investigations. The S(E,V) are generated using the Boone-Seibert source spectrum superposed on aluminium filters of different thickness lAl with 7 mm ≤ lAl ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X being within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible; the detector type does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
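As a sketch of the inversion step described above, the following assumes hypothetical inversion coefficients and water attenuation values purely for illustration; the real coefficients are the tabulated quantities that depend on S(E,V) and D(E):

```python
import numpy as np

# Assumed water attenuation values <mu_w(V)> in 1/cm (illustrative, not tabulated data)
mu_w = {100: 0.206, 140: 0.193}

def mean_mu(hu, kvp):
    """Convert a measured HU value back to <mu(V)>."""
    return mu_w[kvp] * (1.0 + hu / 1000.0)

# <mu(V1)> and <mu(V2)> are linear in X = rho_e and Y = rho_e * Zeff^x,
# so the forward model is a 2x2 linear system M @ [X, Y] = [mu1, mu2].
M = np.array([[0.95, 0.05],           # hypothetical coefficients at 100 kVp
              [0.97, 0.03]])          # hypothetical coefficients at 140 kVp

hu_100, hu_140 = 50.0, 35.0           # example HU readings of one voxel
mu_vec = np.array([mean_mu(hu_100, 100), mean_mu(hu_140, 140)])
X, Y = np.linalg.solve(M, mu_vec)     # the DECT inversion step
print(f"rho_e ~ {X:.4f}, rho_e*Zeff^x ~ {Y:.4f}")
```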
Procedia PDF Downloads 402
14539 Downside Risk Analysis of the Nigerian Stock Market: A Value at Risk Approach
Authors: Godwin Chigozie Okpara
Abstract:
This paper estimates Value at Risk (VaR) for the Nigerian stock market using standard GARCH, EGARCH, and TARCH model variants fitted to a day-of-the-week return series of 246 days. The asymmetric return distribution and the fat-tail phenomenon in financial time series were accounted for by estimating the models with normal, Student's t and generalized error distributions. The analysis based on the Akaike Information Criterion suggests that the EGARCH model with Student's t innovation distribution furnishes the more accurate estimate of VaR. In the light of this, we apply likelihood ratio tests of proportional failure rates to the VaR derived from the EGARCH model in order to determine the short- and long-position VaR performances. The result shows that as alpha ranges from 0.05 to 0.005 for short positions, the failure rate significantly exceeds the prescribed quantiles, while there is no significant difference between the failure rate and the prescribed quantiles for long positions. This suggests that investors and portfolio managers in the Nigerian stock market can take long trading positions, i.e. buy assets, with attention to when asset prices will fall. Precisely, the VaR estimates for the long position range from -4.7% at the 95 percent confidence level to -10.3% at the 99.5 percent confidence level.
Keywords: downside risk, value-at-risk, failure rate, Kupiec LR tests, GARCH models
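A minimal sketch of the failure-rate backtest named in the keywords (Kupiec's proportion-of-failures LR test); the violation counts below are hypothetical, not the study's results:

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(n_obs, n_fail, alpha):
    """LR test that the observed VaR failure rate matches the nominal level alpha."""
    p_hat = n_fail / n_obs
    # log-likelihood under H0 (failure prob = alpha) vs under the observed rate
    ll0 = (n_obs - n_fail) * np.log(1 - alpha) + n_fail * np.log(alpha)
    ll1 = (n_obs - n_fail) * np.log(1 - p_hat) + n_fail * np.log(p_hat)
    lr = -2.0 * (ll0 - ll1)
    return lr, chi2.sf(lr, df=1)          # statistic and p-value (1 dof)

# e.g. 246 trading days with 18 VaR violations at the 5% level
lr, p = kupiec_pof(246, 18, 0.05)
print(f"LR = {lr:.2f}, p = {p:.3f}")      # small p => reject correct coverage
```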
Procedia PDF Downloads 444
14538 Life Time Improvement of Clamp Structural by Using Fatigue Analysis
Authors: Pisut Boonkaew, Jatuporn Thongsri
Abstract:
In the hard disk drive manufacturing industry, reducing unnecessary parts and qualifying part quality before assembly are important. Thus, a clamp was designed and fabricated as a holding fixture for the testing process. Testing by trial and error consumes a long time, so simulation was introduced to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp's life expectancy across all candidate designs, 27 in total, excluding repeated designs. The design combinations were generated following the factorial design rules of the Six Sigma methodology. Six Sigma is a well-structured method for improving the quality level by detecting and reducing the variability of the process; the defect rate thereby decreases while process capability increases. This research focuses on the methodology of stress and fatigue reduction while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS simulates the 3D CAD model under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence methodology, and the percentage error with respect to the experimental result was required not to exceed the acceptable range. The improved design therefore focuses on the angle, radius, and length that reduce stress while the force remains within the acceptable range. Fatigue analysis was then carried out as the next step, through the ANSYS simulation program, in order to guarantee that the lifetime would be extended. The simulated setting was also confirmed by comparison with the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp in manufacturing. This study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the Six Sigma method, finite element analysis, fatigue analysis and linear regression analysis, which lead to accurate calculation, this project is expected to save up to 60 million dollars annually.
Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability
Procedia PDF Downloads 235
14537 Lipase-Catalyzed Synthesis of Novel Nutraceutical Structured Lipids in Non-Conventional Media
Authors: Selim Kermasha
Abstract:
A process for the synthesis of structured lipids (SLs) by the lipase-catalyzed interesterification of selected endogenous edible oils, such as flaxseed oil (FO), with medium-chain triacylglycerols, such as tricaprylin (TC), in non-conventional media (NCM), including organic solvent media (OSM) and solvent-free medium (SFM), was developed. The bioconversion yield of the medium-long-medium-type SLs (MLM-SLs) was monitored as the response with the use of selected commercial lipases. In order to optimize the interesterification reaction and to establish a model system, a wide range of reaction parameters, including the TC to FO molar ratio, reaction temperature, enzyme concentration, reaction time, agitation speed and initial water activity, was investigated. The model system was monitored with the use of multiple response surface methodology (RSM) to obtain significant models for the responses and to optimize the interesterification reaction, on the basis of a fractional factorial design (FFD) with centre points at selected levels of the variables. Based on the objective of each response, the appropriate level combination of the process parameters and the solutions that met the defined criteria were also provided by means of a desirability function. The synthesized novel molecules were structurally characterized using silver-ion reversed-phase high-performance liquid chromatography (RP-HPLC) and atmospheric pressure chemical ionization mass spectrometry (APCI-MS) analyses. The overall experimental findings confirmed the formation of dicaprylyl-linolenyl glycerol, dicaprylyl-oleyl glycerol and dicaprylyl-linoleyl glycerol, resulting from the lipase-catalyzed interesterification of FO and TC.
Keywords: enzymatic interesterification, non-conventional media, nutraceuticals, structured lipids
Procedia PDF Downloads 297
14536 Visual Servoing for Quadrotor UAV Target Tracking: Effects of Target Information Sharing
Authors: Jason R. King, Hugh H. T. Liu
Abstract:
This research presents simulation and experimental work on the visual servoing of a quadrotor Unmanned Aerial Vehicle (UAV) to stabilize on top of a moving target. Most previous work in the field assumes static or slow-moving, unpredictable targets. In this experiment, the target is assumed to be a friendly ground robot moving freely on a horizontal plane, which shares information with the UAV. This information includes velocity and acceleration information of the ground target to aid the quadrotor in its tracking task. The quadrotor is assumed to have a downward-facing camera fixed to the frame of the quadrotor. Only onboard sensing is utilized for the experiment, with a VICON motion capture system in place used only to measure ground truth and evaluate the performance of the controller. The experimental platform consists of an ArDrone 2.0 and a Create Roomba, communicating using the Robot Operating System (ROS). The benefit of adding the target's information is demonstrated using simulations of the dynamic model of a quadrotor in Matlab Simulink. A nested PID control loop is utilized for inner-loop control of the quadrotor, similar to previous works at the Flight Systems and Controls Laboratory (FSC) at the University of Toronto Institute for Aerospace Studies (UTIAS). Experiments are performed with ground truth provided by an indoor motion capture system, and the results are analyzed. It is demonstrated that a velocity controller which incorporates the additional information performs better than controllers which do not have access to the target's information.
Keywords: quadrotor, target tracking, unmanned aerial vehicle, UAV, UAS, visual servoing
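The benefit of sharing the target's velocity can be sketched in a few lines: a proportional position controller with a velocity feedforward term (the gains and states below are illustrative assumptions, not the paper's controller):

```python
# Horizontal velocity setpoint for the quadrotor's inner loop: feedback on
# position error plus feedforward of the shared target velocity.
def velocity_command(p_uav, p_target, v_target, kp=1.2, use_feedforward=True):
    error = (p_target[0] - p_uav[0], p_target[1] - p_uav[1])
    ff = v_target if use_feedforward else (0.0, 0.0)
    return (kp * error[0] + ff[0], kp * error[1] + ff[1])

cmd = velocity_command((0.0, 0.0), (1.0, 0.5), (0.3, 0.0))
print(cmd)  # the feedforward term removes the steady-state lag behind a moving target
```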
Procedia PDF Downloads 342
14535 Motion Capture Based Wizard of Oz Technique for Humanoid Robot
Authors: Rafal Stegierski, Krzysztof Dmitruk
Abstract:
The paper focuses on a robotic tele-presence system built around a humanoid robot operated with a controller-less Wizard of Oz technique. The proposed solution makes it possible to quickly start acting as an operator with short, if any, initial training.
Keywords: robotics, motion capture, Wizard of Oz, humanoid robots, human robot interaction
Procedia PDF Downloads 481
14534 Adaptive CFAR Analysis for Non-Gaussian Distribution
Authors: Bouchemha Amel, Chachoui Takieddine, H. Maalem
Abstract:
Automatic detection of targets in a modern RADAR communication system is based primarily on the concept of the adaptive CFAR (constant false alarm rate) detector. For effective detection, we must minimize the influence of disturbances due to the clutter. The detection algorithm adapts the CFAR detection threshold, which is proportional to the average power of the clutter, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms, CA-CFAR and OS-CFAR, and we compare the thresholds of these detectors in the marine environment (non-Gaussian) with a Weibull distribution.
Keywords: CFAR, threshold, clutter, distribution, Weibull, detection
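A minimal sketch of the CA-CFAR variant analyzed above, assuming square-law (exponential) clutter for the threshold multiplier; the window sizes and data are illustrative:

```python
import numpy as np

def ca_cfar(power, n_ref=16, n_guard=2, pfa=1e-3):
    """Return a boolean detection mask over a 1-D power profile."""
    n = len(power)
    half = n_ref // 2
    # threshold multiplier for exponential (square-law) clutter statistics
    alpha = n_ref * (pfa ** (-1.0 / n_ref) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(half + n_guard, n - half - n_guard):
        lead = power[i - n_guard - half : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + half]
        noise = (lead.sum() + lag.sum()) / n_ref    # average clutter power
        detections[i] = power[i] > alpha * noise    # adaptive threshold
    return detections

rng = np.random.default_rng(0)
x = rng.exponential(1.0, 200)
x[100] += 30.0                       # injected target
print(np.flatnonzero(ca_cfar(x)))    # typically reports index 100
```

An OS-CFAR variant would replace the cell average with an order statistic of the sorted reference cells (e.g. `np.sort(cells)[k]`), which is what makes it more robust to interfering targets.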
Procedia PDF Downloads 589
14533 Seasonal Prevalence of Gastrointestinal Parasites and Their Association with Trace Element Contents in Sera of Sheep, Grazing Forages and Soils of Sialkot District, Punjab, Pakistan
Authors: Hafiz M. Rizwan, Muhammad S. Sajid, Zafar Iqbal, Muhammad Saqib
Abstract:
Gastro-intestinal (GI) helminth infections in sheep cause substantial losses in terms of productivity and constitute serious economic losses worldwide. Different types of forages are rich in trace element contents and may act as a natural resource to remedy trace element deficiencies, boosting immunity in general and against gastrointestinal parasitic infections in particular. In the present study, the levels of trace elements (Cu, Co, Mn, Zn) were determined in sera of different breeds of sheep, in available feedstuffs, and in respective soil samples, and their association with GI helminths was examined in Sialkot district, Punjab, Pakistan. An almost similar prevalence of GI helminths was recorded during spring 2015 (32.81%) and autumn 2014 (32.55%). The parasitic species identified from the microscopically scanned faecal samples of district Sialkot were Fasciola (F.) hepatica, F. gigantica, Haemonchus contortus, Eimeria crandallis, Gongylonema pulchrum, Oesophagostomum sp., Trichuris ovis, Strongyles sp., Cryptosporidium sp. and Trichostrongylus sp. Among variables like age, sex, and breed, only sex was found significant in district Sialkot. A significant (P < 0.05) variation in the concentrations of Zn, Cu, Mn, and Co was recorded in the collected forage species. Soils of the grazing fields showed insignificant (P > 0.05) variation among the soils of different tehsils of Sialkot district. Statistically, sera of sheep showed no variation (P > 0.05) during autumn 2014, but showed variation (P < 0.05) among different tehsils of Sialkot district during spring 2015, except for Co. During autumn 2014, the mean concentrations of Cu, Zn, and Co in sera were inversely proportional to the mean eggs per gram (EPG) of sheep, while during spring 2015 only Zn was inversely proportional to the mean EPG. Forages rich in trace elements, preferably Zn, were effective against helminth infection. Trace element-rich forages will be recommended for utilization as an alternative to remedy trace element deficiencies in sheep, which ultimately boosts immunity against gastrointestinal parasitic infections.
Keywords: coprological examination, gastro-intestinal parasites, prevalence, sheep, trace elements
Procedia PDF Downloads 346
14532 Modelling Optimal Control of Diabetes in the Workplace
Authors: Eunice Christabel Chukwu
Abstract:
Introduction: Diabetes is a chronic medical condition characterized by high levels of glucose in the blood and urine; it is usually diagnosed by means of a glucose tolerance test (GTT). Diabetes can cause a range of serious health problems if left unmanaged. It is essential to manage the condition effectively, particularly in the workplace, where the impact on work productivity can be significant. This paper discusses the modelling of optimal control of diabetes in the workplace using a control theory approach. Background: Diabetes mellitus is a condition caused by too much glucose in the blood. Insulin, a hormone produced by the pancreas, controls the blood sugar level by regulating the production and storage of glucose. In diabetes, there may be a decrease in the body's ability to respond to insulin or a decrease in the insulin produced by the pancreas, which leads to abnormalities in the metabolism of carbohydrates, proteins, and fats. In addition to the health implications, the condition can also have a significant impact on work productivity, as employees with uncontrolled diabetes are at risk of absenteeism, reduced performance, and increased healthcare costs. While several interventions are available to manage diabetes, the most effective approach is to control blood glucose levels through a combination of lifestyle modifications and medication. Methodology: The control theory approach involves modelling the dynamics of the system and designing a controller that can regulate the system to achieve optimal performance. In the case of diabetes, the system dynamics can be modelled using a mathematical model that describes the relationship between insulin, glucose, and other variables. The controller can then be designed to regulate the glucose levels to maintain them within a healthy range. Results: The modelling of optimal control of diabetes in the workplace using a control theory approach has shown promising results. The model has been able to predict the optimal dose of insulin required to maintain glucose levels within a healthy range, taking into account the individual's lifestyle, medication regimen, and other relevant factors. The approach has also been used to design interventions that can improve diabetes management in the workplace, such as regular glucose monitoring and education programs. Conclusion: The modelling of optimal control of diabetes in the workplace using a control theory approach has significant potential to improve diabetes management and work productivity. By using a mathematical model and a controller to regulate glucose levels, the approach can help individuals with diabetes to achieve optimal health outcomes while minimizing the impact of the condition on their work performance. Further research is needed to validate the model and develop interventions that can be implemented in the workplace.
Keywords: mathematical model, blood, insulin, pancreas, model, glucose
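As an illustration of the methodology paragraph, here is a heavily hedged sketch: a simplified Bergman-type glucose-insulin minimal model regulated by a proportional insulin controller. All parameter values are illustrative assumptions, not clinical data and not the paper's model:

```python
# Bergman-type minimal model (assumed parameters, arbitrary but plausible units)
p1, p2, p3 = 0.03, 0.02, 1e-5      # glucose decay, insulin-action decay, gain
n, Gb, Ib = 0.1, 90.0, 10.0        # insulin clearance, basal glucose/insulin
dt, T = 1.0, 600                   # 1-minute steps over a 10-hour horizon

G, X, I = 180.0, 0.0, Ib           # start from a hyperglycaemic state
kp, target = 0.05, 100.0           # proportional controller gain and setpoint
for t in range(T):
    u = max(0.0, kp * (G - target))          # insulin infusion, never negative
    dG = -p1 * (G - Gb) - X * G              # glucose dynamics
    dX = -p2 * X + p3 * (I - Ib)             # remote insulin action
    dI = -n * (I - Ib) + u                   # plasma insulin with infusion input
    G, X, I = G + dt * dG, X + dt * dX, I + dt * dI
print(f"glucose after 10 h: {G:.1f} mg/dL")  # settles toward the target range
```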
Procedia PDF Downloads 63
14531 Analysis of Evolution of Higher Order Solitons by Numerical Simulation
Authors: K. Khadidja
Abstract:
Solitons are stable solutions of the nonlinear Schrödinger equation. Their stability is due to the exact balance between nonlinearity and dispersion, the latter of which causes pulse broadening. Higher order solitons are born when the nonlinear length is an N-fold multiple of the dispersion length, and the soliton order is determined by the number N itself. In this paper, the evolution of higher order solitons is illustrated by simulation using Matlab. Results show that higher order solitons change their shape periodically, which is the reason why they are bad for transmission compared to fundamental solitons, whose shape is constant. A partial analysis of a higher order soliton shows that the periodic shape is due to the interplay between nonlinearity and dispersion, which are not equal over the course of a period. This class of solitons has many applications, such as the generation of supercontinuum and pulse compression on the femtosecond scale. In conclusion, the periodicity that is harmful to transmission can be beneficial in other applications.
Keywords: dispersion, nonlinearity, optical fiber, soliton
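A minimal split-step Fourier sketch of the kind of simulation described (written in Python rather than Matlab; the grid sizes and the N = 3 launch condition are illustrative choices):

```python
import numpy as np

# Normalized NLSE: i u_z + (1/2) u_tt + |u|^2 u = 0, launched with
# u(0, t) = N sech(t), an N = 3 higher-order soliton.
N_soliton = 3
nt, t_max = 2048, 20.0
t = np.linspace(-t_max, t_max, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])    # angular frequencies

u = N_soliton / np.cosh(t)         # initial pulse
dz, z_end = 1e-3, np.pi / 2        # one soliton period is z0 = pi/2
half_disp = np.exp(-0.5j * w**2 * (dz / 2))          # half-step of dispersion

for _ in range(int(z_end / dz)):
    u = np.fft.ifft(half_disp * np.fft.fft(u))       # dispersion, half step
    u = u * np.exp(1j * np.abs(u)**2 * dz)           # nonlinearity, full step
    u = np.fft.ifft(half_disp * np.fft.fft(u))       # dispersion, half step

# After one soliton period the N = 3 soliton recovers its launch shape,
# having changed shape periodically in between.
print(np.max(np.abs(u)))           # peak amplitude, close to N_soliton again
```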
Procedia PDF Downloads 168
14530 Brief Inquisition of Photocatalytic Degradation of Azo Dyes by Magnetically Enhanced Zinc Oxide Nanoparticles
Authors: Thian Khoon Tan, Poi Sim Khiew, Wee Siong Chiu, Chin Hua Chia
Abstract:
This study investigates the efficacy of magnetically enhanced zinc oxide (MZnO) nanoparticles as a photocatalyst in the photodegradation of synthetic dyes, especially azo dyes. The magnetised zinc oxide was fabricated simply by mechanical mixing followed by low-temperature calcination. The MZnO was analysed through several analytical measurements, including FESEM, XRD, BET, EDX, and TEM, as well as VSM analysis, which confirmed successful fabrication. High volumes of azo dyes are found in industrial effluent wastewater. They pose a serious environmental problem and are very harmful to human health due to their high stability and carcinogenic properties. Therefore, five azo dyes, Reactive Red 120 (RR120), Disperse Blue 15 (DB15), Acid Brown 14 (AB14), Orange G (OG), and Acid Orange 7 (AO7), were randomly selected to study their photodegradation with reference to a few characteristics, such as the number of azo functional groups, the number of benzene groups, molecular mass, and absorbance. The photocatalytic degradation efficiency was analysed using a UV-vis spectrophotometer, from which the reaction rate constant was obtained. It was found that the azo dyes were significantly degraded following first-order kinetics, with a higher rate constant as the number of azo functional groups and benzene groups increases. However, the kinetic constant is inversely proportional to the molecular weight of these azo dyes.
Keywords: nanoparticles, photocatalyst, magnetically enhanced, wastewater, synthetic dyes, azo dyes
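A minimal sketch of how the first-order rate constant is extracted from absorbance data; the time points and absorbances below are hypothetical, not measurements from this study:

```python
import numpy as np
from scipy.stats import linregress

t = np.array([0, 15, 30, 45, 60, 90])                # irradiation time, minutes
A = np.array([1.00, 0.72, 0.52, 0.37, 0.27, 0.14])   # absorbance (proportional to C)

# First-order kinetics: ln(C0/Ct) = k t, so k is the slope of a line fit.
fit = linregress(t, np.log(A[0] / A))
print(f"k = {fit.slope:.4f} 1/min, R^2 = {fit.rvalue**2:.3f}")
```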
Procedia PDF Downloads 19
14529 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on the measurement of the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, which is developed by the Centre for Medical Radiation Physics at the University of Wollongong, and measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionization Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10. Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made with the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in the axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results have shown that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin in mV/cGy was as follows: 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which are statistically equivalent within the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
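A minimal sketch of the translation-and-integration idea: integrate a measured point-dose profile over an arbitrary length. The Gaussian-plus-tails profile below is hypothetical:

```python
import numpy as np

# Hypothetical dose profile recorded while translating a point detector
# through the beam plane: a primary peak plus broad scatter tails.
z = np.linspace(-150.0, 150.0, 601)                  # position along axis, mm
profile = np.exp(-0.5 * (z / 6.0)**2) + 0.02 * np.exp(-np.abs(z) / 40.0)

# Accumulated dose over a chosen integration length L (CTDI-style integral).
for L in (100.0, 300.0):
    mask = np.abs(z) <= L / 2
    integral = np.trapz(profile[mask], z[mask])      # integral of the profile
    print(f"L = {L:.0f} mm: integral = {integral:.2f}")
# The 100 mm window misses part of the scatter tails that the 300 mm window collects,
# which is exactly the inadequacy of the fixed 100 mm chamber for wide beams.
```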
Procedia PDF Downloads 311
14528 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Diabetes in India is growing at an alarming rate, and the complications caused by it need to be controlled. Coronary heart disease (CHD) is one such complication, and its prediction is addressed in this study. India has the second highest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 patients was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. The collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high density lipoprotein (HDL), low density lipoprotein (LDL), very low density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of CHD. Predictive risk scores of CHD events are designed by Cox proportional hazards regression. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. Youden's index is used to choose the optimal cut-off point from the scores. The five-year probability of CHD is predicted by both the survival function and a two-state Markov chain model, and the better technique is concluded. The CHD risk scores developed can be calculated by doctors and patients for self-control of diabetes. Furthermore, the five-year probabilities can be implemented as well to forecast and monitor the condition of patients.
Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus
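A minimal sketch of the Youden's-index step, choosing the optimal cut-off point from an ROC curve; the scores and labels are synthetic, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic risk scores: controls around 0, cases shifted upward.
rng = np.random.default_rng(1)
y = np.r_[np.zeros(200), np.ones(50)].astype(int)        # 0 = no CHD, 1 = CHD
scores = np.r_[rng.normal(0.0, 1.0, 200), rng.normal(1.5, 1.0, 50)]

fpr, tpr, thresholds = roc_curve(y, scores)
j = tpr - fpr                        # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"optimal cut-off = {thresholds[best]:.3f}, J = {j[best]:.3f}")
```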
Procedia PDF Downloads 220
14527 Output-Feedback Control Design for a General Class of Systems Subject to Sampling and Uncertainties
Authors: Tomas Menard
Abstract:
The synthesis of output-feedback control laws has been investigated by many researchers since the last century. While many results exist for the case of Linear Time Invariant systems whose measurements are continuously available, nowadays control laws are usually implemented on a micro-controller, so the measurements are discrete-time by nature. This fact has to be taken into account explicitly in order to obtain a satisfactory behavior of the closed-loop system. We consider here a general class of systems corresponding to an observability normal form which is subject to uncertainties in the dynamics and to sampling of the output. Indeed, in practice, the modeling of the system is never perfect; this results in unknown uncertainties in the dynamics of the model. We propose here an output-feedback algorithm which is based on a linear state feedback and a continuous-discrete time observer. The main feature of the proposed control law is that only discrete-time measurements of the output are needed. Furthermore, it is formally proven that the state of the closed-loop system exponentially converges toward the origin despite the unknown uncertainties. Finally, the performance of this control scheme is illustrated with simulations.
Keywords: dynamical systems, output feedback control law, sampling, uncertain systems
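A minimal sketch of the continuous-discrete observer idea on a double integrator (a simple observability normal form); the gains, sampling period, and plant are illustrative assumptions, not the paper's design:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # chain of two integrators
C = np.array([[1.0, 0.0]])               # only the first state is measured
L = np.array([[2.0], [1.0]])             # observer gain (assumed)
K = np.array([[1.0, 1.7]])               # state-feedback gain (assumed)

dt, Ts = 1e-3, 0.05                      # integration step and sampling period
x = np.array([[1.0], [0.5]])             # true state
xh = np.zeros((2, 1))                    # observer estimate
next_sample = 0.0
for k in range(int(5.0 / dt)):
    t = k * dt
    u = -(K @ xh)                        # output feedback: uses the estimate only
    if t >= next_sample:                 # a discrete measurement arrives
        y = C @ x
        xh = xh + Ts * L @ (y - C @ xh)  # correction at the sampling instant
        next_sample += Ts
    # between samples the observer runs in open loop (pure prediction)
    x = x + dt * (A @ x + np.array([[0.0], [1.0]]) * u)
    xh = xh + dt * (A @ xh + np.array([[0.0], [1.0]]) * u)
print(x.ravel(), xh.ravel())             # both should be near the origin
```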
Procedia PDF Downloads 286
14526 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)
Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara
Abstract:
Clinical outcomes collected over the past three decades have suggested that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome treatment uncertainties and optimize treatment outcome, the best possible description of radiation quality, linking the physical dose to the biological effects, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposition in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called 'Mean Chord Length' (MCL) approximation, and it is related to the detector geometry. To improve the characterization of the radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. In order to measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC detector records the energy deposition in a region equivalent to 2 μm of tissue, while the LGADs are very suitable for particle tracking because their thickness can be thinned down to tens of micrometers and they respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations. Currently, a dedicated readout is under development. This two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the respective track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three different signal amplification legs and is carried out by 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are finally stored, relying again on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry
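A minimal geometry sketch of the key idea: computing the actual track length through the sensitive volume from two idealized tracker hits. A spherical volume and hypothetical hit positions are assumed for simplicity:

```python
import numpy as np

R = 1.0                                   # TEPC sensitive-volume radius (a.u.)
p1 = np.array([0.3, -0.1, -3.0])          # hit on an upstream tracker plane
p2 = np.array([0.1,  0.2,  3.0])          # hit on a downstream tracker plane

d = (p2 - p1) / np.linalg.norm(p2 - p1)   # track direction from the two hits
# Intersect the line p1 + s d with the sphere: solve |p1 + s d|^2 = R^2.
b = 2.0 * p1 @ d
c = p1 @ p1 - R**2
disc = b**2 - 4.0 * c
if disc > 0:                              # the track crosses the volume
    track_length = np.sqrt(disc)          # difference of the two roots
    energy_keV = 40.0                     # hypothetical TEPC energy deposit
    # lineal energy computed with the event's real track length, not the MCL
    print(f"track length = {track_length:.3f}, y = {energy_keV/track_length:.1f}")
```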
Procedia PDF Downloads 177
14525 Implementing Building Information Modelling to Attain Lean and Green Benefits
Authors: Ritu Ahuja
Abstract:
Globally, the built environment sector is striving to be highly efficient, quality-centred and socially responsible. The built environment sector is an integral part of the economy and plays an important role in urbanization, industrialization and improved quality of living. Its inherent challenges, such as excessive material and process waste, over-reliance on resources, energy usage, and carbon footprint, need to be addressed in order to meet the needs of the economy. It is envisioned that these challenges can be resolved by the integration of the Lean, Green and Building Information Modelling (BIM) paradigms. Ipso facto, with BIM as a catalyst, this research identifies the operational and tactical connections of the lean and green philosophies by providing a conceptual integration framework and underpinning theories. The research has developed a framework of BIM-based organizational capabilities for enhanced adoption and effective use of BIM within architectural organizations. The study was conducted through a sequential mixed-method approach focusing on collecting and analyzing both qualitative and quantitative data. The framework developed as part of this study will enable architectural organizations to successfully embrace BIM on projects and gain lean and green benefits.
Keywords: BIM, lean, green, AEC organizations
Procedia PDF Downloads 189
14524 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities are at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Pólya were the first significant composition on the subject; their work presented fundamental ideas, results and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for particular operators; in 1989, weighted Hardy inequalities were obtained for integration operators. Weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. They were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by differential operators. The Hardy inequality has been one of the tools used to study integral solutions of differential equations. Dynamic inequalities of Hardy and Copson type have then been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some inequalities involving the Copson and Hardy inequalities on time scales have appeared, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on the Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that will be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases, and they use concepts of time-scale calculus, which allows many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, theorems of Fubini type, and the Hölder inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
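For reference, the classical one-dimensional Hardy inequality that the study builds on can be stated as follows (standard textbook form, not quoted from the paper):

```latex
% Classical Hardy inequality (integral form), for p > 1 and f >= 0 measurable:
\int_{0}^{\infty} \left( \frac{1}{x} \int_{0}^{x} f(t)\, dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_{0}^{\infty} f(x)^{p}\, dx,
% where the constant (p/(p-1))^p is sharp.
```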
Procedia PDF Downloads 97
14523 Design and Implementation of Control System in Underwater Glider of Ganeshblue
Authors: Imam Taufiqurrahman, Anugrah Adiwilaga, Egi Hidayat, Bambang Riyanto Trilaksono
Abstract:
The Autonomous Underwater Vehicle (AUV) glider is one of the newer classes of underwater vehicles and is one of the autonomous underwater vehicles being developed in Indonesia. The ability to glide is obtained by controlling the buoyancy and attitude of the vehicle using the actuators within the vehicle. The gliding motion mechanism is expected to reduce the energy consumption of the autonomous underwater vehicle so as to increase its cruising range while performing missions. The control system of the vehicle consists of three parts: the pitch attitude controller, the buoyancy engine controller and the yaw controller. The buoyancy and pitch controls of the vehicle are sequenced by a finite state machine, with the pitch angle and diving depth as inputs, to obtain a gliding cycle, while the yaw control is done through the rudder for the needs of the guidance system. This research is focused on the design and implementation of the control system of an AUV glider based on anti-windup PID control. The control system is implemented on an ARM TS-7250-V2 device along with a mathematical model of the vehicle in MATLAB using the hardware-in-the-loop simulation (HILS) method. The TS-7250-V2 was chosen because it complies with industry standards, has high computing capability, and consumes minimal power. The results show that the control system in the HILS process can form a glide cycle with the desired depth and angle of operation. From the implementation using half-control and full-control modes, it can be concluded that full-control mode tracks the reference more precisely, while half-control mode is considered more efficient in carrying out the mission.
Keywords: control system, PID, underwater glider, marine robotics
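A minimal sketch of the anti-windup PID technique named above, using back-calculation; the gains and saturation limits are illustrative, not the glider's tuned values:

```python
# PID controller with back-calculation anti-windup: when the actuator
# saturates, the integrator is bled off so it cannot wind up.
class PIDAntiWindup:
    def __init__(self, kp, ki, kd, u_min, u_max, kb, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.kb = kb                     # back-calculation gain
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        u_sat = min(max(u, self.u_min), self.u_max)   # actuator saturation
        # back-calculation term drives the integral down while saturated
        self.integral += (error + self.kb * (u_sat - u)) * self.dt
        return u_sat

pid = PIDAntiWindup(kp=2.0, ki=0.5, kd=0.1, u_min=-1.0, u_max=1.0, kb=1.0, dt=0.01)
print(pid.update(setpoint=10.0, measurement=9.2))     # one control step
```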
Procedia PDF Downloads 374
14522 Analysis of Lead Time Delays in Supply Chain: A Case Study
Authors: Abdel-Aziz M. Mohamed, Nermeen Coutry
Abstract:
Lead time is an important measure of supply chain performance. It impacts both customer satisfaction and the total cost of inventory. This paper presents the results of a study on the analysis of the customer order lead time for a multinational company. In the study, the lead time was divided into three stages: order entry, order fulfillment, and order delivery. A sample of 2,425 order lines from the company records was considered for this study. The sample data include information regarding customer orders from the time of order entry until order delivery. Data regarding the lead time of each stage for different orders were also provided. Summary statistics on the lead time data reveal that about 30% of the orders were delivered after the scheduled due date. The results of the multiple linear regression analysis revealed that component type, logistics parameter, order size and customer type have a significant impact on lead time. Data analysis on the stages of lead time indicates that stage 2 consumes over 50% of the lead time. A Pareto analysis was made to study the reasons for customer order delay in each of the 3 stages. Recommendations were given to resolve the problem.
Keywords: lead time reduction, customer satisfaction, service quality, statistical analysis
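A minimal sketch of the Pareto-analysis step; the delay causes and counts below are hypothetical, not the company's data:

```python
import pandas as pd

# Hypothetical counts of delay causes for one lead-time stage.
causes = pd.Series(
    {"credit hold": 412, "stock-out": 385, "carrier delay": 160,
     "data entry error": 95, "customs": 48, "other": 30}
).sort_values(ascending=False)

cum_pct = 100 * causes.cumsum() / causes.sum()
pareto = pd.DataFrame({"count": causes, "cumulative %": cum_pct.round(1)})
print(pareto)
# The "vital few" causes above the ~80% cumulative line are attacked first.
```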
Procedia PDF Downloads 733
14521 The Application of the Taguchi Method to Optimize Pellet Quality in Broiler Feeds
Authors: Reza Vakili
Abstract:
The aim of this experiment was to optimize the effects of moisture, production rate, grain particle size and steam conditioning temperature on pellet quality in broiler feed using the Taguchi method, and a 4×3 fractional factorial arrangement was conducted. Different production rates, steam conditioning temperatures, particle sizes and moisture contents were tested. During the production process, sampling was done, and the pellet durability index (PDI) and hardness were then evaluated in broiler grower and finisher feeds. There was a significant effect of the processing parameters on PDI and hardness. Based on the results of this experiment, the Taguchi method can be used to find the best combination of factors for optimal pellet quality.
Keywords: broiler, feed physical quality, hardness, processing parameters, PDI
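A minimal sketch of the Taguchi signal-to-noise calculation that would rank factor combinations for PDI (a larger-is-better response); the replicate values are hypothetical:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N for larger-is-better responses: -10 log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

runs = {                          # replicate PDI (%) per factor combination
    "run 1": [92.1, 93.0, 91.5],
    "run 2": [88.4, 87.9, 89.2],
    "run 3": [94.6, 95.1, 94.0],
}
for name, y in runs.items():
    print(f"{name}: S/N = {sn_larger_is_better(y):.2f} dB")
# The factor levels of the run with the highest S/N are the optimum settings.
```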
Procedia PDF Downloads 188
14520 University of Bejaia, Algeria
Authors: Geoffrey Sinha
Abstract:
Today’s students are connected to the digital generation and technology is an integral part of their everyday lives. Clearly, this is one social revolution that is here to stay and the language classroom has been no exception. Furthermore, today’s teachers are also expected to connect with technology and online tools in their curriculum. However, it’s often difficult for teachers to know where to start, what resources and tools are available, what students should use, and most importantly, how to effectively use them in the classroom.
Keywords: language learning, new media, social media, technology
Procedia PDF Downloads 465
14519 Control of an SIR Model for Basic Reproduction Number Regulation
Authors: Enrique Barbieri
Abstract:
The basic disease-spread model described by three states, denoting the susceptible (S), infectious (I), and removed (recovered and deceased) (R) sub-groups of the total population N, or SIR model, has been considered. Heuristic mitigating action profiles of the pharmaceutical and non-pharmaceutical types may be developed in a control design setting for the purpose of reducing the transmission rate or improving the recovery rate parameters in the model. Even though the transmission and recovery rates are not control inputs in the traditional sense, a linear observer and feedback controller can be tuned to generate an asymptotic estimate of the transmission rate for a linearized, discrete-time version of the SIR model. Then, a set of mitigating actions is suggested to steer the basic reproduction number toward unity, in which case the disease does not spread and the infected population state does not suffer from multiple waves. The special case of a piecewise constant transmission rate is described and applied to a seventh-order SEIQRDP model, which segments the population into four additional states. The offline simulations in discrete time may be used to produce heuristic policies implemented by public health and government organizations.
Keywords: control of SIR, observer, SEIQRDP, disease spread
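A minimal discrete-time SIR sketch of the steering idea, reducing the transmission rate so that the basic reproduction number is pushed toward unity; parameter values are illustrative assumptions:

```python
# Discrete-time SIR with a piecewise constant transmission rate beta.
# R0 = beta / gamma; mitigation sets beta = gamma, i.e. R0 = 1.
N, dt = 1_000_000, 1.0              # population and time step (days)
gamma = 0.1                         # removal rate (1/days)
beta = 0.3                          # uncontrolled transmission rate (R0 = 3)
S, I, R = N - 100.0, 100.0, 0.0

for day in range(300):
    if day == 20:                   # mitigating actions begin
        beta = gamma * 1.0          # steer R0 to unity
    new_inf = dt * beta * S * I / N
    new_rem = dt * gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rem, R + new_rem

print(f"final R0 = {beta/gamma:.2f}, infectious remaining = {I:.0f}")
# With R0 held at 1 the infectious state flattens instead of producing a wave.
```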
Procedia PDF Downloads 112
14518 Development of Mobile Application for Internship Program Management Using the Concept of Model View Controller (MVC) Pattern
Authors: Shutchapol Chopvitayakun
Abstract:
Nowadays, and especially over the last 5 years, mobile devices, mobile applications and mobile users have all grown significantly bigger and stronger through the deployment of wireless communication and cellular phone networks. They are being integrated with each other for multiple purposes and pervasive deployment into every business and non-business sector, such as education, medicine, traveling, finance, real estate and many more. The objective of this study was to develop a mobile application for seniors or last-year students who enrol in the internship program at a tertiary school (undergraduate school) and do onsite practice at real field sites, real organizations and real workspaces. During the internship session, all students, as interns, are required to exercise, drill and train onsite at specific locations and on specific tasks, possibly with assignments from their supervisor. Their workplaces include both private and government corporations and enterprises. This mobile application is developed under the schema of a transaction processing system that enables users to keep a daily work or practice log, monitor true working locations and follow the daily tasks of each trainee. Moreover, it provides useful guidance from each intern's advisor in case of emergency. Finally, it can summarize all transactional data and then calculate each intern's cumulative internship hours from the field practice sessions.
Keywords: internship, mobile application, Android OS, smart phone devices, mobile transactional processing system, guidance and monitoring, tertiary education, senior students, model view controller (MVC)
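A minimal sketch of the Model-View-Controller separation the application is built on; the class and method names are illustrative, not the app's actual code:

```python
class InternshipLogModel:                      # Model: owns the data
    def __init__(self):
        self.entries = []                      # (date, hours, location) tuples

    def add_entry(self, date, hours, location):
        self.entries.append((date, hours, location))

    def total_hours(self):
        return sum(h for _, h, _ in self.entries)

class InternshipLogView:                       # View: renders the data
    def show_total(self, name, hours):
        print(f"{name}: {hours} cumulative internship hours")

class InternshipLogController:                 # Controller: mediates user input
    def __init__(self, model, view):
        self.model, self.view = model, view

    def log_day(self, date, hours, location):
        self.model.add_entry(date, hours, location)

    def report(self, name):
        self.view.show_total(name, self.model.total_hours())

ctrl = InternshipLogController(InternshipLogModel(), InternshipLogView())
ctrl.log_day("2016-06-01", 8, "site A")
ctrl.log_day("2016-06-02", 7, "site A")
ctrl.report("intern 001")
```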
Procedia PDF Downloads 315
14517 Double Clustering as an Unsupervised Approach for Order Picking of Distributed Warehouses
Authors: Hsin-Yi Huang, Ming-Sheng Liu, Jiun-Yan Shiau
Abstract:
Planning the order-picking lists of warehouses so as to reduce the logistics costs that bear on operational performance is a significant challenge. In the e-commerce era, this task is especially important because the costs of the productive processes are high. Nowadays, many order planning techniques employ supervised machine learning algorithms. However, defining which features should be processed by such algorithms is not a simple task, and it is crucial to the proposed technique's success. Against this background, we consider whether unsupervised algorithms can enhance the planning of order-picking lists. A Zone2 picking approach, which is based on using clustering algorithms twice, is developed. A simplified example is given to demonstrate the merit of our approach.
Keywords: order picking, warehouse, clustering, unsupervised learning
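A minimal sketch of the "clustering twice" idea using k-means at both levels; the synthetic data and cluster counts are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import KMeans

# First clustering: group orders into picking zones by the (x, y) centroid
# of each order's item locations. Second clustering: form batches per zone.
rng = np.random.default_rng(0)
orders = rng.random((120, 2))            # synthetic order centroids

zones = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(orders)
picking_lists = {}
for z in range(4):                       # cluster again, within each zone
    idx = np.flatnonzero(zones == z)
    batches = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(orders[idx])
    for b in range(3):
        picking_lists[(z, b)] = idx[batches == b].tolist()

print(picking_lists[(0, 0)])             # order indices in one picking list
```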
Procedia PDF Downloads 160
14516 Soliton Solutions of the Higher-Order Nonlinear Schrödinger Equation with Dispersion Effects
Authors: H. Triki, Y. Hamaizi, A. El-Akrmi
Abstract:
We consider the higher-order nonlinear Schrödinger equation model with fourth-order dispersion, cubic-quintic terms, and self-steepening. This equation governs the propagation of femtosecond pulses in optical fibers. We present new bright and dark solitary wave type solutions for such a model under certain parametric conditions. This kind of solution may be useful to explain some physical phenomena related to wave propagation in nonlinear optical fiber systems supporting higher-order nonlinear and dispersive effects.
Keywords: nonlinear Schrödinger equation, high-order effects, soliton solution
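A representative form of the model class described above reads as follows (the normalization and signs are one common convention, not quoted from the paper):

```latex
% Higher-order NLSE with fourth-order dispersion (beta_4), cubic-quintic
% nonlinearity (gamma_1, gamma_2) and self-steepening (s):
i\,\frac{\partial u}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 u}{\partial t^2}
  - \frac{i\,\beta_3}{6}\,\frac{\partial^3 u}{\partial t^3}
  + \frac{\beta_4}{24}\,\frac{\partial^4 u}{\partial t^4}
  + \gamma_1 |u|^2 u + \gamma_2 |u|^4 u
  + i\,s\,\frac{\partial}{\partial t}\!\left(|u|^2 u\right) = 0
```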
Procedia PDF Downloads 636