Search results for: the doctrine of equivalent
283 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa
Abstract:
The aim of this study is to evaluate the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in the AAPM TG-53, NCS Report 15, IAEA TRS 430 and ESTRO Booklet No. 7 test packages. The study was performed for an Elekta Precise linear accelerator designed for a clinical range of 4, 6 and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied to a solid water equivalent phantom along with a 2D array dose detection system. Doses calculated using the 3D treatment planning system PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g. wedges, blocks, MLC-shaped fields) and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits, with two exceptions: for the large elongated wedged field, the errors on and outside the central axis were 0.2% and 0.5%, respectively, and for the off-planned and off-axis elongated fields, the errors in the region outside the central axis of the beam were 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the recommended tolerance level. Differences between dose values predicted by the TPS and values measured at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam
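As an illustration of the comparison described above, the following sketch computes the percentage deviation between a TPS-calculated and a measured point dose and checks it against a tolerance limit; the function names, the 2% tolerance and the dose values are illustrative assumptions, not figures from the study.

```python
# Illustrative sketch (not from the paper): comparing a TPS-calculated and a measured
# point dose as a percentage deviation and checking it against an assumed tolerance.

def dose_deviation_percent(calculated_cgy, measured_cgy):
    """Percentage deviation of the TPS-calculated dose from the measured dose."""
    return 100.0 * (calculated_cgy - measured_cgy) / measured_cgy

def within_tolerance(calculated_cgy, measured_cgy, tolerance_percent=2.0):
    """True if the absolute deviation is inside the (assumed) tolerance limit."""
    return abs(dose_deviation_percent(calculated_cgy, measured_cgy)) <= tolerance_percent

# Hypothetical wedged-field point dose, in cGy
calc, meas = 201.0, 200.0
print(f"deviation = {dose_deviation_percent(calc, meas):+.1f}%",
      "PASS" if within_tolerance(calc, meas) else "FAIL")
```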
282 Assessment of Some Biological Activities of Methanolic Crude Extract from Polygonum maritimum L.
Authors: Imad Abdelhamid El-Haci, Wissame Mazari, Fayçal Hassani, Fawzia Atik Bekkara
Abstract:
Much attention has been paid to antioxidants, which are expected to prevent food and living systems from peroxidative damage. Incorporation of synthetic antioxidants in food products is under strict regulation due to the potential health hazards caused by such compounds. The use of plants as traditional health remedies is very popular and important for 80% of the world's population in African, Asian, Latin American and Middle Eastern countries. Their use is reported to have minimal side effects. In recent years, pharmaceutical companies have spent considerable time and money in developing therapeutics based upon natural products extracted from plants. On the other hand, due to the continuous emergence of antibiotic-resistant strains, there is a continual demand for new antibiotics. Chemical compounds from medicinal plants in particular are targeted by many studies. In this light, the genus Polygonum (family Polygonaceae, which comprises about 45 genera) includes some 300 species distributed worldwide, mostly in north temperate regions. These species have been reported to have uses in traditional medicine, such as reducing inflammation, promoting blood circulation, and treating dysentery and haemorrhage, as well as serving as diuretics, among many other uses. In our study, Polygonum maritimum (from the Algerian coast) was extracted with 80% methanol to obtain a crude extract. The P. maritimum extract (PME) had a very high total phenol content of 352.49 ± 18.03 mg/g dry weight, expressed as gallic acid equivalent. PME exhibited excellent antioxidant activity, as measured using DPPH and H2O2 scavenging assays. It also showed high antibacterial activity against gram-positive bacterial strains (Bacillus cereus, Bacillus subtilis and Staphylococcus aureus) with an MIC of 0.12 mg/mL.
Keywords: Polygonum maritimum, crude extract, antioxidant activity, antibacterial activity
281 Market Chain Analysis of Onion: The Case of Northern Ethiopia
Authors: Belayneh Yohannes
Abstract:
In Ethiopia, onion production is increasing from time to time, mainly due to its high profitability per unit area. Onion makes a significant contribution to generating cash income for farmers in the Raya Azebo district. Therefore, enhancing onion producers' access to the market and improving market linkage is an essential issue. Hence, this study aimed to analyze the structure-conduct-performance of the onion market and to identify factors affecting the market supply of onion producers. Data were collected from both primary and secondary sources. Primary data were collected from 150 farm households and 20 traders. Four onion marketing channels were identified in the study area. The highest total gross margin, 27.6, was found in channel IV. The highest gross marketing margin of producers, 88%, was found in channel II. The analysis of market concentration indicated that the onion market is characterized by a strong oligopolistic market structure, with buyers' concentration ratios of 88.7 in Maichew town and 82.7 in Mekelle town. Lack of capital, licensing problems and seasonal supply were identified as the major entry barriers to onion marketing. Market conduct shows that the price of onion is set by traders, while producers are price takers. Multiple linear regression results indicated that family size in adult equivalent, irrigated land size, access to information, frequency of extension contact and ownership of transport significantly determined the quantity of onion supplied to the market. It is recommended that strengthening and diversifying extension services in information, marketing, post-harvest handling, irrigation application and water harvesting technology is highly important.
Keywords: oligopoly, onion, market chain, multiple linear regression
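The buyers' concentration ratio cited above is conventionally obtained as the share of total purchases handled by the largest few traders. The sketch below illustrates that calculation with invented purchase volumes; the choice of the four largest buyers (CR4) is an assumption made only for illustration.

```python
# Hypothetical sketch of a buyers' concentration ratio (CRk): the share of total
# purchased volume handled by the k largest traders. Volumes are invented.

def concentration_ratio(purchase_volumes, k=4):
    """CRk in percent: sum of the k largest shares of total purchases."""
    total = sum(purchase_volumes)
    top_k = sorted(purchase_volumes, reverse=True)[:k]
    return 100.0 * sum(top_k) / total

volumes = [420, 260, 130, 80, 40, 30, 20, 20]   # quintals bought by each trader (illustrative)
print(f"CR4 = {concentration_ratio(volumes, k=4):.1f}%")
# Values this high are usually read as a strongly oligopolistic market structure.
```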
280 Changing Subjective Well-Being and Social Trust in China: 2010-2020
Authors: Mengdie Ruan
Abstract:
The authors investigate how subjective well-being (SWB) and social trust changed in China over the period 2010-2020 by relying on data from six rounds of the China Family Panel Studies (CFPS). They then re-examine Easterlin's hypothesis for China, with a particular focus on the role of social trust, and estimate income-compensating differentials for social trust. They find that the evolution of well-being is not sensitive to the measure of well-being used. Specifically, self-reported life satisfaction scores and hedonic happiness scores experienced a significant increase across all income groups from 2010 to 2020. Based on the CFPS, social trust seems to have increased in China for all socioeconomic classes in recent years, and male, urban-resident individuals with higher incomes have higher social trust at a given point in time and over time. However, when an alternative measure of social trust is used, namely out-group trust, which is a more valid measure of generalized trust and represents "most people", social trust in China actually declines, and its level is extremely low. In addition, this paper suggests that in the typical query on social trust, the term "most people" mostly denotes in-groups in China, which contrasts sharply with most Western countries, where it predominantly connotes out-groups. Individual fixed-effects analysis of well-being that controls for time-invariant variables reveals that social trust and relative social status are important correlates of life satisfaction and happiness, whereas absolute income plays a limited role in boosting an individual's well-being. The income-equivalent value of social capital corresponds approximately to a tripling of income. It was found that women, urban and coastal residents, people with higher incomes, young people and those with higher education care more about social trust in China, irrespective of the SWB measure used. Policy aiming to preserve and enhance SWB should focus on social capital besides economic growth.
Keywords: subjective well-being, life satisfaction, happiness, social trust, China
279 Partitioning of Non-Metallic Nutrients in Lactating Crossbred Cattle Fed Buffers
Authors: Awadhesh Kishore
Abstract:
The goal of the study was to determine how different non-metallic nutrients from feed are partitioned in various physiological contexts and how buffer addition in ruminant nutrition affects these processes. Six lactating crossbred dairy cows (374±14 kg LW) were selected and divided into three groups on the basis of their phenotypic and productive features. Two treatments, T1 and T2, were randomly assigned to one animal from each group. Animals under T1 and T2 were moved to T2 and T1, respectively, after 30 days. T2 was the only group to receive buffers containing magnesium oxide and sodium bicarbonate at 0.0 and 0.01% of LW (the actual amounts are equivalent to 75.3±4.0 and 37.7±2.0 g/d, respectively); T1 served as the control. Wheat straw and berseem were part of the base diet, whereas wheat grain and mustard cake were part of the concentrate mixture. Following a 21-day feeding period, metabolic and milk production trials were carried out for seven consecutive days. Urine volume was determined from its calorific value using the Kearl equation. Chemical analyses were performed to determine the levels of nitrogen, carbohydrates, calories and phosphorus in samples of feed, waste, buffer, mineral mixture, water, feces, urine and milk. The data were analyzed statistically. Notable results included decreased partitioning of nitrogen and carbohydrates from feed to feces, increased partitioning of calories to milk and body storage, and increased partitioning of carbohydrates to body storage. Phosphorus balance was significantly better in T2. The application of buffers in ruminant diets was found to increase the output of calories in milk, as well as the amount of calories and carbohydrates stored in the body, while decreasing the amount of nitrogen in feces. As a result, introducing buffers into the feed of crossbred dairy cattle may be advised.
Keywords: cattle, magnesium oxide, non-metallic nutrients, partitioning, sodium bicarbonate
278 Theoretical Discussion on the Classification of Risks in Supply Chain Management
Authors: Liane Marcia Freitas Silva, Fernando Augusto Silva Marins, Maria Silene Alexandre Leite
Abstract:
The adoption of a network structure, as in supply chains, increases dependence between companies and, consequently, their vulnerability. Environmental disasters, sociopolitical and economic events, and the dynamics of supply chains raise the uncertainty of their operation, favoring the occurrence of events that can disrupt operations and generate other undesired consequences. Thus, supply chains are exposed to various risks that can influence the profitability of the companies involved, and several previous studies have proposed risk classification models in order to categorize the risks and manage them. The objective of this paper is to analyze and discuss thirty of these risk classification models by means of a theoretical survey. The research method adopted for the analysis and discussion includes three phases: the identification of the types of risks proposed in each of the thirty models, their grouping considering equivalent concepts associated with their definitions, and the analysis of these risk groups, evaluating their similarities and differences. After these analyses, it was possible to conclude that, in fact, there are more than thirty risk types identified in the supply chain literature, but some of them are identical despite the distinct terms used to characterize them, because different criteria for risk classification are adopted by researchers. In short, it is observed that some types of risks are identified by their sources for supply chains, such as demand risk, environmental risk and safety risk. On the other hand, other types of risks are identified by the consequences that they can generate for the supply chains, such as reputation risk, asset depreciation risk and competitive risk. These results are a consequence of the disagreements between researchers on risk classification, mainly about what constitutes a risk event and what constitutes a consequence of risk occurrence. An additional study is under development in order to clarify how risks can be generated and which characteristics of the components in a supply chain lead to the occurrence of risk.
Keywords: risks classification, survey, supply chain management, theoretical discussion
277 Biomass and Carbon Stock Estimates of Woodlands in the Southeastern Escarpment of Ethiopian Rift Valley: An Implication for Climate Change Mitigation
Authors: Sultan Haji Shube
Abstract:
Woodland ecosystems of the semiarid rift valley of Ethiopia play a significant role in climate change mitigation by sequestering and storing carbon. This study was conducted in the Gidabo river sub-basin on the southeastern rift-valley escarpment of Ethiopia. It aims to estimate the biomass and carbon stocks of woodlands and their implications for climate change mitigation. A total of 44 sampling plots (900 m² each) were systematically laid out in the woodland for vegetation and environmental data collection. A composite soil sample was taken from five locations within each main plot. Both disturbed and undisturbed soil samples were taken at two depths using a soil auger and a core-ring sampler, respectively. An allometric equation was used to estimate aboveground biomass, while the root-to-shoot ratio method and the Walkley-Black method were used for belowground biomass and SOC, respectively. Results revealed that the total biomass of the study site was 17.05 t/ha, of which 14.21 t/ha belonged to aboveground biomass (AGB) and 2.84 t/ha to belowground biomass (BGB). Moreover, a total carbon stock of 2224.7 t/ha was accumulated, with a carbon dioxide equivalent of 8164.65 t/ha. This study also revealed that more carbon was accumulated in the soil than in the biomass. Both aboveground and belowground carbon stocks decreased with increasing altitude, while SOC stocks increased. The AGC and BGC stocks were higher in the lower slope classes, whereas SOC stocks were higher in the higher slope classes than in the lower slopes. Higher carbon stocks were obtained from woody plants that had a DBH greater than 16 cm and were situated in plots facing northwest. Overall, the study results add information about the carbon stock potential of the woodland that will serve as a baseline scenario for further research, policy makers and land managers.
Keywords: allometric equation, climate change mitigation, soil organic carbon, woodland
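The carbon dioxide equivalent reported above follows from the standard molecular-weight ratio of CO₂ to carbon (44/12 ≈ 3.67). A minimal sketch of that conversion, applied to the total carbon stock quoted in the abstract:

```python
# Minimal sketch: converting a carbon stock to its CO2 equivalent using the
# molecular-weight ratio 44/12 (~3.67), consistent with the figures in the abstract.

CO2_TO_C_RATIO = 44.0 / 12.0        # ~3.667

def co2_equivalent(carbon_stock_t_ha):
    return carbon_stock_t_ha * CO2_TO_C_RATIO

total_carbon = 2224.7               # t/ha, from the abstract
print(f"CO2e = {co2_equivalent(total_carbon):.2f} t/ha")
# ~8157 t/ha with the exact ratio; the paper reports 8164.65 t/ha,
# i.e. it uses a rounded factor of about 3.67.
```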
276 Analysis of Dynamics Underlying the Observation Time Series by Using a Singular Spectrum Approach
Authors: O. Delage, H. Bencherif, T. Portafaix, A. Bourdier
Abstract:
The main purpose of time series analysis is to learn about the dynamics behind some time-ordered measurement data. Two approaches are used in the literature to gain a better knowledge of the dynamics contained in observation data sequences. The first of these approaches concerns time series decomposition, which is an important analysis step allowing patterns and behaviors to be extracted as components, providing insight into the mechanisms producing the time series. In many cases, time series are short, noisy and non-stationary. To provide components which are physically meaningful, methods such as Empirical Mode Decomposition (EMD), the Empirical Wavelet Transform (EWT) or, more recently, Empirical Adaptive Wavelet Decomposition (EAWD) have been proposed. The second approach is to reconstruct the dynamics underlying the time series as a trajectory in state space by mapping the time series into a set of lag vectors in Rᵐ using the method of delays (MOD). Takens has proved that the trajectory obtained with the MOD technique is equivalent to the trajectory representing the dynamics behind the original time series. This work introduces singular spectrum decomposition (SSD), a new adaptive method for decomposing non-linear and non-stationary time series into narrow-banded components. This method takes its origin from singular spectrum analysis (SSA), a nonparametric spectral estimation method used for the analysis and prediction of time series. As the first step of SSD is to constitute a trajectory matrix by embedding a one-dimensional time series into a set of lagged vectors, SSD can also be seen as a reconstruction method like MOD. We first give a brief overview of the existing decomposition methods (EMD, EWT, EAWD). The SSD method is then described in detail and applied to experimental time series of observations resulting from measurements of total ozone columns. The results obtained are compared with those provided by the previously mentioned decomposition methods. We also compare the reconstruction quality of the observed dynamics obtained from the SSD and MOD methods.
Keywords: time series analysis, adaptive time series decomposition, wavelet, phase space reconstruction, singular spectrum analysis
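Both SSD and the method of delays start by embedding a one-dimensional series into a set of lagged vectors. The sketch below builds that trajectory (Hankel) matrix and takes its singular value decomposition, the first step of an SSA/SSD-type analysis; the window length and the synthetic series are assumptions made only for illustration.

```python
import numpy as np

def trajectory_matrix(x, m):
    """Embed a 1-D series x into a trajectory matrix of m-dimensional lag vectors.

    Row i is (x[i], x[i+1], ..., x[i+m-1]); this Hankel matrix is the common
    starting point of SSA/SSD and of the method of delays (unit delay assumed).
    """
    x = np.asarray(x, dtype=float)
    n = x.size - m + 1
    if n < 1:
        raise ValueError("window length m is too large for the series")
    return np.array([x[i:i + m] for i in range(n)])

# Short synthetic series (illustrative values only, not ozone data)
series = np.sin(np.linspace(0, 6 * np.pi, 50)) + 0.1 * np.random.default_rng(0).normal(size=50)
X = trajectory_matrix(series, m=10)
print(X.shape)            # (41, 10): 41 lag vectors in R^10

# SSA/SSD would next decompose X (SVD, or eigendecomposition of X.T @ X)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
print(s[:3])              # leading singular values carry the dominant components
```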
275 GC-MS Data Integrated Chemometrics for the Authentication of Vegetable Oil Brands in Minna, Niger State, Nigeria
Authors: Rasaq Bolakale Salau, Maimuna Muhammad Abubakar, Jonathan Yisa, Muhammad Tauheed Bisiriyu, Jimoh Oladejo Tijani, Alexander Ifeanyi Ajai
Abstract:
Vegetable oils are widely consumed in Nigeria. This has led to the competitive manufacture of various oil brands, which in turn leads to increasing tendencies toward fraud, labelling misinformation and other unwholesome practices. A total of thirty samples, including raw and corresponding branded samples of vegetable oils, were collected. The oils were extracted from raw groundnut, soya bean and oil palm fruits. The GC-MS data were subjected to the chemometric techniques of PCA and HCA. Version 8.7 of SOLO, the standalone chemometrics software developed by Eigenvector Research Incorporated and powered by PLS_Toolbox, was used. The GC-MS fingerprint gave a basis for discrimination, as it revealed four predominant but unevenly distributed fatty acids: hexadecanoic acid methyl ester (10.27-45.21% PA), 9,12-octadecadienoic acid methyl ester (10.9-45.94% PA), 9-octadecenoic acid methyl ester (18.75-45.65% PA), and eicosanoic acid methyl ester (1.19-6.29% PA). In PCA modelling, two PCs were retained, with a cumulative captured variance of 73.15%. The score plots indicated that the palm oil brands are most aligned with raw palm oil. The PCA loading plot reveals the signature retention times, between 4.0 and 6.0, needed for quality assurance and authentication of the oil samples; they correspond to aromatic hydrocarbon, alcohol and aldehyde functional groups. The HCA dendrogram, which was modelled using Euclidean distance through Ward's method, indicated co-equivalent samples. HCA revealed the pair of raw palm oil and the palm oil brand to be in the closest neighbourhood (±1.62% PA difference) based on variance-weighted distance, showing the palm olein brand to be the most authentic. In conclusion, based on the GC-MS data with chemometrics, the authenticity of the branded samples is ranked as: palm oil > soya oil > groundnut oil.
Keywords: vegetable oil, authenticity, chemometrics, PCA, HCA, GC-MS
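A hedged sketch of the PCA step on a peak-area table (samples in rows, fatty-acid ester peak areas in columns) is given below; the data values and the use of scikit-learn are illustrative assumptions and do not reproduce the SOLO/PLS_Toolbox workflow used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows = oil samples, columns = GC-MS peak areas (% PA) for selected fatty-acid esters.
# Values are invented purely for illustration.
peak_areas = np.array([
    [45.2, 11.0, 19.0, 1.2],   # raw palm oil
    [44.0, 12.1, 20.3, 1.5],   # palm oil brand
    [10.3, 45.9, 30.1, 6.3],   # soya oil brand
    [12.0, 40.2, 45.6, 2.0],   # groundnut oil brand
])

X = StandardScaler().fit_transform(peak_areas)   # autoscaling, a common chemometric choice
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

print(scores.round(2))                                   # PC scores used for the score plot
print(pca.explained_variance_ratio_.cumsum().round(4))   # cumulative variance of the two PCs
print(pca.components_.round(2))                          # loadings: which features drive each PC
```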
274 Screening of Factors Affecting the Enzymatic Hydrolysis of Empty Fruit Bunches in Aqueous Ionic Liquid and Locally Produced Cellulase System
Authors: Md. Z. Alam, Amal A. Elgharbawy, Muhammad Moniruzzaman, Nassereldeen A. Kabbashi, Parveen Jamal
Abstract:
The enzymatic hydrolysis of lignocellulosic biomass is one of the obstacles in the process of sugar production, due to the presence of lignin, which protects the cellulose molecules against cellulases. The pretreatment of lignocellulose in an ionic liquid (IL) system has been receiving a lot of interest; however, it requires IL removal with an anti-solvent in order to proceed with the enzymatic hydrolysis. At this point, introducing a compatible cellulase enzyme seems more efficient for this process. A cellulase enzyme produced by Trichoderma reesei on palm kernel cake (PKC) exhibited promising stability in several ILs. The enzyme, called PKC-Cel, was tested for its optimum pH and temperature as well as its molecular weight. One of the evaluated ILs, 1,3-diethylimidazolium dimethyl phosphate ([DEMIM] DMP), was applied in this study. Six factors were evaluated in a definitive screening design using Stat-Ease Design-Expert V.9: IL/buffer ratio, temperature, hydrolysis retention time, biomass loading, cellulase loading and empty fruit bunch (EFB) particle size. According to the obtained data, the IL-enzyme system shows the highest sugar concentration at 70 °C, 27 hours, 10% IL-buffer, 35% biomass loading, 60 Units/g cellulase and 200 μm particle size. Not only was PKC-Cel stable in the presence of the IL, it was actually stable at a temperature higher than its optimum. The reducing sugar obtained was 53.468±4.58 g/L, equivalent to 0.3055 g reducing sugar/g EFB. This approach opens an insight for further studies aiming to understand the actual effect of ILs on cellulases and their interactions in the aqueous system. It could also contribute to the efficient production of bioethanol from lignocellulosic biomass.
Keywords: cellulase, hydrolysis, lignocellulose, pretreatment
273 Impact of Emerging Nano-Agrichemicals on the Simultaneous Control of Arsenic and Cadmium in Rice Paddies
Authors: Xingmao Ma, Wenjie Sun
Abstract:
Rice paddies are frequently co-contaminated by arsenic (As) and cadmium (Cd), both of which demonstrate a high propensity for accumulation in rice grains and cause global food safety and public health concerns. Even though different agricultural management strategies have been explored for their simultaneous control in rice grains, a viable solution is yet to be developed. Interestingly, several nanoagrichemicals, such as zinc nanofertilizers and copper nanopesticides, have displayed strong potential to reduce As or Cd accumulation in rice tissues. In order to determine whether these nanoagrichemicals can lower the accumulation of both As and Cd in rice, a series of bench studies was performed. Our results show that zinc oxide nanoparticles at 100 mg/kg significantly lowered both As and Cd in rice roots and shoots in flood-irrigated rice seedlings, while an equivalent amount of zinc ions only reduced the As concentration in rice shoots. Zinc ions significantly increased the Cd concentration in rice shoots by almost 30%. The results demonstrate a unique 'nano-effect' of zinc oxide nanoparticles, which is ascribed to the slow release of zinc ions from the nanoparticles and the formation of different transformation products in these two treatments. We also evaluated the effect of a nanoscale soil amendment, silicon oxide nanoparticles (SiO₂NPs), on the simultaneous reduction of both elements under flooding and alternate wet and dry irrigation schemes. The effect of SiO₂NPs on As and Cd accumulation in rice tissues was strongly affected by the irrigation scheme. While 2000 mg/kg of SiO₂NPs significantly reduced As in rice roots and insignificantly reduced As in rice shoots in flooded rice, it increased the As concentration in rice shoots under alternate wet and dry irrigation. In both irrigation scenarios, SiO₂NPs significantly reduced the Cd concentration in rice roots, but only reduced the Cd concentration in rice shoots under alternate wet and dry irrigation. Our results demonstrate a marked effect of nanoagrichemicals on the accumulation of As and Cd in rice, and they can be a potential solution for simultaneously controlling both under certain conditions.
Keywords: arsenic, cadmium, rice, nanoagrichemicals
272 Evaluation of the Shelf Life of Horsetail Stems Stored in Ecological Packaging
Authors: Rosana Goncalves Das Dores, Maira Fonseca, Fernando Finger, Vicente Casali
Abstract:
Equisetum hyemale L. (horsetail, Equisetaceae) is a medicinal plant used and commercialized in simple paper bags or non-ecological packaging in Brazil. The aim of this work was to evaluate the behaviour of the bioactive compounds of horsetail stems stored in ecological packages (multi-ply paper sacks) at room temperature. Stems in the primary and secondary stage were harvested from an organic estate in December 2016, selected, measured (length from the soil to the apex (cm), stem diameter at ground level (DGL, mm) and at breast height (DBH, mm)) and cut into 10 cm pieces. For the post-harvest evaluations, stems were stored in multi-ply paper sacks and evaluated daily for respiratory rate, fresh weight loss, pH, presence of fungi/mold, phenolic compounds and antioxidant activity. The analyses were done with four replicates, over time (regression), and compared at 1% significance (Tukey test). The measured heights were 103.7 cm and 143.5 cm, DGL was 2.5 mm and 8.4 mm, and DBH was 2.59 and 6.15 mm, for primary and secondary stage stems, respectively. At both stages of development, during storage in multi-ply paper sacks, the greatest mass loss occurred at 48 h, decaying up to 120 hours and stabilizing at 192 hours. The peak respiratory rate increase occurred within 24 hours, coinciding with a change in pH (mean temperature and humidity were 23.5 °C and 55%). No fungi or mold were detected; however, there was loss of color of the stems. The average yields of ethanolic extracts were equivalent (approximately 30%). Phenolic compounds and antioxidant activity were higher in secondary stage stems up to 120 hours (AATt0 = 20%, AATt30 = 45%), decreasing by the end of the experiment (240 hours). The packaging used allows the commercialization of fresh stems of Equisetum for up to five days.
Keywords: paper sacks, phenolic content, antioxidant activity, medicinal plants, post-harvest, ecological packages, Equisetum
271 The Current Practices of Analysis of Reinforced Concrete Panels Subjected to Blast Loading
Authors: Palak J. Shukla, Atul K. Desai, Chentankumar D. Modhera
Abstract:
For any country in the world, it has become a priority to protect critical infrastructure from the looming risks of terrorism. In any infrastructure system, structural elements like lower floors, exterior columns and walls are the key elements most susceptible to damage due to blast load. The present study revisits the state of the art in the design and analysis of reinforced concrete panels subjected to blast loading. Various aspects associated with blast loading on structures, i.e., estimation of the blast load, experimental works carried out previously, numerical simulation tools and various material models, are considered in order to explore the current practices adopted worldwide. Various parametric studies investigating the effect of reinforcement ratio, slab thickness, charge weight and standoff distance are also discussed. It was observed that, for the simulation of blast load, the CONWEP blast function or equivalent numerical equations were successfully employed by many researchers. The literature indicates that the research was carried out using experimental work and numerical simulation with well-known general-purpose finite element codes, i.e., LS-DYNA, ABAQUS and AUTODYN. Many researchers recommended using a concrete damage model to represent concrete and a plastic kinematic material model to represent steel under the action of blast loads for most of the numerical simulations. Most of the studies reveal that increasing the reinforcement ratio, slab thickness and standoff distance resulted in better blast resistance performance of the reinforced concrete panels. The study summarizes the various research results and presents the current state of knowledge for structures exposed to blast loading.
Keywords: blast phenomenon, experimental methods, material models, numerical methods
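In parametric studies of this kind, charge weight and standoff distance are commonly combined through the Hopkinson-Cranz scaled distance Z = R/W^(1/3). The short sketch below, with invented cases, is given only to illustrate that convention; it is not taken from the reviewed papers.

```python
# Minimal sketch: Hopkinson-Cranz scaled distance, the usual way charge weight (W)
# and standoff distance (R) are combined in blast-load parametric studies.

def scaled_distance(standoff_m, charge_kg_tnt):
    """Z = R / W^(1/3), in m/kg^(1/3)."""
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

for R, W in [(5.0, 10.0), (5.0, 100.0), (10.0, 100.0)]:   # illustrative cases only
    print(f"R = {R:5.1f} m, W = {W:6.1f} kg TNT -> Z = {scaled_distance(R, W):.2f} m/kg^(1/3)")
```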
270 Pellet Feed Improvements through Vitamin C Supplementation for Snakehead (Channa striata) Culture in Vietnam
Authors: Pham Minh Duc, Tran Thi Thanh Hien, David A. Bengtson
Abstract:
Laboratory feeding trial: the study was conducted to find the optimal dietary vitamin C, or ascorbic acid (AA), level in terms of the growth performance of snakehead. The growth trial included six treatments with five replications, containing 0, 125, 250, 500, 1000 and 2000 mg AA equivalent kg⁻¹ diet in six iso-nitrogenous (45% protein), iso-lipidic (9% lipid) and isocaloric (4.2 kcal g⁻¹) diets. Eighty snakehead fingerlings (6.24 ± 0.17 g fish⁻¹) were randomly assigned to 0.5 m³ composite tanks. Fish were fed twice daily on demand for 8 weeks. The results showed that growth rates and the protein efficiency ratio increased and the feed conversion ratio decreased in treatments with AA supplementation compared with the control treatment. The survival rate of fish tended to increase with increasing AA level. The number of RBCs and the lysozyme level in treatments with AA supplementation tended to rise significantly in proportion to the concentration of AA. The number of WBCs of snakehead in treatments with AA supplementation was 2.1-3.6 times higher. In general, supplementation of AA in the diets of snakehead improved growth rate, feed efficiency and immune response. Hapa on-farm trial: based on the results of the laboratory feeding trial, the effect of AA on snakehead reared in hapas, simulating farm conditions, was tested using the following treatments: commercial feed; commercial feed plus hand-mixed AA at 500, 750 and 1000 mg AA kg⁻¹; SBM diet without AA; and SBM diet plus 500, 750 and 1000 mg AA kg⁻¹. The experiment was conducted in two experimental ponds (only the SBM diet without AA was placed in one pond and the rest in the other pond) with four replicate hapas each. Stocking density was 150 fish m⁻² and the culture period was 5 months, until market size was attained. The growth performance of snakehead and economic aspects were examined in this research.
Keywords: fish health, growth rate, snakehead, Vitamin C
269 Relationship of Indoor and Outdoor Levels of Black Carbon in an Urban Environment
Authors: Daria Pashneva, Julija Pauraite, Agne Minderyte, Vadimas Dudoitis, Lina Davuliene, Kristina Plauskaite, Inga Garbariene, Steigvile Bycenkiene
Abstract:
Black carbon (BC) has received particular attention around the world, not only for its impact on regional and global climate change but also for its impact on air quality and public health. In order to study the relationship between indoor and outdoor BC concentrations, studies were carried out in Vilnius, Lithuania. The studies are aimed at determining the relationship between the concentrations and identifying dependencies during the day and week, with a further opportunity to analyze the key factors affecting the indoor concentration of BC. In this context, indoor and outdoor continuous real-time measurements of optical BC-related light absorption by aerosol particles were carried out during the cold season (from October to December 2020). The measurement venue was an office located in an urban background environment. Equivalent black carbon (eBC) mass concentration was measured by an Aethalometer (Magee Scientific, model AE-31). The optical transmission of carbonaceous aerosol particles was measured sequentially at seven wavelengths (λ = 370, 470, 520, 590, 660, 880 and 950 nm), and the eBC mass concentration was derived from the light absorption coefficient (σab) at the 880 nm wavelength. The diurnal indoor eBC mass concentration was found to vary in the range from 0.02 to 0.08 µg m⁻³, while the outdoor eBC mass concentration varied from 0.34 to 0.99 µg m⁻³. Diurnal variations of outdoor versus indoor eBC mass concentration showed an increased contribution between 10:00 and 12:00 (GMT+2), with the highest indoor eBC mass concentration of 0.14 µg m⁻³. The indoor/outdoor eBC ratio (I/O) was below one throughout the entire measurement period. Weekend levels of eBC mass concentration were lower than weekday levels by 33% and 28% for indoor and outdoor, respectively. Hourly mean mass concentrations of eBC for weekdays and weekends show diurnal cycles, which could be explained by the periodicity of traffic intensity and heating activities. The results show a moderate influence of outdoor eBC emissions on the indoor eBC level.
Keywords: black carbon, climate change, indoor air quality, I/O ratio
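For orientation, the conversion from the light absorption coefficient at 880 nm to an eBC mass concentration divides σab by a mass absorption cross-section (MAC), and the I/O ratio is the quotient of the indoor and outdoor means. In the sketch below, the MAC value (7.77 m² g⁻¹, often used for aethalometer data at 880 nm) and the absorption values are assumptions for illustration, not values from this study.

```python
# Hedged sketch: deriving eBC from the absorption coefficient at 880 nm and forming
# the indoor/outdoor (I/O) ratio. The MAC value is an assumed constant commonly used
# for aethalometer data at 880 nm, not a value reported in this study.

MAC_880 = 7.77e-6          # m^2 per microgram (i.e. 7.77 m^2/g)

def ebc_ug_m3(sigma_ab_per_m):
    """eBC mass concentration [ug m^-3] from absorption coefficient [m^-1]."""
    return sigma_ab_per_m / MAC_880

indoor_sigma  = [3.9e-7, 5.4e-7, 6.2e-7]     # m^-1, illustrative hourly values
outdoor_sigma = [4.7e-6, 6.0e-6, 7.5e-6]

indoor  = [ebc_ug_m3(s) for s in indoor_sigma]
outdoor = [ebc_ug_m3(s) for s in outdoor_sigma]
io_ratio = (sum(indoor) / len(indoor)) / (sum(outdoor) / len(outdoor))
print([round(v, 3) for v in indoor], [round(v, 3) for v in outdoor], round(io_ratio, 2))
```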
268 Experimenting the Influence of Input Modality on Involvement Load Hypothesis
Authors: Mohammad Hassanzadeh
Abstract:
As far as incidental vocabulary learning is concerned, the basic contention of the Involvement Load Hypothesis (ILH) is that retention of unfamiliar words is, generally, conditional upon the degree of involvement in processing them. This study examined input modality and incidental vocabulary uptake in a task-induced setting whereby three variously loaded task types (marginal glosses, fill-in task, and sentence writing) were alternately assigned to one group of students at Allameh Tabataba'i University (n=21) during six classroom sessions. While one round of exposure comprised the audiovisual medium (TV talk shows), the second round consisted of textual materials with approximately similar subject matter (reading texts). In both conditions, however, the tasks were equivalent to one another. Taken together, the study pursued the dual objectives of, in the first place, establishing a litmus test for the ILH and its proposed values of 'need', 'search' and 'evaluation'. Secondly, it sought to bring to light the issue of the superiority of exposure to audiovisual input versus written input as far as the incorporation of tasks is concerned. At the end of each treatment session, a vocabulary active recall test was administered to measure the incidental gains. A one-way analysis of variance revealed that the audiovisual intervention yielded higher gains than the written version even when differing tasks were included. Meanwhile, task three (sentence writing) turned out to be the most efficient in tapping learners' active recall of the target vocabulary items. In addition to shedding light on the superiority of audiovisual input over written input when circumstances are held relatively constant, this study, for the most part, supported the underlying tenets of the ILH.
Keywords: evaluation, incidental vocabulary learning, input mode, Involvement Load Hypothesis, need, search
267 Coordination of Traffic Signals on Arterial Streets in Duhok City
Authors: Dilshad Ali Mohammed, Ziyad Nayef Shamsulddin Aldoski, Millet Salim Mohammed
Abstract:
The increase in levels of traffic congestion along urban signalized arterials calls for efficient traffic management. The application of traffic signal coordination can improve traffic operation and safety for a series of signalized intersections along an arterial. The objective of this study is to evaluate the benefits achievable through actuated traffic signal coordination and to compare the control delay against that of the same signalized intersections operating in isolation. To accomplish this purpose, a series of eight signalized intersections located on two major arterials in Duhok City was chosen for the study. Traffic data (traffic volumes, link and approach speeds, and passenger car equivalents) were collected at peak hours. Various methods were used for collecting data, such as the video recording technique, the moving vehicle method and manual methods. Geometric and signalization data were also collected for the purpose of the study. The coupling index was calculated to check the attainability of coordination, and time-space diagrams were then constructed, some representing one-way coordination for the intersections on Barzani and Zakho Streets and others representing two-way coordination for the intersections on Zakho Street, with acceptable progression bandwidth efficiency. The results of this study show a large progression bandwidth of 54 seconds for east-direction coordination and 17 seconds for west-direction coordination on Barzani Street under a suggested control speed of 60 kph, in agreement with the present data. For Zakho Street, the progression bandwidth is 19 seconds for east-direction coordination and 18 seconds for west-direction coordination under a suggested control speed of 40 kph. The results show that traffic signal coordination led to a large reduction in intersection control delays on both arterials.
Keywords: bandwidth, congestion, coordination, traffic, signals, streets
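For one-way progression, the offsets behind a time-space diagram follow from link length and progression speed, and bandwidth efficiency is the progression bandwidth divided by the cycle length. The sketch below illustrates both calculations; the link lengths and the 120 s cycle are assumed values, and only the 60 kph control speed and the 54 s bandwidth are taken from the study.

```python
# Illustrative sketch (link lengths and cycle length are hypothetical): ideal offsets
# for one-way progression and the resulting bandwidth efficiency.

def ideal_offset_s(link_length_m, progression_speed_kph):
    """Travel time between adjacent signals at the chosen progression speed."""
    return link_length_m / (progression_speed_kph / 3.6)

def bandwidth_efficiency(bandwidth_s, cycle_length_s):
    return bandwidth_s / cycle_length_s

link_lengths = [350, 420, 500]           # m between successive intersections (assumed)
speed = 60                               # kph, the control speed suggested for Barzani Street
offsets = [round(ideal_offset_s(L, speed), 1) for L in link_lengths]
print("offsets (s):", offsets)
print("efficiency:", round(bandwidth_efficiency(54, 120), 2))  # 54 s band, assumed 120 s cycle
```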
266 Investigating the Potential Use of Unsaturated Fatty Acids as Antifungal Crop Protective Agents
Authors: Azadeh Yasari, Michael Ganzle, Stephen Strelkov, Nuanyi Liang, Jonathan Curtis, Nat N. V. Kav
Abstract:
Pathogenic fungi cause significant yield losses and quality reductions in major crops, including wheat, canola and barley. Toxic metabolites produced by phytopathogenic fungi also pose significant risks to animal and human health. Extensive application of synthetic fungicides is not a sustainable solution, since it poses risks to human, animal and environmental health. Unsaturated fatty acids may provide an environmentally friendly alternative because of their direct antifungal activity against phytopathogens as well as through the stimulation of plant defense pathways. The present study assessed the in vitro and in vivo efficacy of two hydroxy fatty acids, coriolic acid and ricinoleic acid, against the phytopathogens Fusarium graminearum, Pyrenophora tritici-repentis, Pyrenophora teres f. teres, Sclerotinia sclerotiorum, and Leptosphaeria maculans. The antifungal activity of coriolic acid and ricinoleic acid was evaluated using the broth micro-dilution method to determine the minimum inhibitory concentration (MIC). Results indicated that both ricinoleic acid and coriolic acid showed antifungal activity against the phytopathogens, with the strongest inhibitory activity against L. maculans, but the MIC varied greatly between species. An antifungal effect was observed for coriolic acid in vivo against pathogenic fungi of wheat and barley. This effect was not correlated with the in vitro activity, because ricinoleic acid, with equivalent in vitro antifungal activity, showed no protective effect in vivo. Moreover, neither coriolic acid nor ricinoleic acid controlled fungal pathogens of canola. In conclusion, coriolic acid inhibits some phytopathogens in vivo and may have the potential to be an effective crop protection agent.
Keywords: coriolic acid, minimum inhibitory concentration, pathogenic fungi, ricinoleic acid
265 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained
Authors: Arash Yousefi
Abstract:
Experts suggest that relying exclusively on parameters to convey a situation or establish a condition may not be adequate. Assessing and appraising incidents in a system based on accident parameters, such as accident frequency, lost workdays or fatalities, may not always be precise and may occasionally be erroneous. The frequency rate of accidents is a metric that relates the number of accidents causing work-time loss due to injuries to the total working hours of personnel over a year. Traditionally, this has been calculated on the basis of one million working hours, but the American Occupational Safety and Health Administration (OSHA) has updated its standards: the new coefficient of 200,000 working hours is now used to compute the frequency rate of accidents. It is crucial to ensure that the total working hours of employees are represented consistently when calculating individual event and incident numbers. The accident severity rate is a metric used to determine the amount of time lost or wasted during a given period, often a year, in relation to the total number of working hours. It measures the proportion of work hours lost or wasted compared to the total number of useful working hours, which provides valuable insight into the number of days lost or wasted due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death; to determine the lost days in such cases, the coefficients specified in the tables of equivalent days in the OSHA or ANSI standards for disabling injuries are used. The accident frequency coefficient denotes the rate at which accidents occur, while the accident severity coefficient specifies the extent of damage and injury caused by these accidents. These coefficients are crucial for accurately assessing the magnitude and impact of accidents.
Keywords: incidents, safety, analysis, frequency, severity, injuries, determine
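A minimal sketch of the OSHA-style rates described above, using the 200,000-hour base (roughly 100 full-time workers for one year); the input figures are illustrative, not data from a real workplace.

```python
# Minimal sketch of the frequency and severity rates described above, on the
# 200,000-hour base. All input figures are invented for illustration.

BASE_HOURS = 200_000

def frequency_rate(lost_time_injuries, hours_worked):
    """Lost-time injuries per 200,000 hours worked."""
    return lost_time_injuries * BASE_HOURS / hours_worked

def severity_rate(days_lost, hours_worked):
    """Days lost (including table-equivalent days for disabling injuries) per 200,000 hours."""
    return days_lost * BASE_HOURS / hours_worked

hours = 450_000            # total hours worked by all personnel in the year
injuries = 3               # lost-time injuries recorded
days_lost = 75             # days charged, plus any equivalent-day charges from the tables

print(f"frequency rate: {frequency_rate(injuries, hours):.2f}")
print(f"severity rate:  {severity_rate(days_lost, hours):.2f}")
```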
264 Investigation of Ductile Failure Mechanisms in SA508 Grade 3 Steel via X-Ray Computed Tomography and Fractography Analysis
Authors: Suleyman Karabal, Timothy L. Burnett, Egemen Avcu, Andrew H. Sherry, Philip J. Withers
Abstract:
SA508 Grade 3 steel is widely used in the construction of nuclear pressure vessels, where its fracture toughness plays a critical role in ensuring operational safety and reliability. Understanding the ductile failure mechanisms in this steel grade is crucial for designing robust pressure vessels that can withstand severe nuclear environment conditions. In the present study, round bar specimens of SA508 Grade 3 steel with four distinct notch geometries were subjected to tensile loading while continuous 2D images were captured at 5-second intervals in order to monitor any alterations in their geometries and to construct the true stress-strain curves of the specimens. 3D reconstructions of high-resolution X-ray computed tomography (CT) images (a spatial resolution of 0.82 μm) allowed a comprehensive assessment of the influence of second-phase particles (i.e., manganese sulfide inclusions and cementite particles) on ductile failure initiation as a function of the applied plastic strain. Additionally, based on the 2D and 3D images, plasticity modeling was carried out, and the results were compared to the experimental data. A specific 'two-parameter criterion' was established and calibrated based on the correlation between stress triaxiality and equivalent plastic strain at failure initiation. The proposed criterion demonstrated substantial agreement with the experimental results, thus enhancing our knowledge of ductile fracture behavior in this steel grade. The implementation of X-ray CT and fractography analysis provided new insights into the diverse roles played by different populations of second-phase particles in fracture initiation under varying stress triaxiality conditions.
Keywords: ductile fracture, two-parameter criterion, x-ray computed tomography, stress triaxiality
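The two-parameter criterion links stress triaxiality to the equivalent plastic strain at failure initiation. The sketch below uses an exponential failure locus of the kind commonly fitted for such criteria; the functional form and the constants are illustrative assumptions and are not the calibration reported for SA508 Grade 3.

```python
import math

# Illustrative sketch only: stress triaxiality and an exponential failure locus of the
# kind often fitted in two-parameter ductile-failure criteria. The constants c1 and c2
# are hypothetical, NOT the calibration reported for SA508 Grade 3.

def triaxiality(sigma_mean, sigma_eq):
    """Stress triaxiality eta = hydrostatic (mean) stress / von Mises equivalent stress."""
    return sigma_mean / sigma_eq

def failure_strain(eta, c1=1.6, c2=1.5):
    """Equivalent plastic strain at failure initiation, decreasing with triaxiality."""
    return c1 * math.exp(-c2 * eta)

for eta in (0.33, 0.6, 1.0):                 # smooth bar ~1/3; notched bars sit higher
    print(f"eta = {eta:.2f} -> failure strain ~ {failure_strain(eta):.2f}")
```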
263 Efficacy of Corticosteroids versus Placebo in Third Molar Surgery: A Systematic Review of Patient-Reported Outcomes
Authors: Parastoo Parhizkar, Jaber Yaghini, Omid Fakheran
Abstract:
Background: Third molar surgery is often associated with postoperative problems which seriously impede daily activities and quality of life. Steroidal anti-inflammatory drugs may decrease these common post-operative complications. The purpose of this review is to evaluate the available evidence regarding the efficacy of corticosteroids used as adjunctive therapy for patients undergoing third molar surgery. Methods: PubMed, Google Scholar, Scopus, Web of Science, ClinicalTrials.gov, Scirus, the Cochrane Central Register of Controlled Trials, LILACS, OpenGrey, CenterWatch, ISRCTN, who.int and EBSCO were searched without restrictions regarding the year of publication. Article bibliographies were also searched, which yielded additional relevant studies. Randomized clinical trials assessing patient-reported outcomes in patients undergoing surgical therapy were eligible for inclusion. Study quality was assessed using the CONSORT checklist. No meta-analysis was performed. Results: A total of twelve randomized clinical trials were included in this study. Methylprednisolone and dexamethasone may decrease postoperative side effects such as pain, trismus and edema. Based on the results, both of them could improve patients' satisfaction, and there is no significant difference between these two types of corticosteroids regarding patient-centered outcomes (p > 0.05). Intralesional and intravenous injection of dexamethasone showed equivalent results, which were statistically significantly better (p < 0.05) than those of the oral treatment. Conclusion: Various types of corticosteroids can enhance patient satisfaction following third molar surgery. However, there is no significant difference between the dexamethasone, prednisolone and methylprednisolone groups in this regard. Comparing the various administration routes, local injection of dexamethasone is a quite simple, painless and cost-effective adjunctive therapy with better drug efficacy.
Keywords: third molar surgery, corticosteroids, patient-reported outcomes, health related quality of life
262 The Effectiveness of Zinc Supplementation in Taste Disorder Treatment: A Systematic Review and Meta-Analysis of Randomized Controlled Trials
Authors: Boshra Mozaffar, Arash Ardavani, Iskandar Idris
Abstract:
Food taste and flavor affect food choice and acceptance, which are essential to maintaining good health and quality of life. Reduced circulating zinc levels have been shown to adversely affect taste, which can result in reduced appetite, weight loss and psychological problems, but the efficacy of zinc supplementation to treat disorders of taste remains unclear. In this systematic review and meta-analysis, we aimed to examine the efficacy of zinc supplementation in the treatment of taste disorders. We searched four electronic bibliographical databases: Ovid MEDLINE, Ovid Embase, Ovid AMAD and PubMed. Article bibliographies were also searched, which yielded additional relevant studies. To facilitate the collection and identification of all available and relevant articles published before 7 December 2020, there were no restrictions on the publication date. We performed the systematic review and meta-analysis according to the PRISMA Statement. The review was registered at PROSPERO and given the identification number CRD42021228461. In total, we included 12 randomized controlled trials with 938 subjects. The interventions included zinc sulfate, gluconate, picolinate, polaprezinc and acetate. The pooled results of the meta-analysis indicate that improvements in taste disorder occurred more frequently in the intervention group compared to the control group (RR = 1.8; 95% CI: 1.27-2.57; p = 0.009). The doses are equivalent to 17 mg to 86.7 mg of elemental zinc for three to six months. Zinc supplementation is an effective treatment for taste disorders in patients with zinc deficiency or idiopathic taste disorders when given in high doses ranging from 68 to 86.7 mg/d for up to three months. However, we did not find sufficient evidence to determine the effectiveness of zinc supplementation in patients with taste disorders induced by chronic renal failure.
Keywords: taste change, taste disorder, zinc, zinc sulfate or Zn, deficiency, supplementation
261 The Influence of Phosphate Fertilizers on Radiological Situation of Cultivated Lands: ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs Concentrations in Soil
Authors: Grzegorz Szaciłowski, Marta Konop, Małgorzata Dymecka, Jakub Ośko
Abstract:
In 1996, the European Council Directive 96/29/EURATOM identified phosphate fertilizers as having a potentially negative influence on the environment from the radiation protection point of view. Fertilizers, along with irrigation and crop rotation, were the milestones that allowed agricultural productivity to increase. Initially based on natural materials such as compost, manure and fish processing waste, and produced synthetically since the 19th century, fertilizers caused a boom in crop yields and helped to propel global food production, especially after World War II. In this work, the concentrations of ²¹⁰Po, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs in selected fertilizers and soil samples were determined. The results were used to calculate the annual addition of natural radionuclides and the increment of external radiation exposure caused by the use of the studied fertilizers. Soils intended for different types of crops were sampled in early spring, before any vegetation had appeared. The analysed fertilizers were those with which the soil had previously been fertilized. For gamma-emitting radionuclides, a high-purity germanium detector (Canberra GX3520) was used. The polonium concentration was determined by radiochemical separation followed by measurement by means of alpha spectrometry; the spectrometer used in this study was equipped with a 450 cm² PIPS detector from Canberra. The obtained results showed significant differences in radionuclide composition between phosphate and nitrogenous fertilizers (e.g. the radium equivalent activity for the phosphate fertilizer was 207.7 Bq/kg in comparison with <5.6 Bq/kg for the nitrogenous fertilizer). The calculated increase of external radiation exposure due to the use of phosphate fertilizer ranged between 3.4 and 5.4 nGy/h, which represents up to 10% of the Polish average outdoor exposure due to terrestrial gamma radiation (45 nGy/h).
Keywords: ²¹⁰Po, alpha spectrometry, exposure, gamma spectrometry, phosphate fertilizer, soil
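The radium equivalent activity quoted above is conventionally computed as Raeq = A(²²⁶Ra) + 1.43·A(²³²Th) + 0.077·A(⁴⁰K), with all activities in Bq/kg. The sketch below applies this standard formula to invented activity values; it does not reproduce the paper's measurements.

```python
# Sketch of the conventional radium equivalent activity index,
# Raeq = A_Ra226 + 1.43 * A_Th232 + 0.077 * A_K40  (all activities in Bq/kg).
# The activity values below are invented for illustration, not measurement results.

def radium_equivalent(a_ra226, a_th232, a_k40):
    return a_ra226 + 1.43 * a_th232 + 0.077 * a_k40

phosphate = radium_equivalent(a_ra226=150.0, a_th232=20.0, a_k40=380.0)
print(f"Raeq ~ {phosphate:.1f} Bq/kg")   # ~207.9 Bq/kg with these assumed inputs
```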
260 A Case Study on Performance of Isolated Bridges under Near-Fault Ground Motion
Authors: Daniele Losanno, H. A. Hadad, Giorgio Serino
Abstract:
This paper presents a numerical investigation of the seismic performance of a benchmark bridge with different optimal isolation systems under near-fault ground motion. Usually, very large displacements make seismic isolation an unfeasible solution due to boundary conditions, especially in the case of existing bridges or high-risk seismic regions. Hence, near-fault ground motions are most likely to affect either structures with a long natural period, like isolated structures, or structures sensitive to velocity content, such as viscously damped structures. The work is aimed at analyzing the seismic performance of a three-span continuous bridge designed with different isolation systems having different levels of damping. The case study was analyzed in different configurations, including: (a) simply supported, (b) isolated with lead rubber bearings (LRBs), (c) isolated with rubber isolators and 10% classical damping (HDLRBs), and (d) isolated with rubber isolators and a 70% supplemental damping ratio. Case (d) represents an alternative control strategy that combines the effect of seismic isolation with additional supplemental damping, trying to take advantage of both solutions. The bridge is modeled in SAP2000 and solved by time-history direct-integration analyses under a set of six recorded near-fault ground motions. In addition, a set of analyses under the seismic action provided by the Italian code is also conducted, in order to evaluate the effectiveness of the suggested optimal control strategies under far-field seismic action. The results of the analysis demonstrated that an isolated bridge equipped with HDLRBs and a total equivalent damping ratio of 70% represents a very effective design solution for both mitigation of the displacement demand at the isolation level and base shear reduction in the piers, also in the case of near-fault ground motion.
Keywords: isolated bridges, near-fault motion, seismic response, supplemental damping, optimal design
259 Exercise Intensity Increasing Appetite, Energy Intake, Energy Expenditure, and Fat Oxidation in Sedentary Overweight Individuals
Authors: Ghalia Shamlan, M. Denise Robertson, Adam Collins
Abstract:
Appetite control (i.e. control of energy intake) is important for weight maintenance. Exercise contributes the most variable component of energy expenditure (EE), but its impact goes beyond the energy cost of the exercise itself, including physiological, behavioural and appetite effects. Exercise is known to acutely influence appetite, but evidence as to the independent effect of intensity is lacking. This study investigated the role of exercise intensity on appetite, energy intake (EI), appetite-related hormones, fat utilisation and subjective measures of appetite. One hour after a standardised breakfast, 10 sedentary overweight volunteers undertook either 8 repeated 60-second bouts of cycling at 95% VO2max (high intensity) or 30 minutes of continuous cycling, at a fixed cadence, equivalent to 50% of the participant's VO2max (low intensity), in a randomised crossover design. Glucose, NEFA and glucagon-like peptide-1 (GLP-1) were measured fasted, postprandially, and pre- and post-exercise. Satiety was assessed subjectively throughout the study using visual analogue scales (VAS). Ad libitum intake of a pasta meal was measured at the end (3 h post-breakfast). Interestingly, there was no significant difference in EE or fat oxidation between HI and LI post-exercise. Also, no significant effect of high intensity (HI) was observed on the ad libitum meal or on 24-h and 48-h EI post-exercise. However, the mean 24-h EI was 3000 kJ lower following HI than following low intensity (LI). No significant differences in hunger scores, glucose, NEFA or GLP-1 were observed between the two intensities, although NEFA and GLP-1 plasma levels were higher until 30 min post-LI. In conclusion, the similarity of EE and fat oxidation outcomes could give overweight individuals an option to choose between intensities; however, HI could help to reduce EI. There are mechanisms and consequences of exercise in short- and long-term appetite control; however, these mechanisms warrant further explanation. These results support the need for future research into the role of exercise intensity in regulating energy balance, especially for obese people.
Keywords: appetite, exercise, food intake, energy expenditure
258 Exploring the Efficacy of Context-Based Instructional Strategy in Fostering Students Achievement in Chemistry
Authors: Charles U. Eze, Joy Johnbest Egbo
Abstract:
The study investigated the effect of the Context-Based Instructional Strategy (CBIS) on students' achievement in chemistry. CBIS was used for the experimental group and the expository method (EM) for the control group; sources showed that students' poor achievement in chemistry stems from the teaching strategies adopted by chemistry teachers. Two research questions were answered, and two null hypotheses were formulated and tested. This strategy recognizes the need for student-centered learning, relevance of tasks and students' voice; it also helps students develop creative and critical learning skills. A quasi-experimental (non-equivalent, pretest, posttest control group) design was adopted for the study. The population for the study comprised all senior secondary class one (SSI) students offering chemistry in co-educational schools in the Agbani Education Zone. The instrument for data collection was a self-developed Basic Chemistry Achievement Test (BCAT). Relevant data were collected from a sample of SSI chemistry students, selected using purposive random sampling techniques from two co-educational schools in the Agbani Education Zone of Enugu State, Nigeria. A reliability coefficient was obtained for the instrument using the Kuder-Richardson formula 20. Mean and standard deviation scores were used to answer the research questions, while two-way analysis of covariance (ANCOVA) was used to test the hypotheses. The findings showed that the experimental group taught with the context-based instructional strategy (CBIS) obtained a higher mean achievement score than the control group in the post-BCAT, and male students had higher mean achievement scores than their female counterparts; the differences were significant. It was recommended, among others, that CBIS should be given more emphasis in the training and re-training programs of secondary school chemistry teachers.
Keywords: context-based instructional strategy, expository strategy, student-centered
257 Effect of Duration and Frequency on Ground Motion: Case Study of Guwahati City
Authors: Amar F. Siddique
Abstract:
Guwahati is one of the fastest growing cities of the north-eastern region of India; situated on the south bank of the Brahmaputra River, it falls in the highest seismic zone, level V. The city has witnessed many high-magnitude earthquakes in past decades. The Assam earthquake of August 15, 1950, of moment magnitude 8.7 and epicentered near Rima, Tibet, was one of the major earthquakes which caused serious structural damage and widespread soil liquefaction in and around the region. Hence, the study of the ground motion characteristics of Guwahati city is essential. In the present work, 1D equivalent linear ground response analysis (GRA) has been adopted using the DEEPSOIL software. The analysis has been done for two typical sites, namely Panbazar and Azara, comprising a total of four borehole locations in Guwahati city, India. GRA of the sites is carried out using input motions recorded at Nongpoh station (recorded PGA 0.048 g) and Nongstoin station (recorded PGA 0.047 g) during the 1997 Indo-Burma earthquake. In comparison to the motion recorded at Nongpoh, different amplifications of bedrock peak ground acceleration (PGA) are obtained for all the boreholes with the motion recorded at Nongstoin station, although the Fourier amplitude ratios (FAR) and fundamental frequencies remain almost the same. The differences in recorded duration and frequency content of the two motions mainly influence the amplification of the motions, thus giving different surface PGA and amplification factors for a constant bedrock PGA. From the response spectra results, it is found that at periods of less than 0.2 s the ground motion recorded at Nongpoh station gives a higher spectral acceleration (SA) on structures than that recorded at Nongstoin station, while for periods greater than 0.2 s the ground motion recorded at Nongstoin station gives a higher SA on structures than that recorded at Nongpoh station.
Keywords: fourier amplitude ratio, ground response analysis, peak ground acceleration, spectral acceleration
Procedia PDF Downloads 179
256 Intrastromal Donor Limbal Segments Implantation as a Surgical Treatment of Progressive Keratoconus: Clinical and Functional Results
Authors: Mikhail Panes, Sergei Pozniak, Nikolai Pozniak
Abstract:
Purpose: To evaluate the effectiveness of intrastromal implantation of donor limbal segments for the treatment of progressive keratoconus, with particular attention to the main characteristics of the corneal endothelial cells. Setting: Outpatient ophthalmic clinic. Methods: Twenty patients (20 eyes) with progressive keratoconus of Amsler stages II-III were recruited. The worse eye was treated with transplantation of donor limbal segments into the recipient corneal stroma, while the fellow eye was left untreated as a control for functional and morphological changes. In addition, twenty patients (20 eyes) without progressive keratoconus were used as a control for corneal endothelial cell changes. All patients underwent a complete ocular examination including uncorrected and corrected distance visual acuity (UDVA, CDVA), slit-lamp examination, fundus examination, corneal topography and pachymetry, auto-keratometry, Anterior Segment Optical Coherence Tomography and Corneal Endothelial Specular Microscopy. Results: After two years, a statistically significant improvement in UDVA and CDVA (on average two lines for UDVA and three to four lines for CDVA) was noted. In addition, corneal astigmatism decreased from 5.82 ± 2.64 D to 1.92 ± 1.4 D. There were no statistically significant changes in mean spherical equivalent, keratometry or pachymetry indicators. Notably, after two years there were also no significant changes in the number or morphology of corneal endothelial cells, which can be regarded as stabilization of the process. In the untreated control eyes, there was a general trend towards worsening of UDVA, CDVA and corneal thickness, while corneal astigmatism increased. Conclusion: Intrastromal implantation of donor limbal segments is a safe technique for keratoconus treatment and an efficient procedure to stabilize and improve progressive keratoconus.
Keywords: corneal endothelial cells, intrastromal donor limbal segments, progressive keratoconus, surgical treatment of keratoconus
Procedia PDF Downloads 282
255 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application concerns the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (i.e., whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values, whose table was designed from simulations. The approach was compared with other techniques for the univariate case; it differs qualitatively and quantitatively in ease of implementation, computation speed and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are further strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most of the previous works. At the moment the extension to the two-dimensional case has been completed, which allows up to five parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but offers more efficient alternatives in nonstandard problems and on large amounts of data.
Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
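As a rough illustration of the idea described above, the following Python sketch computes an integrated absolute difference between two fitted normal densities and compares it against a critical value simulated under the null of identical parameters. The exact CDF-based transform and the critical-value tables of the paper are not reproduced here; the grid, sample sizes and simulation settings are assumptions made purely for the example.

```python
import numpy as np
from scipy import stats

def density_difference_stat(mu1, sd1, mu2, sd2, grid_points=2001):
    """Integrated absolute difference between two normal densities,
    evaluated numerically on a common grid (one reading of the
    density-difference measure described in the abstract)."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    x = np.linspace(lo, hi, grid_points)
    diff = np.abs(stats.norm.pdf(x, mu1, sd1) - stats.norm.pdf(x, mu2, sd2))
    return np.trapz(diff, x)

def simulated_critical_value(n, alpha=0.05, n_sim=2000, seed=0):
    """Critical value under the null that both samples come from the same
    normal distribution, following the simulation idea in the abstract."""
    rng = np.random.default_rng(seed)
    null_stats = []
    for _ in range(n_sim):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n)
        null_stats.append(density_difference_stat(a.mean(), a.std(ddof=1),
                                                  b.mean(), b.std(ddof=1)))
    return np.quantile(null_stats, 1 - alpha)

# Joint mean-variance comparison of two illustrative samples
rng = np.random.default_rng(1)
x, y = rng.normal(0.0, 1.0, 200), rng.normal(0.3, 1.3, 200)
observed = density_difference_stat(x.mean(), x.std(ddof=1), y.mean(), y.std(ddof=1))
print("reject H0" if observed > simulated_critical_value(200) else "fail to reject H0")
```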
Procedia PDF Downloads 176
254 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change
Authors: Ermias A. Tegegn, Million Meshesha
Abstract:
Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries, and Ethiopia is one of the seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence; with the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients develop toxicity to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values and data transformation were used for preprocessing the data. The WEKA 3.7.9 data mining tool, classification algorithms, and domain expertise were utilized to address the research problem. Using four different classification algorithms (i.e., the J48 classifier, PART rule induction, Naïve Bayes and a neural network) and adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients at drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. The study extracts informative attributes such as Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, and Gender to predict the status at drug regimen substitution. The outcome of this study can be used as an assistant tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are put forward to come up with an applicable system in the area of the study.
Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model
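The modelling step described above can be sketched in Python as follows. The original work used WEKA's J48 (a C4.5 implementation); scikit-learn's DecisionTreeClassifier (CART) is used here only as a rough analogue, and the file name, column names and pruning parameters are assumptions made for illustration, not the study's actual configuration.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical pre-processed dataset with the attributes named in the abstract
# (Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, Gender) and a
# binary "status" target for living status at regimen change.
df = pd.read_csv("art_regimen_change.csv")
X = pd.get_dummies(df.drop(columns=["status"]))     # one-hot encode categorical attributes
y = df["status"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# A pruned tree: depth and leaf-size limits stand in for J48's post-pruning
clf = DecisionTreeClassifier(max_depth=6, min_samples_leaf=10, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy ", accuracy_score(y_test, pred))
print("precision", precision_score(y_test, pred, average="weighted"))
print("recall   ", recall_score(y_test, pred, average="weighted"))
print("F-measure", f1_score(y_test, pred, average="weighted"))
```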
Procedia PDF Downloads 142