Search results for: chronic total occlusion procedures
581 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis
Authors: Liliia N. Butymova, Vladimir Ya Modorskii
Abstract:
To ensure efficient operation of the gas transmittal GCU, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations, and is designed to ensure the absence of mechanical contact. Vibration mitigation therefore makes it possible to minimize the LP gap, so it is advantageous to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic FSI problem. The dependence of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations was studied under various gas speeds and pressures, shaft rotation speeds and vibration amplitudes, and working media. The multi-processor ANSYS CFX package was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. The vibrations of the deformed shaft are replaced with a rigid profile that moves "up-and-down" in the fixed annulus according to a prescribed harmonic law; a nonstationary gas-dynamic problem is then solved, and the time dependence of the total gas-dynamic force acting on the shaft is determined. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency, while the phase shift angle between the gas-dynamic force oscillations and the shaft displacement oscillations decreases from 3π/4 to π/2. The damping constant reaches its maximum at a gap pressure of 1 MPa. An increase of the shaft oscillation frequency from 50 to 150 Hz at P=10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant is at its maximum at 50 Hz, equaling 1.012. An increase of the shaft vibration amplitude from 20 to 80 µm at P=10 MPa raises the gas-dynamic force amplitude up to 20 times, and the damping constant increases from 0.092 to 0.251.
Calculations for various working substances (methane, perfect gas, air at 25 °C) show that at P=0.1 MPa the minimum persistent oscillating amplitude of the gas-dynamic force is observed in methane and the maximum in air; the frequency remains almost unchanged, and the phase shift in air changes from 3π/4 to π/2. At P=10 MPa the maximum oscillating amplitude of the gas-dynamic force is observed in methane and the minimum in air, and air demonstrates surging. An increase of the leakage speed through the LP from 0 to 20 m/s at P=0.1 MPa causes the oscillating amplitude of the gas-dynamic force to decrease by three orders of magnitude, while the oscillation frequency and the phase shift double and stabilize. An increase of the leakage speed from 0 to 20 m/s in the LP at P=1 MPa causes the oscillating amplitude of the gas-dynamic force to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate proved to strongly influence the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are most affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
Keywords: aeroelasticity, labyrinth packings, oscillation phase shift, vibration
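The phase shift angles reported above (e.g., 3π/4 to π/2) describe the lag between the gas-dynamic force oscillation and the shaft displacement at the excitation frequency. A minimal sketch of how such a phase shift can be extracted from two sampled harmonic signals, using synthetic signals rather than the paper's CFD output (all signal parameters here are illustrative assumptions):

```python
import numpy as np

def phase_shift(displacement, force, freq_hz, fs):
    """Estimate the phase of `force` relative to `displacement` at the
    excitation frequency `freq_hz`, for signals sampled at `fs` Hz.
    Each signal is projected onto the complex exponential at that frequency;
    the ratio of the two complex coefficients carries the phase difference."""
    n = len(displacement)
    t = np.arange(n) / fs
    basis = np.exp(-2j * np.pi * freq_hz * t)
    cd = (displacement * basis).sum()   # complex coefficient of displacement
    cf = (force * basis).sum()          # complex coefficient of force
    return np.angle(cf / cd)            # phase lead of force, in (-pi, pi]

fs = 10_000
t = np.arange(0, 1, 1 / fs)
x = 40e-6 * np.sin(2 * np.pi * 50 * t)            # 40 µm shaft motion at 50 Hz
f = 3.0 * np.sin(2 * np.pi * 50 * t + np.pi / 2)  # force leading by pi/2
print(round(phase_shift(x, f, 50, fs), 3))  # prints 1.571, i.e. pi/2
```

Because the record spans an integer number of periods, the projection recovers the imposed π/2 shift exactly; for real simulation output, windowing would be needed.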
Procedia PDF Downloads 295
580 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria
Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan
Abstract:
Due to strict government regulations to enhance the uptake of sustainability practices in industry, and noting the advances in sustainable manufacturing practices, it is necessary that the associated processes are also sustainable. Maintenance of large-scale and complex machines is a pivotal task for maintaining the uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines and prevent breakdowns, which subsequently reduces several cost heads. Selecting the best maintenance strategy for such machines is a burdensome task, as it requires the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of doing business and are giving utmost importance to sensor technologies, artificial intelligence, data analytics, automation, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability-centered, condition-based, total productive maintenance, etc.) for a large-scale and complex gearbox operating in a steel processing plant is evaluated in terms of economic, social, environmental and technical criteria. As some criteria cannot be described by exact numerical values, they are evaluated linguistically by cross-functional experts. Fuzzy sets are a powerful soft-computing technique that has proven useful for dealing with linguistic data and providing inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainability criteria, multi-criteria decision making (MCDM) approaches can be considered as potential tools.
Multi-Attributive Ideal-Real Comparative Analysis (MAIRCA) is a recent addition to the MCDM family and has proven its superiority over some well-known MCDM approaches, such as TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach that is easy to comprehend. On the other side, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), the use of Fermatean Fuzzy Sets (FFSs) has recently been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the criteria weights from experts' evaluations and use them to prioritize the different maintenance practices according to their suitability with the FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.
Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM
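The ranking logic described above combines Fermatean fuzzy ratings with MAIRCA's gap between theoretical and real ratings (lowest total gap wins). A minimal sketch of that crisp backbone, with hypothetical ratings defuzzified through the Fermatean score function μ³ − ν³; the strategies, criteria, ratings and weights are all illustrative assumptions, not the paper's data:

```python
import numpy as np

def ff_score(mu, nu):
    """Score function of a Fermatean fuzzy number (requires mu^3 + nu^3 <= 1)."""
    return mu**3 - nu**3

def mairca(matrix, weights, benefit):
    """Crisp MAIRCA: equal a-priori preference, theoretical vs. real rating
    gap per alternative; the alternative with the smallest total gap ranks first.
    benefit[j] is True for benefit criteria, False for cost criteria."""
    m, n = matrix.shape
    tp = (1.0 / m) * weights                # theoretical rating row
    norm = np.empty_like(matrix, dtype=float)
    for j in range(n):
        col = matrix[:, j]
        x = (col - col.min()) / (col.max() - col.min())
        norm[:, j] = x if benefit[j] else 1.0 - x
    tr = tp * norm                          # real rating matrix
    return (tp - tr).sum(axis=1)            # total gap per alternative

# Illustrative: 4 maintenance strategies x 3 criteria
ratings = np.array([[ff_score(0.9, 0.3), ff_score(0.6, 0.5), ff_score(0.7, 0.4)],
                    [ff_score(0.5, 0.7), ff_score(0.8, 0.2), ff_score(0.6, 0.5)],
                    [ff_score(0.7, 0.4), ff_score(0.7, 0.4), ff_score(0.9, 0.2)],
                    [ff_score(0.4, 0.8), ff_score(0.5, 0.6), ff_score(0.5, 0.6)]])
weights = np.array([0.5, 0.3, 0.2])
gaps = mairca(ratings, weights, benefit=[True, True, True])
print(int(np.argmin(gaps)))  # index of the preferred strategy
```

The actual FF-MAIRCA of the paper operates on the fuzzy numbers directly; defuzzifying first, as here, is only one simple way to illustrate the gap computation.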
Procedia PDF Downloads 137
579 Production of Bacillus Lipopeptides for Biocontrol of Postharvest Crops
Authors: Vivek Rangarajan, Kim G. Klarke
Abstract:
With overpopulation threatening the world's ability to feed itself, food production and protection have become a major issue, especially in developing countries. Almost one-third of the food produced for human consumption, around 1.3 billion tonnes, is wasted or lost annually. Postharvest decay in particular constitutes a major cause of crop loss, with about 20% of the fruits and vegetables produced lost during postharvest storage, mainly due to fungal disease. Some of the major phytopathogenic fungi affecting postharvest fruit crops in South Africa include Aspergillus, Botrytis, Penicillium, Alternaria and Sclerotinia spp. To date, control of fungal phytopathogens has depended primarily on synthetic chemical fungicides, but these chemicals pose a significant threat to the environment, mainly due to their xenobiotic properties and tendency to generate resistance in the phytopathogens. Here, an environmentally benign alternative approach to controlling postharvest fungal phytopathogens in perishable fruit crops is presented, namely the application of a bio-fungicide in the form of lipopeptide molecules. Lipopeptides are biosurfactants produced by Bacillus spp. which have been established as green, nontoxic and biodegradable molecules with antimicrobial properties. However, since Bacillus are capable of producing a large number of lipopeptide homologues with differing efficacies against distinct target organisms, the lipopeptide production conditions and strategy are critical to produce the maximum lipopeptide concentration with homologue ratios to specification for optimum bio-fungicide efficacy. Process conditions, and their impact on Bacillus lipopeptide production, were evaluated in fully instrumented laboratory-scale bioreactors under well-regulated, controlled and defined environments. Factors such as oxygen availability and trace element and nitrate concentrations had profound influences on lipopeptide yield, productivity and selectivity.
Lipopeptide yield and homologue selectivity were enhanced in cultures where the oxygen in the sparge gas was increased from 21 to 30 mole%. The addition of trace elements, particularly Fe²⁺, increased the total concentration of lipopeptides, and a nitrate concentration equivalent to 8 g/L ammonium nitrate resulted in optimum lipopeptide yield and homologue selectivity. Efficacy studies of the culture supernatant containing the crude lipopeptide mixture were conducted using phytopathogens isolated from fruit in the field and identified using genetic sequencing. The supernatant exhibited antifungal activity against all the test isolates, namely Lewia, Botrytis, Penicillium, Alternaria and Sclerotinia spp., even in this crude form. Thus the efficacy of the lipopeptide product in controlling the main diseases has been confirmed, even in basic crude form. Future studies will be directed towards purification of the lipopeptide product and enhancement of efficacy.
Keywords: antifungal efficacy, biocontrol, lipopeptide production, perishable crops
Procedia PDF Downloads 403
578 Wood as a Climate Buffer in a Supermarket
Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø
Abstract:
Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffer potential can counteract the influence of the outer climate on the building. The mass of moisture involved in the buffering is defined as the potential hygrothermal mass, which can serve as energy storage in a building. This works like a natural heat pump, where the moisture actively damps the diurnal changes. In Norway, the ability of wood to buffer the climate is being tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. It was built in 2015 and has a shopping area, including toilet and entrance, of 975 m². The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, the wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy-intensive, and this energy potential can be exploited: examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy to allow dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating, so the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history. This prevents adjustments to smaller weather changes. The ventilation control has three zones. During summer, moisture is retained so that drying can damp the effect of solar radiation.
In the wintertime, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing the moisture or drying it out according to the weather prognoses is defined. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising: when the time-dependent buffer capacity of materials is included, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast
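The three-zone control regime described above, a five-day forecast plus a two-day history selecting a summer, winter, or shoulder-season moisture strategy, can be sketched as a simple decision rule. This is an illustrative reconstruction under assumed temperature thresholds, not the building's actual controller logic:

```python
from statistics import mean

def ventilation_mode(history_temps, forecast_temps,
                     heating_threshold=5.0, cooling_threshold=18.0):
    """Illustrative three-zone control rule: a 2-day temperature history damps
    reactions to brief weather swings, while a 5-day forecast anticipates the
    coming heating or cooling need. Thresholds (deg C) are assumed values."""
    outlook = mean(list(history_temps) + list(forecast_temps))
    if outlook >= cooling_threshold:
        return "summer: retain moisture, dry surfaces to damp solar gains"
    if outlook <= heating_threshold:
        return "winter: admit moist air; absorbed moisture slows night cooling"
    return "shoulder season: store or dry moisture per the forecast trend"

# A cold spell above the Arctic Circle selects the winter regime
mode = ventilation_mode(history_temps=[-4.0, -6.5],
                        forecast_temps=[-3.0, -5.0, -7.0, -2.0, -1.0])
print(mode)
```

Averaging history and forecast together is one simple way to realize the stated goal of not reacting to smaller weather changes; a weighted or trend-based blend would be an obvious refinement.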
Procedia PDF Downloads 213
577 Glycemic Control in Rice Consumption among Households with Diabetes Patients: The Role of Food Security
Authors: Chandanee Wasana Kalansooriya
Abstract:
Dietary behaviour is a crucial factor affecting diabetes control. With increasing rates of diabetes prevalence in Asian countries, examining their dietary patterns, which are largely based on rice, is timely. It has been identified that higher consumption of some rice varieties is associated with an increased risk of type 2 diabetes. Although diabetes patients are advised to consume healthier rice varieties with a low glycemic index, several conditions, one of which is food insecurity, make it difficult for them to follow those healthy dietary guidelines. Hence this study investigates how food security affects the rice consumption decisions of diabetes-affected households, using a sample from Sri Lanka, a country where rice is the staple food and which records the highest diabetes prevalence rate in South Asia. The study uses data from the Household Income and Expenditure Survey 2016, a nationally representative survey conducted by the Department of Census and Statistics, Sri Lanka. The survey used a two-stage stratified sampling method to cover the different sectors and districts of the country and collected micro-data on demographics, health, income and expenditures of different categories. The study uses data from 2547 households which consist of one or more diabetes patients, based on self-recorded health status. The Household Dietary Diversity Score (HDDS), constructed from twelve food groups, is used to measure the level of food security. Rice is categorized into three groups according to Glycemic Index (GI): high GI, medium GI and low GI, and the likelihood of consumption and the impact of food security on each rice category are estimated using a two-part model. The share of each rice category in total rice consumption is used as the dependent variable to avoid the endogeneity issue between rice consumption and the HDDS.
The results indicate that the consumption of medium-GI rice is likely to increase with increasing household food security, but that of low-GI varieties is not. Households in the rural and estate sectors are less likely, and the Tamil ethnic group is more likely, to consume low-GI rice varieties. Further, an increase in food security significantly decreases the consumption share of low-GI rice, while it increases the share of medium-GI varieties. The consumption share of low-GI rice is largely affected by ethnic variability. The effects of food security on the likelihood of consuming high-GI rice varieties and on changes in their share are statistically insignificant. Accordingly, the study concludes that a higher level of food security does not ensure that diabetes patients consume healthy rice varieties or reduce their consumption of unhealthy varieties. Hence, policy attention must be directed towards educating people to make healthy dietary choices. Further, the study provides room for further research, as it reveals considerable ethnic and sectoral differences in healthy dietary decisions.
Keywords: diabetes, food security, glycemic index, rice consumption
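The two-part model used above separates the decision to consume a rice category at all from the share consumed, given that it is positive. A minimal sketch on synthetic data (not the HIES data; the covariate, coefficients and sample are all illustrative assumptions), with part one fitted as a logistic regression and part two as OLS on the positive-share subsample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: x = a dietary-diversity-style covariate,
# y = share of medium-GI rice in total rice consumption (0 if none consumed)
n = 500
x = rng.uniform(0, 12, n)
consumes = rng.random(n) < 1 / (1 + np.exp(-(0.4 * x - 2)))   # part-1 truth
share = np.clip(0.2 + 0.03 * x + rng.normal(0, 0.05, n), 0, 1)
y = np.where(consumes, share, 0.0)

# Part 1: logistic regression for P(y > 0 | x), fitted by gradient ascent
X = np.column_stack([np.ones(n), x])
d = (y > 0).astype(float)
beta = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (d - p) / n

# Part 2: OLS for E[y | y > 0, x], using only households with a positive share
pos = y > 0
gamma, *_ = np.linalg.lstsq(X[pos], y[pos], rcond=None)

def expected_share(z):
    """Unconditional expectation: P(consume | z) * E[share | consume, z]."""
    xi = np.array([1.0, z])
    return float((1 / (1 + np.exp(-xi @ beta))) * (xi @ gamma))

print(round(expected_share(10.0), 2))
```

Modeling shares rather than quantities, as the abstract notes, is what sidesteps the endogeneity between total rice consumption and the HDDS.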
Procedia PDF Downloads 100
576 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode
Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel
Abstract:
In recent years, with the decreasing supply of fossil fuels, demand for renewable energy has increased significantly. Biodiesel, well known as vegetable-oil-based fatty acid methyl ester, is an alternative fuel for diesel. It can be produced by the transesterification of vegetable oils, such as palm oil, sunflower oil, rapeseed oil, etc., with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to 10 wt% of the total biodiesel production. To date, due to the fast growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly, resulting in a significant price drop for glycerol. Therefore, extensive research has been devoted to using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, acrolein, etc. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Currently, that approach is mainly focused on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals; electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis study was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes under acidic conditions. Various parameters such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated.
The products obtained were characterized using gas chromatography-mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary-phase column. Under the optimized reaction conditions, the glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7% and 5%, respectively. Based on the products obtained, a reaction mechanism for this process is proposed. In conclusion, this study has successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value and can be used in the pharmaceutical, food and cosmetic industries. This study effectively opens a new approach for the electrochemical conversion of glycerol. For further enhancement of the product selectivity, the electrode material is an important parameter to be considered.
Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode
Procedia PDF Downloads 191
575 Experience of Two Major Research Centers in the Diagnosis of Cardiac Amyloidosis from Transthyretin
Authors: Ioannis Panagiotopoulos, Aristidis Anastasakis, Konstantinos Toutouzas, Ioannis Iakovou, Charalampos Vlachopoulos, Vasilis Voudris, Georgios Tziomalos, Konstantinos Tsioufis, Efstathios Kastritis, Alexandros Briassoulis, Kimon Stamatelopoulos, Alexios Antonopoulos, Paraskevi Exadaktylou, Evanthia Giannoula, Anastasia Katinioti, Maria Kalantzi, Evangelos Leontiadis, Eftychia Smparouni, Ioannis Malakos, Nikolaos Aravanis, Argyrios Doumas, Maria Koutelou
Abstract:
Introduction: Cardiac amyloidosis from transthyretin (ATTR-CA) is an infiltrative disease characterized by the deposition of pathological transthyretin complexes in the myocardium. This study describes the characteristics of patients diagnosed with ATTR-CA from 2019 to the present at the Nuclear Medicine Department of Onassis Cardiac Surgery Center and at AHEPA Hospital. These centers have extensive experience in amyloidosis and modern technological equipment for its diagnosis. Materials and Methods: Records of consecutive patients (N=73) diagnosed with any type of amyloidosis were collected, analyzed, and prospectively followed. The diagnosis of amyloidosis was made using specific myocardial scintigraphy with Tc-99m DPD. Demographic characteristics, including age, gender, marital status, height, and weight, were collected in a database. Clinical characteristics, such as amyloidosis type (ATTR and AL), serum biomarkers (BNP, troponin), electrocardiographic findings, ultrasound findings, NYHA class, aortic valve replacement, device implants, and medication history, were also collected. Some of the most significant results are presented. Results: A total of 73 cases (86% male) were diagnosed with amyloidosis over four years. The mean age at diagnosis was 82 years, and the main symptom was dyspnea. Most patients suffered from ATTR-CA (65 vs. 8 with AL). Of the ATTR-CA patients, 61 were diagnosed with the wild-type form and 2 with two rare mutations. Twenty-eight patients had systemic amyloidosis with extracardiac involvement, and 32 patients had a history of bilateral carpal tunnel syndrome. Four patients had already developed polyneuropathy, and the diagnosis was confirmed by DPD scintigraphy, which is known for its high sensitivity. Among patients with isolated cardiac involvement, only 6 had a left ventricular ejection fraction below 40%. The majority of ATTR patients underwent tafamidis treatment immediately after diagnosis.
Conclusion: In conclusion, the experiences shared by the two centers and the continuous exchange of information provide valuable insights into the diagnosis and management of cardiac amyloidosis. Clinical suspicion of amyloidosis and an early diagnostic approach are crucial, given the availability of non-invasive techniques. Cardiac scintigraphy with DPD can confirm the presence of the disease without the need for a biopsy. The ultimate goal remains the continuous education and awareness of clinical cardiologists, so that this systemic and treatable disease can be diagnosed and certified promptly and treatment can begin as soon as possible.
Keywords: amyloidosis, diagnosis, myocardial scintigraphy, Tc-99m DPD, transthyretin
Procedia PDF Downloads 86
574 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise likelihood and clustering algorithms. The matches from ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset, with a consensus between the different estimators for only one dataset. BayesN produced both higher and lower estimates when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity. Heterogeneity here means different capture rates between individuals. In these examples, tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
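The estimators compared above work from capture-with-replacement data: a pile of genotyped non-invasive samples reduced to a count of distinct individuals. A minimal sketch of the maximum-likelihood idea under an even-capture model (each individual equally likely per sample, in the spirit of Capwire's base model; the sample counts below are illustrative, and this is not the Capwire or BayesN implementation):

```python
import math

def stirling2(s, k):
    """Stirling numbers of the second kind S(s, k), exact via the standard DP."""
    table = [[0] * (k + 1) for _ in range(s + 1)]
    table[0][0] = 1
    for i in range(1, s + 1):
        for j in range(1, min(i, k) + 1):
            table[i][j] = j * table[i - 1][j] + table[i - 1][j - 1]
    return table[s][k]

def even_capture_mle(s, k, n_max=500):
    """ML estimate of population size N when s captures (with replacement,
    equal capture rates) yielded k distinct genotypes:
    P(k distinct | N, s) = S(s, k) * N! / (N - k)! / N^s, maximized over N."""
    log_s2 = math.log(stirling2(s, k))   # constant in N, kept for completeness
    best_n, best_ll = k, -math.inf
    for n in range(k, n_max + 1):
        ll = log_s2 + math.lgamma(n + 1) - math.lgamma(n - k + 1) - s * math.log(n)
        if ll > best_ll:
            best_n, best_ll = n, ll
    return best_n

# 60 scat samples genotyped to 35 distinct individuals (illustrative counts)
print(even_capture_mle(s=60, k=35))
```

The sensitivity to heterogeneity discussed above arises exactly because this likelihood assumes equal capture rates; individuals captured far more often than others violate the model.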
Procedia PDF Downloads 142
573 Analysis of the Outcome of the Treatment of Osteoradionecrosis in Patients after Radiotherapy for Head and Neck Cancer
Authors: Petr Daniel Kovarik, Matt Kennedy, James Adams, Ajay Wilson, Andy Burns, Charles Kelly, Malcolm Jackson, Rahul Patil, Shahid Iqbal
Abstract:
Introduction: Osteoradionecrosis (ORN) is a recognised toxicity of radiotherapy (RT) for head and neck cancer (HNC). The existing literature lacks any generally accepted definition and staging system for this toxicity. Objective: The objective is to analyse the outcome of the surgical and non-surgical treatments of ORN. Materials and Methods: Data on 2303 patients treated for HNC with radical or adjuvant RT or RT-chemotherapy between January 2010 and December 2021 were retrospectively analysed. Median follow-up of the whole group of patients was 37 months (range 0–148 months). Results: ORN developed in 185 patients (8.1%). The location of ORN was as follows: mandible=170, maxilla=10, and extra-oral cavity=5. Multiple ORNs developed in 7 patients. The 5 patients with extra-oral cavity ORN were excluded from the treatment analysis, as their management is different. In the 180 patients with oral cavity ORN, median follow-up was 59 months (range 5–148 months). ORN healed in 106 patients; treatment failed in 74 patients (improving=10, stable=43, and deteriorating=21). Median healing time was 14 months (range 3–86 months). Notani staging is available for 158 patients with jaw ORN and no previous surgery to the mandible (Notani class I=56, Notani class II=27, and Notani class III=76). 28 ORN (mandible=27, maxilla=1; Notani class I=23, Notani II=3, Notani III=1) healed spontaneously, with a median healing time of 7 months (range 3–46 months). In 20 patients, ORN developed after dental extraction, and in 1 patient in the neomandible after radical surgery as part of the primary treatment. In 7 patients, ORN developed and spontaneously healed in irradiated bone with no previous surgical/dental intervention. Radical resection of the ORN (segmentectomy, hemi-mandibulectomy with fibula flap) was performed in 43 patients (all mandible; Notani II=1, Notani III=39; Notani class was not established in 3 patients, as ORN developed in the neomandible).
27 patients healed (63%); 15 patients failed (improving=2, stable=5, deteriorating=8). The median time from resection to healing was 6 months (range 2–30 months). 109 patients (mandible=100, maxilla=9; Notani I=3, Notani II=23, Notani III=35; Notani class was not established in 9 patients, as ORN developed in the maxilla/neomandible) were treated conservatively using a combination of debridement, antibiotics and Pentoclo. 50 patients healed (46%), with a median healing time of 14 months (range 3–70 months); 59 patients are recorded with persistent ORN (improving=8, stable=38, deteriorating=13). Of the 109 patients treated conservatively, 13 were treated with Pentoclo only (all mandible; Notani I=6, Notani II=3, Notani III=3, 1 patient with neomandible). In total, 8 patients healed (61.5%); treatment failed in 5 patients (stable=4, deteriorating=1). Median healing time was 14 months (range 4–24 months). Extra-orally (n=5), 3 cases of ORN were in the auditory canal and 2 in the mastoid. ORN healed in one patient (auditory canal) after 32 months; treatment failed in 4 patients (improving=3, stable=1). Conclusion: The outcome of the treatment of ORN remains, in general, poor. Every effort should therefore be made to minimise the risk of development of this devastating toxicity.
Keywords: head and neck cancer, radiotherapy, osteoradionecrosis, treatment outcome
Procedia PDF Downloads 91
572 Analysis of Latest Fitness Trends in India
Authors: Amita Rana
Abstract:
From ancient to modern times, the nature of fitness activities has varied, and we can choose any form of exercise that suits our particular needs. Watchers of fitness trends say that the road to better health is paved with new possibilities along with some old ones that are poised to make a comeback. Educated, certified and experienced fitness professionals; strength training; fitness programmes for older adults; exercise and weight loss; children and obesity; personal training; core training; group personal training; Zumba and other dance workouts; functional fitness; yoga; comprehensive health promotion programmes at the worksite; boot camp; outdoor activities; reaching new markets; spinning; sport-specific training; worker incentive programmes; wellness coaching; and physician referrals are among the fitness trends included in worldwide surveys. However, trends related to fitness in India could be the same or different. Hence, the present paper attempts to analyze the latest fitness trends in India. A total of eighteen (18) surveys were shortlisted on the basis of their relevance to the topic of study and were arranged in descending chronological order. Content analysis was done after the preliminary data collection, which formed the basis of the grouping of data. Further, frequency and percentage were used to represent the data statistically. It can be concluded from the analysis of data on recent fitness trends in India that yoga dominates the fitness activity list, followed by numerous other activities including running, Zumba and sh'bam, boot camp, boxing, kickboxing, cycling, swimming, TRX, ass-pocalypse, ballet, biking, bokwa fitness, dance-iso-bic, masala bhangra, outdoor activities, pilates, planks, push-ups, sofa workouts, stairs workouts, tabata training, and twerking.
Body-weight/gym-based strength training as well as high-intensity interval training dominate the preferred workouts, followed by mixed workouts, cross-training workouts, express workouts, functional fitness, natural body movements, personalized training, and stay-at-home workouts. The general areas featured in the latest fitness trends in India demonstrate that fitness is making an impact on all sections of society, be it children, women, older adults, senior citizens, or worksite fitness. Fitness is becoming the lifestyle of the masses. People are exercising for weight loss, combining diet with exercise, and prefer sweating and participating in group fitness activities and wellness programmes. Technology is another area with a high impact on people's lives: they are using wearable technology for workout tracking and following numerous mobile-friendly apps. Keywords: fitness, India, survey, trend
Procedia PDF Downloads 312
571 Stainless Steel Degradation by Sulphide Mining
Authors: Aguasanta M. Sarmiento, Jose Miguel Davila, Juan Carlos Fortes, Maria Luisa de la Torre
Abstract:
Acid mine drainage (AMD) is an acidic leachate with high levels of metals and sulphates in solution, which seriously affects the durability and strength of metallic materials used in the construction of structural and mechanical components. This paper presents the results of the evolution over time of the reduction in tensile strength and defects in AISI 304 stainless steel in contact with acid mine drainage. For this purpose, a total of 30 bars with a diameter of 8 mm and a length of 14 cm were placed transversely in the course of a stream contaminated by AMD from the sulphide mines of the Iberian Pyrite Belt (SW Spain). This stream has average pH values of 2.6, a potential of 660 mV, and average concentrations of 12 g/L of sulphates, 1.2 g/L of Fe, 191 mg/L of Zn, etc. Every two months of exposure, 6 stainless steel bars were extracted from the acid stream. They were subjected to surface roughness analysis carried out with a Mitutoyo Surftest SJ-210 surface roughness tester. The analysis was carried out at three different points on 5 specimens from each series, and the average reading of each parameter was calculated in order to ensure the accuracy of the measurements and the surface coverage. The arithmetic mean roughness value (Ra), mean roughness depth (Rz) and root mean square roughness (Rq) were measured. Five specimens from each series were statically tensile tested using a universal testing machine (Servosis ME 403, 200 kN). The specimens were clamped at their ends with two grips for cylindrical sections, and the tensile force was applied at a constant speed of 0.5 kN/s, according to the requirements of standard UNE-EN ISO 6892-1:2020. To determine the modulus of elasticity, limits close to 15% and 55% of the maximum load were used, depending on the course of each test. Field Emission Scanning Electron Microscopy (FESEM) was used to observe the corrosion products and defects generated by exposure to AMD.
Energy dispersive X-ray spectrometry (EDS) was used to analyze the chemical composition of the corrosion products formed. For this purpose, small pieces were cut from the resulting specimens, cleaned and embedded in epoxy resin. The results show that after only 5 months of exposure of AISI 304 stainless steel to the mining environment, the surface roughness increases significantly, with average depths almost 6 times greater than the initial one. Cracks are observed on the surface of the material, which increase in size with the time of exposure. A large number of grains with a composition of more than 57% Pb and 16% Sn can be observed inside these cracks. Tensile tests show a reduction in the resistance of this material after only two months of exposure. The results show the serious problems that would result from using this material for mechanical components in a sulphide mining environment, not only because of the significant reduction in the lifetime of such components but also because of the implications for human safety. Keywords: acid mine drainage, corrosion, mechanical properties, stainless steel
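The roughness parameters reported above (Ra, Rq, Rz) follow standard surface-texture definitions; a minimal sketch of how they are computed from a measured profile, with made-up sample heights rather than the study's data, and a simplified single-sampling-length Rz:

```python
# Illustrative computation of the roughness parameters named in the study.
# The profile heights below are invented sample data, not measured values.

def roughness(profile):
    """Return (Ra, Rq, Rz) for a list of profile height deviations (in um)."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [z - mean for z in profile]          # deviations from the mean line
    ra = sum(abs(d) for d in dev) / n          # arithmetic mean roughness
    rq = (sum(d * d for d in dev) / n) ** 0.5  # root mean square roughness
    rz = max(dev) - min(dev)                   # peak-to-valley height (simplified Rz)
    return ra, rq, rz

profile = [0.1, -0.2, 0.4, -0.5, 0.3, -0.1, 0.2, -0.3]  # um, illustrative
ra, rq, rz = roughness(profile)
```

Note that Rz here is a single peak-to-valley value over one sampling length; the standardized Rz averages five sampling lengths, so instrument readings such as those from the SJ-210 will differ accordingly.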
Procedia PDF Downloads 5
570 Detection of Egg Proteins in Food Matrices (2011-2021)
Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli
Abstract:
Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of the allergic consumer. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels: among these, egg is included. Egg can be present as an ingredient or as contamination in raw and cooked products. The main egg allergen proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d’Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning investigations. The laboratory is ISO 17025 accredited and, since 2019, has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg (R-Biopharm® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods.
Results: The results obtained through this study showed that egg proteins were found in 2% (n=28) of food matrices, including meats and meat products (n=16), fish and fish products (n=4), bakery and pastry products (n=4), pasta (n=2), preparations for soups and broths (n=1) and ready-to-eat (n=1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Allergy to egg is one of the most common food allergies in the pediatric context. The percentage of positivity obtained from this study is, however, low. The trend over the ten years has been slightly variable, with comparable data. Keywords: allergens, food, egg proteins, immunoassay
Procedia PDF Downloads 136
569 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route
Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku
Abstract:
Introduction: Intra-uterine death is a common problem in obstetrical practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making induction of labor difficult. Misoprostol is an inexpensive synthetic prostaglandin E1 analogue that is considered valid thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in high initial peak serum concentrations compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual serum concentration decline. This is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high percentage of success and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, compared with its use outside a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for induction of labor in the third trimester for pregnancy termination. A total of 37 patients were admitted for labor induction: 26 were randomly assigned according to protocol to the oral or vaginal route (10 vs. 16), and a control group (11) not subject to the protocol was created. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the control group participants were treated individually by members of the medical staff. The main outcome of interest was the time between induction of labor and birth. The Kruskal-Wallis test was used to compare the average age, parity, woman's weight, gestational age, Bishop's score, the size of the uterus and the weight of the fetus between the four groups in the study.
The Fisher exact test was used to compare day-stay and causes in the four groups. The Mann-Whitney test was used to compare the time to expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to woman's age and weight, parity, abortion indication, Bishop's score, fetal weight and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average time from induction to birth per route (vaginal, oral, according to protocol, and not according to the protocol) was 10.43 h, 21.10 h, 15.77 h, and 21.57 h, respectively. There was no difference in maternal complications between groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use outside previously approved protocols, where complications are greater and unjustified. Keywords: inducing labor, misoprostol, pregnancy termination, third trimester
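The Mann-Whitney comparison used above can be sketched as a pure-Python U statistic; the induction-to-birth times below are illustrative placeholders, not the trial's data, and the p-value step (normal approximation or exact tables) is omitted:

```python
# Mann-Whitney U statistic for two independent samples (no tie correction
# beyond half-counting). Sample values are invented, not the trial's data.

def mann_whitney_u(a, b):
    """U statistic of sample a versus sample b: count of pairs with x > y."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5   # ties contribute half a count
    return u

vaginal = [8.5, 9.7, 10.4, 11.2]   # hours, illustrative
oral = [18.9, 20.3, 21.1, 23.5]    # hours, illustrative
u = mann_whitney_u(vaginal, oral)  # 0.0 here: every vaginal time is shorter
```

A useful sanity check is that the two one-sided statistics always sum to the number of pairs, U(a, b) + U(b, a) = len(a) * len(b).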
Procedia PDF Downloads 184
568 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specification, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are separated, and named, according to the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy. Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
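The Savitzky-Golay smoothing step mentioned above can be illustrated with the classical 5-point quadratic/cubic convolution weights (-3, 12, 17, 12, -3)/35; a minimal sketch on a synthetic spectrum (the study's actual window length and polynomial order are not stated, so these are assumptions):

```python
# 5-point quadratic/cubic Savitzky-Golay smoothing with the classical
# convolution weights. These weights exactly reproduce any polynomial of
# degree <= 3 at interior points; endpoints are left unsmoothed here.
# The "spectrum" is synthetic, not an FT-NIR measurement.

def savgol5(y):
    w = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(y)
    for i in range(2, len(y) - 2):
        out[i] = sum(w[k] * y[i - 2 + k] for k in range(5)) / 35.0
    return out

# A clean quadratic passes through the filter unchanged at interior points,
# which is exactly why SG smoothing preserves band shapes better than a
# plain moving average:
spectrum = [x * x for x in range(10)]
smoothed = savgol5(spectrum)
```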
Procedia PDF Downloads 127
567 Knowledge Creation Environment in the Iranian Universities: A Case Study
Authors: Mahdi Shaghaghi, Amir Ghaebi, Fariba Ahmadi
Abstract:
Purpose: The main purpose of the present research is to analyze the knowledge creation environment at an Iranian university (Alzahra University), as a typical university in Iran, using a combination of the i-System and Ba models. This study is necessary for understanding the determinants of knowledge creation at such a university. Methodology: To carry out the present research, which is an applied study in terms of purpose, a descriptive survey method was used. In this study, a combination of the i-System and Ba models has been used to analyze the knowledge creation environment at Alzahra University. The i-System consists of 5 constructs: intervention (input), intelligence (process), involvement (process), imagination (process), and integration (output). The Ba environment has three pillars, namely the infrastructure, the agent, and the information. The integration of these two models resulted in 11 constructs, as follows: intervention (input); infrastructure-intelligence, agent-intelligence, information-intelligence (process); infrastructure-involvement, agent-involvement, information-involvement (process); infrastructure-imagination, agent-imagination, information-imagination (process); and integration (output). These 11 constructs were incorporated into a 52-statement questionnaire, and the validity and reliability of the questionnaire were examined and confirmed. The statistical population included the faculty members of Alzahra University (344 people). A total of 181 participants were selected through the stratified random sampling technique. Descriptive statistics, the binomial test, regression analysis, and structural equation modeling (SEM) were utilized to analyze the data. Findings: The research findings indicated that among the 11 research constructs, the levels of the intervention, information-intelligence, infrastructure-involvement, and agent-imagination constructs were average and not acceptable.
The levels of the infrastructure-intelligence and information-imagination constructs ranged from average to low. The levels of the agent-intelligence and information-involvement constructs were completely average. The level of the infrastructure-imagination construct was average to high and thus considered acceptable. The levels of the agent-involvement and integration constructs were above average and in a highly acceptable condition. Furthermore, the regression analysis results indicated that only two constructs, viz. the information-imagination and agent-involvement constructs, positively and significantly correlate with the integration construct. The results of the structural equation modeling also revealed that the intervention, intelligence, and involvement constructs are related to the integration construct with the complete mediation of imagination. Discussion and conclusion: The present research suggests that knowledge creation at Alzahra University complies, to a relative extent, with the combination of the i-System and Ba models. Unlike this model, the intervention, intelligence, and involvement constructs are not directly related to the integration construct, and this seems to have three implications: 1) information sources are not frequently used to assess and identify research biases; 2) problem finding is probably of less concern at the end of studies and at the time of assessment and validation; 3) the involvement of others has a smaller role in the summarization, assessment, and validation of the research. Keywords: i-System, Ba model, knowledge creation, knowledge management, knowledge creation environment, Iranian universities
Procedia PDF Downloads 99
566 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from the source node to the destination node, whereas reliability refers to the probability of a successful connection from source to destination. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is assigned a travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods. Keywords: quality of service, reliability, transportation network, travel time
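The node-splitting transformation described above (replacing each intermediate node by an arc u→v that carries the node's travel time, capacity, and operation probability) can be sketched on a toy graph. As a minimal illustration, the following runs a plain BFS-based max flow (Edmonds-Karp) on a split network rather than the paper's full min-cost max-flow decomposition; the network data are invented:

```python
from collections import deque

# Node splitting for a flow network with node capacities: every intermediate
# node n becomes n_in -> n_out, and original arcs attach to n_out / n_in.
# A plain Edmonds-Karp max flow then respects the node capacity.
# The toy network below is illustrative, not the paper's 11-node benchmark.

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a dict-of-dicts residual capacity map."""
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:         # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                      # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:                    # push flow, update residuals
            cap[u][v] -= aug
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + aug
        flow += aug

# Intermediate node "m" with node capacity 5 is split into m_in -> m_out:
cap = {
    "s": {"m_in": 10},
    "m_in": {"m_out": 5},   # the split arc carries the node capacity
    "m_out": {"t": 10},
}
total = max_flow(cap, "s", "t")  # limited to 5 by the split arc
```

In the paper's setting the split arc would additionally carry the node's travel time weight and operation probability, with original arcs given travel time 0.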
Procedia PDF Downloads 220
565 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year
Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans
Abstract:
Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m³. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributed to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m³. We estimate the global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010.
Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25 μg/m³ would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The US standard of 12 μg/m³ would reduce premature mortality by 46% worldwide, 4% in the US and 20% in the EU. Implementing the WHO AQG of 10 μg/m³ would reduce global premature mortality by 54%: 76% in China and 59% in India. In the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes to the lower PM2.5 standards can have major impacts on global mortality rates. Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality
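The attributable-mortality logic above (exposure above a counterfactual threshold mapped through a concentration-response function to excess deaths) can be sketched with a simplified log-linear relative risk. The β coefficient, baseline deaths, and concentrations below are invented for illustration only; the study itself uses the more complex GBD 2010 integrated exposure-response functions:

```python
import math

# Simplified attributable-mortality calculation: a log-linear
# concentration-response function with a 'safe' threshold, applied to a
# baseline death count. All numbers are illustrative, not the study's.

def attributable_deaths(pm25, threshold, beta, baseline_deaths):
    """Excess deaths attributable to PM2.5 exposure above the threshold."""
    if pm25 <= threshold:
        return 0.0
    rr = math.exp(beta * (pm25 - threshold))  # relative risk
    af = (rr - 1.0) / rr                      # attributable fraction
    return af * baseline_deaths

# Lowering PM2.5 toward the threshold shrinks the attributable burden,
# which is the effect the standard-by-standard comparison quantifies:
high = attributable_deaths(pm25=60.0, threshold=7.3, beta=0.006,
                           baseline_deaths=100000)
low = attributable_deaths(pm25=10.0, threshold=7.3, beta=0.006,
                          baseline_deaths=100000)
```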
Procedia PDF Downloads 309
564 Controlling Deforestation in the Densely Populated Region of Central Java Province, Banjarnegara District, Indonesia
Authors: Guntur Bagus Pamungkas
Abstract:
As a tropical country normally rich in forest land, Indonesia has always been in the world's spotlight due to its significantly increasing rate of deforestation. On the one hand, its forests are a mainstay for maintaining the sustainability of the earth's ecosystem functions. On the other hand, they also cover various potential sources of the global economy and can therefore always be the target of investors of different scales seeking to exploit them excessively. No wonder disasters of various characteristics keep emerging. In fact, the deforestation phenomenon does not only occur in forest land areas on the main islands of Indonesia but also includes Java Island, one of the most densely populated areas in the world. Owing to its long history of deforestation, this island retains only about 9.8% of the total forest land in Indonesia, especially in Central Java Province, the most densely populated area in Java. Again, not surprisingly, this province is among the areas with the highest frequency of deforestation-related disasters, landslides in particular. One of the areas that often experiences them is Banjarnegara District, especially in mountainous areas lying between 1000 and 3000 meters above sea level, where remaining forest land can still easily be found. There even remains a largely untouched tropical rain forest whose area extends into the neighboring district of Pekalongan and is considered one of the world's remaining little paradises on Earth. The district's landscape is indeed beautiful, especially in the Dieng area, a major tourist destination in Central Java Province after Borobudur Temple. However, landslide hazards threaten this district every year; a few decades ago, a tragic event buried a settlement together with its inhabitants.
This research aims to contribute to the concept of effective forest management through monitoring the presence of remaining forest areas in this district. The research monitored deforestation rates using the Stochastic Cellular Automata-Markov Chain (SCA-MC) method, which provides a spatial simulation of land use and land cover changes (LULCC). This geospatial process uses the Landsat-8 OLI image product with Thermal Infra-Red Sensor (TIRS) Band 10 for 2020 and Landsat 5 TM with TIRS Band 6 for 2010. It is also integrated with physical and social geography issues using the QGIS 2.18.11 application with the MOLUSCE plugin, which clarifies and calculates the area of land use and cover, especially in forest areas. Using the LULCC method, the rate of forest area reduction in Banjarnegara District in 2010-2020 was calculated. Since the dependence of this area on the use of forest land is quite high, concepts and preventive actions are needed, such as rehabilitation and reforestation of critical lands, together with proper monitoring and targeted forest management to restore its ecosystem in the future. Keywords: deforestation, populous area, LULCC method, proper control and effective forest management
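The Markov-chain half of the SCA-MC method projects land-cover composition forward with a transition probability matrix estimated from the two classified maps (here, the 2010 and 2020 Landsat products). A minimal sketch, with an invented transition matrix and invented class shares rather than values derived from the study's maps:

```python
# Markov-chain projection of land-cover shares. The transition matrix and
# class shares are invented for illustration; in SCA-MC the matrix is
# estimated from classified maps of two dates (e.g. 2010 and 2020), and a
# cellular-automata step then allocates the projected change spatially.

STATES = ["forest", "agriculture", "built-up"]

# P[i][j] = probability that a cell of class i becomes class j next decade
P = [
    [0.85, 0.12, 0.03],  # forest mostly persists, with some conversion
    [0.05, 0.85, 0.10],
    [0.00, 0.00, 1.00],  # built-up assumed irreversible
]

def step(shares, P):
    """One decadal transition of the land-cover share vector."""
    n = len(P)
    return [sum(shares[i] * P[i][j] for i in range(n)) for j in range(n)]

shares_2020 = [0.40, 0.45, 0.15]   # illustrative class shares
shares_2030 = step(shares_2020, P)
```

Since each row of P sums to 1, the projection conserves total area while shifting its composition, which is what makes the projected forest loss directly comparable between decades.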
Procedia PDF Downloads 135
563 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the delivery link of the whole industrial chain and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which lead to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the recent COVID-19 pandemic, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD).
To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm by using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results and suggested helpful managerial insights for courier companies. Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
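The cost trade-off at the heart of the MILP (facility opening cost versus transport/assignment cost) can be illustrated with a tiny brute-force facility-location sketch. This deliberately ignores the routing, capacity, and refrigeration aspects of LRP-PFHD; it is the toy version of the subproblem that the MILP solver or the adaptive large-neighbourhood search handles at realistic scale, and all data are invented:

```python
from itertools import combinations

# Brute-force uncapacitated facility location: open a subset of pickup
# facilities minimising opening cost + assignment (transport) cost, with
# each customer served from its cheapest open facility. Tractable only
# for toy instances; the data below are invented.

open_cost = [10.0, 12.0, 8.0]   # cost of opening each candidate facility
assign = [                       # assign[c][f]: cost of serving customer c from f
    [4.0, 9.0, 5.0],
    [7.0, 3.0, 6.0],
    [6.0, 4.0, 9.0],
]

def best_plan(open_cost, assign):
    n = len(open_cost)
    best = (float("inf"), ())
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            cost = sum(open_cost[f] for f in subset)
            cost += sum(min(row[f] for f in subset) for row in assign)
            best = min(best, (cost, subset))
    return best

cost, opened = best_plan(open_cost, assign)
```

Enumerating all 2^n - 1 subsets is exact but explodes quickly, which is precisely why the paper resorts to a MILP formulation and a hybrid adaptive large-neighbourhood search for realistic instance sizes.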
Procedia PDF Downloads 78
562 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics or intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. Both projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future work conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to adhere correctly to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreements No. 101092043 (AGILEHAND) and No. 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 62
561 Morphology, Qualitative, and Quantitative Elemental Analysis of Pheasant Eggshells in Thailand
Authors: Kalaya Sribuddhachart, Mayuree Pumipaiboon, Mayuva Youngsabanant-Areekijseree
Abstract:
The ultrastructure of 20 species of pheasant eggshells in Thailand, (Siamese Fireback, Lophura diardi), (Silver Pheasant, Lophura nycthemera), (Kalij Pheasant, Lophura leucomelanos crawfurdii), (Kalij Pheasant, Lophura leucomelanos lineata), (Red Junglefowl, Gallus gallus spadiceus), (Crested Fireback, Lophura ignita rufa), (Green Peafowl, Pavo muticus), (Indian Peafowl, Pavo cristatus), (Grey Peacock Pheasant, Polyplectron bicalcaratum bicalcaratum), (Lesser Bornean Fireback, Lophura ignita ignita), (Green Junglefowl, Gallus varius), (Hume's Pheasant, Syrmaticus humiae humiae), (Himalayan Monal, Lophophorus impejanus), (Golden Pheasant, Chrysolophus pictus), (Ring-Neck Pheasant, Phasianus sp.), (Reeves’s Pheasant, Syrmaticus reevesi), (Polish Chicken, Gallus sp.), (Brahma Chicken, Gallus sp.), (Yellow Golden Pheasant, Chrysolophus pictus luteus), and (Lady Amherst's Pheasant, Chrysolophus amherstiae), was studied using the secondary electron imaging (SEI) and energy-dispersive X-ray analysis (EDX) detectors of a scanning electron microscope. In general, all pheasant eggshells showed 3 layers: cuticle, palisade, and mammillary. The total thickness ranged from 190.28±5.94 to 838.96±16.31 µm. The palisade layer was the thickest, followed by the mammillary and cuticle layers. The palisade layer of all pheasant eggshells contained numerous vesicular holes that formed a firm network throughout the layer. The vesicular holes differed in porosity among the eggshells, ranging from 0.23±0.05 to 0.44±0.11 µm. Meanwhile, the mammillary layer was the most compact layer, with variable shapes (broad-based V and U shapes) connecting to the shell membrane. 
Elemental analysis of the 20 species of eggshells revealed 9 apparent elements, including carbon (C), oxygen (O), calcium (Ca), phosphorus (P), sulfur (S), magnesium (Mg), silicon (Si), aluminum (Al), and copper (Cu), at percentages of 28.90-8.33%, 60.64-27.61%, 55.30-14.49%, 1.97-0.03%, 0.08-0.03%, 0.50-0.16%, 0.30-0.04%, 0.06-0.02%, and 2.67-1.73%, respectively. Ca, C, and O showed the highest elemental compositions, which are essential for pheasant embryonic development, and were mainly present as the composite structure of calcium carbonate (CaCO3) at more than 97%. Meanwhile, Mg, S, Si, Al, and P were major inorganic constituents of the eggshells, directly related to an increase in shell hardness. Finally, the heavy metal copper (Cu) was observed in 4 eggshell species: Golden Pheasant (2.67±0.16%), Indian Peafowl (2.61±0.13%), Green Peafowl (1.97±0.74%), and Silver Pheasant (1.73±0.11%). A non-significant difference was found in the percentages of the 9 elements among all pheasant eggshells. This study provides useful biological and taxonomic information for pheasant research and conservation in Thailand.
Keywords: pheasants eggshells, secondary electron imaging (SEI) and energy dispersive X-ray analysis (EDX), morphology, Thailand
Procedia PDF Downloads 234
560 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers
Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin
Abstract:
Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance. Furthermore, there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High, and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. The stimulus words were selected according to their emotional valence (positive, negative, or neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. It was anticipated that there would be: a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups. 
Concerning the response times, our results showed no difference between the four groups. Furthermore, within each group, the average response times were very close regardless of emotional valence. However, group differences appeared when considering the error rates. The Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without any specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, during these learning processes, if anxiety affects the access to phonological representations, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
Keywords: anxiety, emotional valence, childhood, lexical access
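The two dependent measures analysed above, mean response time (typically on correct trials only) and error rate per anxiety group and emotional valence, amount to straightforward bookkeeping over trial records. The sketch below assumes a flat list of trial dictionaries with hypothetical field names (group, valence, rt_ms, correct); it is not the authors' analysis script.

```python
from collections import defaultdict
from statistics import mean

def summarize(trials):
    """Mean response time (correct trials only) and error rate per (group, valence) cell."""
    rts = defaultdict(list)
    errs = defaultdict(list)
    for t in trials:
        key = (t["group"], t["valence"])
        errs[key].append(0 if t["correct"] else 1)
        if t["correct"]:
            rts[key].append(t["rt_ms"])
    return {k: {"mean_rt_ms": round(mean(rts[k]), 1) if rts[k] else None,
                "error_rate": round(mean(errs[k]), 3)}
            for k in errs}
```

With such a summary table, the group-by-valence comparisons reported in the abstract (no RT differences, but higher error rates in the above-median anxiety groups) can be read off directly before formal testing.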
Procedia PDF Downloads 286
559 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the available data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, MEDLINE, Science Direct, and PubMed, as well as grey literature, were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, and sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. 
Seven studies were graded as low risk of bias and three as high risk. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The accuracy of the screening tools showed high reliability, sensitivity, and specificity (calculated as ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
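The I² statistic reported above (13.2%) quantifies the proportion of variation across studies attributable to heterogeneity rather than chance. As a hedged illustration, it can be derived from Cochran's Q under fixed-effect inverse-variance pooling as follows; the effect sizes and variances below are invented, not the review's data.

```python
def i_squared(effects, variances):
    """Fixed-effect pooled estimate, Cochran's Q, and I² (as a percentage)."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, q, i2
```

For example, two studies with effects 0.5 and 0.7 and equal variance 0.01 pool to 0.6, and roughly half of the observed variation is heterogeneity (I² ≈ 50%).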
Procedia PDF Downloads 177
558 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electric resistivity tomography (ERT) and seismic reflection surveys were carried out. The offshore geophysical survey was performed, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system installed on the TBM. The Boring Electric Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to 3 tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel face inspections and excavated material produced by the TBM. The BEAM data were continuously monitored to check variations in the resistivity and percentage frequency effect (PFE) of the ground. 
This system provided information about rock mass conditions, potential karst risk, and potential water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with more confidence and less geotechnical risk. The approach used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electric resistivity tomography (ERT) surveys was concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
Procedia PDF Downloads 243
557 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research
Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón
Abstract:
Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most patients are non-syndromic, with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, which can lead to a spectrum of genotype-phenotype correlations that is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in order to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by the multiplex ligation-dependent probe amplification (MLPA) technique, leading to diagnosis in 6% of them. After this initial screening, 50 families were selected to be analyzed by WES, achieving a diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected for further analysis. Protein modeling with the AlphaFold2 software was performed, supporting its pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out. 
Injection of wild-type MYO6 mRNA into embryos rescued the phenotype, whereas the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigenic approach to HL in our cohort. These results highlight the importance of a combined strategy to identify candidate variants, as well as of in silico and in vivo studies to analyze and prove their pathogenicity, in order to accomplish a better understanding of the mechanisms underlying the pathophysiology of hearing impairment.
Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish
Procedia PDF Downloads 92
556 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries
Authors: Behzad Babaei, B. Gangadhara Prusty
Abstract:
The objective of this study is to assess the hypotheses that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as by selecting a composite restoration with optimized elastic moduli, when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models of the restored tooth with different occlusal cavity depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. Using a semi-circular stone part, a 400 N load was applied at two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. Quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element count did not influence the simulation results. According to the criterion of a 5% error in the stress, we found that a total of over 14,000 elements resulted in convergence of the stress. A Python script was employed to automatically assign moduli of 2-22 GPa (in increments of 4 GPa) to the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the enamel rods' direction and 63 GPa in the perpendicular direction). Linear, homogeneous, and elastic material models were considered for the dentine, enamel, and composite restorations. 108 FEA simulations were successively conducted. Results: The internal cavity angle (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration. 
The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Even when the enamel rods' directional mechanical properties were disregarded, the models with α = 100° and 105° still exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. The composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress
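A parametric study like the one above amounts to enumerating one FE run per combination of cavity depth, internal angle, and composite modulus before dispatching the solver. A minimal sketch of that bookkeeping follows; the angle values are hypothetical placeholders, and this toy 90-run grid is not the study's actual 108-simulation design.

```python
from itertools import product

def build_runs(depths_mm, angles_deg, moduli_gpa):
    """Enumerate one FE run configuration per (cavity depth, internal angle, composite modulus)."""
    return [{"depth_mm": d, "angle_deg": a, "E_composite_GPa": e}
            for d, a, e in product(depths_mm, angles_deg, moduli_gpa)]

depths = [1.5, 2.0, 2.5]           # occlusal cavity depths from the abstract
angles = [90, 95, 100, 105, 110]   # hypothetical internal cavity angles
moduli = list(range(2, 23, 4))     # 2-22 GPa in 4 GPa increments, as in the abstract

runs = build_runs(depths, angles, moduli)
```

Each dictionary in `runs` would then be passed to a scripted pre-processor (the abstract mentions a Python script driving the material assignments) and the peak maximum principal stress collected per run.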
Procedia PDF Downloads 99
555 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India
Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria
Abstract:
Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located in the State of Himachal Pradesh, India, on the river Satluj, was a big challenge, given that the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data were available for the part of the river in Tibet, whereas for India they were available only at Khab, Rampur, and Luhri. Estimation of the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchments: a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area (46,300 sq km) up to Khab is snow-fed and lies above 4,200 m. In view of the predominantly snow-fed area, the 1-in-10,000-year return period flood estimated using flood frequency analysis at Khab was considered the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, which were increased by 10% to make them instantaneous. The design flood of 4,184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based upon the catchment's response to the rainfall pattern (Probable Maximum Precipitation, PMP) observed in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. 
The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12,920.60 cumec. Final Result: As the catchment area up to Sunni Dam was divided into 2 sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (River Satluj) using the Muskingum method, and accordingly, the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. The total design flood (i.e., 2-day PMF) with a peak of 15,473 cumec was obtained. Conclusion: Even though several factors are relevant when deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since, generally, we cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data are available. Depending upon the type of data available for a particular catchment, the appropriate method is to be selected.
Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph
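The Muskingum routing step used to carry the catchment C1 hydrograph through the Satluj reach can be sketched as follows. The routing coefficients C0, C1, and C2 follow from the storage constant K and the weighting factor x (both hypothetical below; the study's calibrated values are not given in the abstract), and by construction they sum to 1.

```python
def muskingum_route(inflow, K, x, dt, out0=None):
    """Route an inflow hydrograph through a reach (K and dt in the same time units)."""
    denom = K - K * x + 0.5 * dt
    c0 = (0.5 * dt - K * x) / denom
    c1 = (0.5 * dt + K * x) / denom
    c2 = (K - K * x - 0.5 * dt) / denom
    out = [inflow[0] if out0 is None else out0]   # assume initial outflow equals initial inflow
    for i1, i2 in zip(inflow, inflow[1:]):
        out.append(c0 * i2 + c1 * i1 + c2 * out[-1])  # O2 = C0*I2 + C1*I1 + C2*O1
    return out
```

Routing a peaked inflow through the reach attenuates and delays the peak, which is exactly the "moderation effect" the abstract credits the full-hydrograph approach with capturing before the routed ordinates are added to those of catchment C2.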
Procedia PDF Downloads 325
554 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European States. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issues of the energy refurbishment of a university building in the heating-dominated climate of South Italy. More specifically, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and the results of the design are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of uncertainty in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. 
More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then, the different entries of consumption are analyzed, and for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the resulting variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of modelling assumptions made during the description of thermal zones on the evaluation of performance indexes.
Keywords: energy simulation, modelling calibration, occupant behavior, university building
Procedia PDF Downloads 138
553 Cognitive Behaviour Hypnotherapy as an Effective Intervention for Nonsuicidal Self Injury Disorder
Authors: Halima Sadia Qureshi, Urooj Sadiq, Noshi Eram Zaman
Abstract:
The goal of this study was to see how cognitive behavior hypnotherapy (CBH) affected nonsuicidal self-injury. The DSM-5 invites researchers to explore the newly added condition, nonsuicidal self-injury disorder, listed in the chapter on conditions for further study. To date, no empirically sound intervention has been proven effective for NSSI as described in the DSM-5. Nonsuicidal self-injury is defined by the DSM-5 as harming oneself physically without suicidal intention. Around 7.6% of teenagers are expected to fulfill the NSSI disorder criteria. Adolescents, particularly university students, account for around 87 percent of self-harm studies. Furthermore, one of the risks associated with NSSI is an increased chance of suicide attempts, and in most cases, the cycle repeats itself. The emotional and psychological components of the illness might lead to suicide, either intentionally or unintentionally. According to research done at a Pakistani military hospital, over 80% of participants had no intention of committing suicide. Furthermore, it has been determined that improvements in NSSI prevention and intervention are necessary as a stand-alone strategy. The quasi-experimental study took place in Islamabad and Rawalpindi, Pakistan, from May 2019 to April 2020, and included students aged 18 to 25 years from several institutions and colleges in the twin cities. According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, the individuals were assessed for >2 episodes without suicidal intent using the intentional self-harm questionnaire. The Clinician-Administered Nonsuicidal Self-Injury Disorder Index (CANDI) was used to assess the individuals for NSSI disorder. The Symptom Checklist-90 (SCL-90) was used to screen the participants for differential diagnosis. The McLean Screening Instrument for Borderline Personality Disorder (MSI-BPD) was used to rule out BPD cases. From a screening sample of 600, n=106 participants were selected. 
They were further screened to meet the inclusion and exclusion criteria, and a total of n=71 were split into two groups: intervention and control. The intervention group received cognitive behavior hypnotherapy for three months, whereas the control group received no treatment. After three months, both groups underwent post-assessment, and after a further three months, a follow-up assessment was conducted. The groups were evaluated, and SPSS 25 was used to analyse the data. The results showed that each of the two groups had 30 (50 percent) of the 60 participants. There were 41 males (68 percent) and 19 females (32 percent) in all. The bulk of the participants were between the ages of 21 and 23 (48 percent). Self-harm events were reported by 48 (80 percent) of the participants, and suicide ideation was found in 6 (ten percent). In terms of pre- and post-intervention values (d=4.90), post-intervention and follow-up assessment values (d=0.32), and pre-intervention and follow-up values (d=5.42), the study's effect size was good. The comparison of the treatment and no-treatment groups revealed that treatment was more effective than no treatment, F(1, 58) = 53.16, p<.001. The results reveal that the CBH treatment manual is effective for nonsuicidal self-injury disorder.
Keywords: NSSI, nonsuicidal self injury disorder, self-harm, self-injury, cognitive behaviour hypnotherapy, CBH
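The effect sizes reported above (e.g., d = 4.90) are Cohen's d values. A small sketch of the standard pooled-standard-deviation formulation for two independent groups is shown below; the sample data in the usage note are invented, not the trial's scores, and the abstract does not state whether the authors used this exact (independent-groups) variant or a paired one for the pre/post contrasts.

```python
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation of two independent groups."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5
```

For instance, groups with means 5 and 2 and a common standard deviation of 1 yield d = 3, a very large effect on the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large).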
Procedia PDF Downloads 180
552 Abuse against Elderly Widows in India and Selected States: An Exploration
Authors: Rasmita Mishra, Chander Shekher
Abstract:
Background: Population ageing is an inevitable outcome of the demographic transition. Due to increased life expectancy, the old-age population in India and worldwide has increased, and it will continue to grow even more rapidly in the near future. Numerous austerities have been imposed upon widows; thus, the life of widows has never been easy in India. The loss of a spouse, along with other disadvantaged socioeconomic intermediaries such as illiteracy and poverty, often makes the life of widows more difficult. Methodology: Ethical statement: The study used secondary data available in the public domain for wider use in social research; thus, no ethical consent was required for the present study. Data source: The Building a Knowledge Base on Population Aging in India (BKPAI), 2011 dataset is used to fulfill the objectives of this study. It was carried out in seven states – Himachal Pradesh, Kerala, Maharashtra, Odisha, Punjab, Tamil Nadu, and West Bengal – having a higher percentage of the population in the age group 60 years and above compared to the national average. Statistical analysis: Descriptive and inferential statistics were used to understand the prevalence of elderly widows and the incidence of abuse against them in India and the selected states. Bivariate and trivariate analyses were carried out to check the pattern of abuse by selected covariates. The chi-square test was used to verify the significance of the associations. Further, discriminant analysis (DA) was carried out to understand which factors can separate the groups of neglected and non-neglected elderly. Result: With the addition of 27 million from 2001 to 2011, the total elderly population in India is more than 100 million. A larger proportion of elderly females aged 60+ were widowed than their elderly male counterparts. This pattern was observed across the selected states and at the national level. At the national level, more than one tenth (12 percent) of the elderly experienced abuse in their lifetime. 
The incidence of abuse against elderly widows within the family was considerably higher than outside the family. This pattern was observed across the places and types of abuse considered in the study. In the discriminant analysis, the significance of the difference between neglected and non-neglected elderly on each of the independent variables was examined using group means and ANOVA. Discussion: The study is the first of its kind to assess the incidence of abuse against elderly widows using large-scale survey data. Another novelty of this study is that it assessed those states in India where the proportion of elderly is higher than the national average. The places and perpetrators involved in abuse against elderly widows clearly reflect on the safety of the present living arrangements of elderly widows. Conclusion: Due to increasing life expectancy, it is expected that the number of elderly will increase much faster than before. As women biologically live longer than men, there will be more elderly women than men. With respect to living arrangements, after the demise of the spouse, elderly widows are more likely to live with their children, who emerged as the main perpetrators of abuse.
Keywords: elderly abuse, emotional abuse, physical abuse, material abuse, psychological abuse, quality of life
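The group-mean comparison via ANOVA described in the discriminant analysis reduces, for each independent variable, to a one-way F statistic comparing between-group to within-group variability. A self-contained sketch with invented data (not the BKPAI survey values) follows.

```python
from statistics import mean

def oneway_f(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    # between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

A large F for a given covariate indicates that its group means differ markedly relative to within-group spread, which is what flags that covariate as a useful discriminator between the neglected and non-neglected elderly.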
Procedia PDF Downloads 425