Search results for: heading time
13045 A Journey to the Past: Hoşap Castle in Van
Authors: Muhammet Kurucu
Abstract:
Hoşap Castle, located in Gürpınar, Van, is one of the most important symbols of the city because it hosted sacred memories of its time. Besides the location and construction features of Güzelsu, a resort town in Van, Hoşap Castle is a remarkable site whose architecture consists of an outer fortress and an inner fortress. It is one of the Ottoman castles and was built in the 17th century by Sarı Süleyman, who was known as the bey of Mahmudi. Although some parts of Hoşap Castle have been destroyed by natural disasters, it has survived until today without total collapse, and most of its spaces have been revealed by excavations. In this study, the present condition of Hoşap Castle is observed and briefly introduced.
Keywords: Güzelsu, Hoşap Castle, natural disasters, restoration, Van
Procedia PDF Downloads 274
13044 Fracture and Fatigue Crack Growth Analysis and Modeling
Authors: Volkmar Nolting
Abstract:
Fatigue crack growth prediction has become an important topic in both engineering and non-destructive evaluation. Crack propagation is influenced by the mechanical properties of the material and is conveniently modelled by the Paris-Erdogan equation. The critical crack size and the total number of load cycles are calculated. From a Larson-Miller plot, the maximum operational temperature for a given stress level can be determined so that failure does not occur within a given time interval t. The study is used to determine a reasonable inspection cycle and thus enhances operational safety and reduces costs.
Keywords: fracture mechanics, crack growth prediction, lifetime of a component, structural health monitoring
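As a brief illustration of the Paris-Erdogan integration mentioned above, the sketch below estimates a critical crack size and the total number of load cycles for an edge crack under constant-amplitude loading. The material constants (C, m), geometry factor Y, stress range and fracture toughness are assumed illustrative values, not data from the study.

```python
import numpy as np

# Illustrative (assumed) inputs, not values from the paper
C, m = 3.0e-12, 3.0     # Paris constants (da/dN in m/cycle, dK in MPa*sqrt(m))
Y = 1.12                # geometry factor for an edge crack (assumed)
d_sigma = 100.0         # stress range, MPa
K_Ic = 60.0             # fracture toughness, MPa*sqrt(m)
a0 = 1.0e-3             # initial crack size, m

# Critical crack size: failure when K_max = Y*sigma_max*sqrt(pi*a_c) reaches K_Ic
sigma_max = d_sigma     # assume R = 0 loading, so sigma_max equals the stress range
a_c = (K_Ic / (Y * sigma_max))**2 / np.pi

# Integrate the Paris-Erdogan law da/dN = C*(dK)^m from a0 to a_c
a = np.linspace(a0, a_c, 10_000)
dK = Y * d_sigma * np.sqrt(np.pi * a)
dN_da = 1.0 / (C * dK**m)                                    # cycles per unit crack growth
N_total = np.sum(0.5 * (dN_da[1:] + dN_da[:-1]) * np.diff(a))  # trapezoidal integration

print(f"critical crack size a_c = {a_c*1e3:.1f} mm")
print(f"load cycles to failure N ~ {N_total:,.0f}")
```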
Procedia PDF Downloads 49
13043 Characters of Developing Commercial Employment Sub-Centres and Employment Density in Ahmedabad City
Authors: Bhaumik Patel, Amit Gotecha
Abstract:
Commercial centres of different hierarchies and sizes play a vital role in the growth and development of a city. Economic uncertainty and the demand for space lead to more urban sprawl and the emergence of more commercial spaces. The study focused on understanding the various indicators affecting commercial development (accessibility, infrastructure, planning and development regulations, and market forces), which can help solve many issues related to commercial urban development and can guide future employment growth centre development. The aim of the study was to review the characteristics and identify the employment density of commercial employment sub-centres through three objectives: understanding the various employment sub-centres, identifying characteristics and deriving the behaviour of employment densities, and evaluating and comparing employment sub-centres for Ahmedabad city. Three commercial employment sub-centres, one in the old city (Kalupur), a second in a highly developed commercial area (C.G. Road-Ashram Road) and a third in the latest developing commercial area (Prahladnagar), were identified by distance from the city centre, land-use diversity, access to major roads and public transport, population density in proximity, complementary land uses in proximity, and land price. Commercial activities were categorised into the retail, wholesale and service sectors and sub-categorised into various activities. From the study, the time period of establishment of a unit is a critical parameter for commercial activity, building height, and land-use diversity. Employment diversity is also a parameter for the commercial centre. The old city has retail, wholesale and trading activities and a higher commercial density with respect to both units and employment. The Prahladnagar area functions as a commercial centre due to market pressure and has developed more units than required. Employment density is higher in the centre of the city; as distance from the city centre increases, employment density and unit density decrease. The factors influencing employment density and unit density are distance from the city centre, development type, establishment time period, building density, unit density, public transport accessibility, and road connectivity.
Keywords: commercial employment sub-centres, employment density, employment diversity, unit density
Procedia PDF Downloads 142
13042 Pharmacokinetics and Safety of Pacritinib in Patients with Hepatic Impairment and Healthy Volunteers
Authors: Suliman Al-Fayoumi, Sherri Amberg, Huafeng Zhou, Jack W. Singer, James P. Dean
Abstract:
Pacritinib is an oral kinase inhibitor with specificity for JAK2, FLT3, IRAK1, and CSF1R. In clinical studies, pacritinib was well tolerated with clinical activity in patients with myelofibrosis. The most frequent adverse events (AEs) observed with pacritinib are gastrointestinal (diarrhea, nausea, and vomiting; mostly grade 1-2 in severity) and typically resolve within 2 weeks. A human ADME mass balance study demonstrated that pacritinib is predominantly cleared via hepatic metabolism and biliary excretion (>85% of administered dose). The major hepatic metabolite identified, M1, is not thought to materially contribute to the pharmacological activity of pacritinib. Hepatic diseases are known to impair hepatic blood flow, drug-metabolizing enzymes, and biliary transport systems and may affect drug absorption, disposition, efficacy, and toxicity. This phase 1 study evaluated the pharmacokinetics (PK) and safety of pacritinib and the M1 metabolite in study subjects with mild, moderate, or severe hepatic impairment (HI) and matched healthy subjects with normal liver function to determine if pacritinib dosage adjustments are necessary for patients with varying degrees of hepatic insufficiency. Study participants (aged 18-85 y) were enrolled into 4 groups based on their degree of HI as defined by Child-Pugh Clinical Assessment Score: mild (n=8), moderate (n=8), severe (n=4), and healthy volunteers (n=8) matched for age, BMI, and sex. Individuals with concomitant renal dysfunction or progressive liver disease were excluded. A single 400 mg dose of pacritinib was administered to all participants. Blood samples were obtained for PK evaluation predose and at multiple time points postdose through 168 h. Key PK parameters evaluated included maximum plasma concentration (Cmax), time to Cmax (Tmax), area under the plasma concentration time curve (AUC) from hour zero to last measurable concentration (AUC0-t), AUC extrapolated to infinity (AUC0-∞), and apparent terminal elimination half-life (t1/2). Following treatment, pacritinib was quantifiable for all study participants at 1 h through 168 h postdose. Systemic pacritinib exposure was similar between healthy volunteers and individuals with mild HI. However, there was a significant difference between those with moderate and severe HI and healthy volunteers with respect to peak concentration (Cmax) and plasma exposure (AUC0-t, AUC0-∞). Mean Cmax decreased by 47% and 57% respectively in participants with moderate and severe HI vs matched healthy volunteers. Similarly, mean AUC0-t decreased by 36% and 45% and mean AUC0-∞ decreased by 46% and 48%, respectively in individuals with moderate and severe HI vs healthy volunteers. Mean t1/2 ranged from 51.5 to 74.9 h across all groups. The variability on exposure ranged from 17.8% to 51.8% across all groups. Systemic exposure of M1 was also significantly decreased in study participants with moderate or severe HI vs. healthy participants and individuals with mild HI. These changes were not significantly dissimilar from the inter-patient variability in these parameters observed in healthy volunteers. All AEs were grade 1-2 in severity. Diarrhea and headache were the only AEs reported in >1 participant (n=4 each). Based on these observations, it is unlikely that dosage adjustments would be warranted in patients with mild, moderate, or severe HI treated with pacritinib.Keywords: pacritinib, myelofibrosis, hepatic impairment, pharmacokinetics
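For readers unfamiliar with the PK parameters reported above, the following sketch shows how Cmax, Tmax, AUC0-t, AUC0-∞ and the apparent terminal half-life are commonly derived by non-compartmental analysis. The concentration-time values are invented for illustration and are not study data.

```python
import numpy as np

# Hypothetical concentration-time profile (not study data): time in h, conc in ng/mL
t = np.array([0, 1, 2, 4, 8, 12, 24, 48, 72, 168], dtype=float)
c = np.array([0, 120, 250, 400, 330, 270, 180, 90, 45, 3], dtype=float)

cmax = c.max()                       # peak plasma concentration
tmax = t[c.argmax()]                 # time of the peak

# AUC0-t by the linear trapezoidal rule
auc_0_t = np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(t))

# Terminal elimination rate constant from a log-linear fit of the last points
tail = slice(-4, None)
slope, _ = np.polyfit(t[tail], np.log(c[tail]), 1)
lam_z = -slope                       # elimination rate constant, 1/h
t_half = np.log(2) / lam_z           # apparent terminal half-life

# Extrapolation to infinity: AUC0-inf = AUC0-t + C_last / lambda_z
auc_0_inf = auc_0_t + c[-1] / lam_z

print(f"Cmax={cmax:.0f} ng/mL, Tmax={tmax:.0f} h, AUC0-t={auc_0_t:.0f}, "
      f"AUC0-inf={auc_0_inf:.0f}, t1/2={t_half:.1f} h")
```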
Procedia PDF Downloads 298
13041 From a Top Sport Event to a Sporting Activity
Authors: Helge Rupprich, Elke Knisel
Abstract:
In a time of mediatization and reduced physical movement, it is important to turn passivity (akinesia) into physical activity to improve health. The approach is to encourage children, junior athletes, recreational athletes, and semi-professional athletes to do sports while attending a top sport event. The concept has the slogan: get out of your seat and move! A top sport event in a series of professional beach volleyball tournaments, with 330,000 live viewers, a cumulative reach of 13.70 million viewers and 215.13 million advertising contacts, is used as the framework for different sports didactic approaches, social integrative approaches and migration valuations. An important aim is to use the great radiating power of the top sport event to turn viewers of the event into active participants. Even if the goal is to improve physical activity, it is necessary to differentiate between the didactic approaches. The first approach contains psychomotor exercises with children (N=158) between two and five years, used in the project 'largest sandbox of the city'. The second approach is the social integration and promotion of activity of students (N=54) in the form of a student beach volleyball tournament. The third approach is activity in companies; it is based on the idea of health motivation of employees (N=62) in a big beach volleyball tournament. The fourth approach is to improve the sports leisure time activities of recreational athletes (N=292) in different beach volleyball tournaments. The fifth approach is to build a foreigner-friendly measure implemented in junior athlete training with the French and German junior national teams (N=16). The sixth approach is to give semi-professional athletes a tournament to develop their relation to an active life. The seventh approach is social integration for disadvantaged people (N=123) in the form of training with professional athletes. The top sport beach volleyball tournament had 80 athletes (N=80) and 34,000 viewers. In sum, 785 athletes (N=785) did sports over 13 days. Over 34,000 viewers were counted in the first three days of the top sport event. The project was evaluated positively by the City of Dresden, the politics of Saxony and the participants, and will be continued in Dresden and expanded to Jena for the 2015 season.
Keywords: beach volleyball, event, sports didactic, sports project
Procedia PDF Downloads 495
13040 Primary and Secondary Big Bangs Theory of Creation of Universe
Authors: Shyam Sunder Gupta
Abstract:
The current theory for the creation of the universe, the Big Bang theory, is widely accepted but leaves some unanswered questions. It does not explain the origin of the singularity or what causes the Big Bang. The theory also does not explain why there is such a huge amount of dark energy and dark matter in our universe. There is also the question of whether there is one universe or multiple universes, which needs to be answered. This research addresses these questions using the Bhagvat Puran and other Vedic scriptures as the basis. There is a Unique Pure Energy Field that is eternal, infinite, and the finest of all, and it never transforms when in its original form. The carrier particles of the Unique Pure Energy are Param-anus, the Fundamental Energy Particles. Param-anus and combinations of these particles create bigger particles from which the Universe gets created. For creation to initiate, the Unique Pure Energy is represented in three phases: positive phase energy, neutral phase eternal time energy, and negative phase energy. Positive phase energy further expands into three forms of creative energies (CE1, CE2 and CE3). From CE1 energy, three energy modes, the mode of activation, the mode of action, and the mode of darkness, were created. From these three modes, 16 Principles, the subtlest forms of energies, namely Pradhan, Mahat-tattva, Time, Ego, Intellect, Mind, Sound, Space, Touch, Air, Form, Fire, Taste, Water, Smell, and Earth, get created. In the Mahat-tattva, dominant in the Mode of Darkness, CE1 energy creates innumerable primary singularities from seven principles: Pradhan, Mahat-tattva, Ego, Sky, Air, Fire, and Water. CE1 energy gets divided as CE2 and enters, along with the three modes and time, each singularity; the primary Big Bang takes place, and innumerable Invisible Universes get created. Each Universe has seven coverings of these 7 principles, and each layer is 10 times thicker than the previous layer. By energy CE2, the space in the Invisible Universe under the coverings is divided into two halves. In the lower half, the process of evolution gets initiated, and seeds of 24 elements get created, out of which the 5 fundamental elements, the building blocks of matter (Sky, Air, Fire, Water and Earth), create the seeds of stars, planets, galaxies and all other matter. Since the 5 fundamental elements get created out of the mode of darkness, this explains why there is so much dark energy and dark matter in our Universe. This process of creation in the lower half of the Invisible Universe continues for 2.16 billion years. Further, in the lower part of the energy field, exactly at the centre of the Invisible Universe, a Secondary Singularity is created, through which, by the force of the Mode of Action, the secondary Big Bang takes place and the Visible Universe gets created in the shape of a lotus flower, expanding into the upper part. Visible matter starts appearing after a gap of 360,000 years. Within the Visible Universe, a small part gets created, known as the Phenomenal Material World, which is our Solar System, with the sun at the centre. The diameter of the solar planetary system is 6.4 billion km.
Keywords: invisible universe, phenomenal material world, primary Big Bang, secondary Big Bang, singularities, visible universe
Procedia PDF Downloads 89
13039 Construction of Pile Foundation Using Slow and Old Equipments at Srinagar, India
Authors: Azmat Hussain
Abstract:
The Great Taj Mahal is built on a well foundation. Well foundations can be constructed on a dry bed or after making a sand island. Caissons are relatively easy to construct provided sinking operations are smooth and without much hindrance. Well foundations have many constructional difficulties, viz. a prolonged sinking period, tilting, etc. These problems become worse and take more time when the working season is winter, especially in Indian areas like Jammu & Kashmir (Srinagar), where technology lacks. The only thing engineers can do is to wait till working conditions become suitable. A case study is presented in the paper exploring the feasibility of pile foundations.
Keywords: well foundation, pile foundation, equipment used, pile construction
Procedia PDF Downloads 270
13038 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia
Authors: Nguyen-Thanh Son
Abstract:
Floods are among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at national and regional levels because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs to reduce possible impacts on the country's economy and people's livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time-series vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The results of flood mapping were verified with the ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by the close agreement between the flood-mapped area and ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Eventually, although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and the low-resolution bias between the mapping results and ground reference data, our methods produced satisfactory results for delineating the spatiotemporal evolution of floods. The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating their management strategies for mitigating the negative effects of floods on agriculture and people's livelihoods in the country.
Keywords: MODIS, flood, mapping, Cambodia
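The accuracy figures quoted above follow the usual confusion-matrix definitions. A minimal sketch, assuming a hypothetical two-class (flooded/non-flooded) error matrix rather than the study's actual counts, is given below.

```python
import numpy as np

# Hypothetical confusion matrix (rows = mapped class, cols = reference class):
# classes are [flooded, non-flooded]; counts are illustrative, not the study's data
cm = np.array([[420,  45],
               [ 60, 475]], dtype=float)

n = cm.sum()
overall_accuracy = np.trace(cm) / n

# Cohen's kappa: agreement beyond what chance would give
p_o = overall_accuracy
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (p_o - p_e) / (1 - p_e)

print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```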
Procedia PDF Downloads 126
13037 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper is addressed to expanding our understanding of the effects of hypoxia training on our bodies to better model its dynamics and leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allow them to recognize their early hypoxia signs and symptoms, and Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science Technology Engineering Mathematics (STEM) backgrounds conducted a hypobaric training session to an altitude up to 22,000 ft (FL220) or 6,705 meters, where heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with the use of a chest strap sensor pre and post HH exposure. A pulse oximeter registered levels of saturation of oxygen (SpO2), number and duration of desaturations during the HH chamber flight. Hypoxia symptoms as described by the SACs during the HH training session were also registered. This data allowed to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure, and a fifth or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR showed no significant differences between pre and post HH exposure in most of the SACs, while Tc measures showed slight but consistent decrement changes. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds) and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Finally, real time collection of HH symptoms presented temperature somatosensory perceptions (SP) for 65% of individuals, and task-focus issues for 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery of HH symptoms within 1 minute of donning their O2 mask. The current HH study performed on this group of individuals suggests a rapid and fully reversible physiologic response after HH exposure as expected and obtained in previous studies. Our data showed consistent results between predicted versus observed SpO2 curves during HH suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered providing evidenced SP and task focusing as the earliest and most common indicators. Finally, an assessment of HH signs of symptoms in a group of heterogeneous, non-pilot individuals showed similar results to previous studies in homogeneous populations of pilots.Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
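A minimal sketch of the polynomial modelling described above (a sixth-order fit during exposure and a lower-order fit during recovery) is shown below on synthetic SpO2 traces; the sampling times, noise level and the way the 85% threshold is checked are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic SpO2 traces (illustrative only): percent saturation sampled each minute
t_exp = np.arange(0, 16, dtype=float)            # minutes at simulated altitude
spo2_exp = 98 - 1.3 * t_exp + 0.015 * t_exp**2 + rng.normal(0, 0.4, t_exp.size)

t_rec = np.arange(0, 10, dtype=float)            # minutes after donning the O2 mask
spo2_rec = 84 + 14 * (1 - np.exp(-1.5 * t_rec)) + rng.normal(0, 0.4, t_rec.size)

# Polynomial fits as described: 6th order during exposure, lower order during recovery
coef_exp = np.polyfit(t_exp, spo2_exp, deg=6)
coef_rec = np.polyfit(t_rec, spo2_rec, deg=5)

fit_exp = np.polyval(coef_exp, t_exp)
fit_rec = np.polyval(coef_rec, t_rec)

# Flag minutes where the fitted exposure curve drops below the 85% threshold
desaturated = t_exp[fit_exp < 85]
print("minutes with fitted SpO2 below 85%:", desaturated)
```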
Procedia PDF Downloads 116
13036 Changes in the Fecal Microbiome of Periparturient Dairy Cattle and Associations with the Onset of Salmonella Shedding
Authors: Lohendy Munoz-Vargas, Stephen O. Opiyo, Rose Digianantonio, Michele L. Williams, Asela Wijeratne, Gregory Habing
Abstract:
Non-typhoidal Salmonella enterica is a zoonotic pathogen with critical importance in animal and public health. The persistence of Salmonella on farms affects animal productivity and health, and represents a risk for food safety. The intestinal microbiota plays a fundamental role in the colonization and invasion of this ubiquitous microorganism. To overcome the colonization resistance imparted by the gut microbiome, Salmonella uses invasion strategies and the host inflammatory response to survive, proliferate, and establish infections with diverse clinical manifestations. Cattle serve as reservoirs of Salmonella, and periparturient cows have high prevalence of Salmonella shedding; however, to author`s best knowledge, little is known about the association between the gut microbiome and the onset of Salmonella shedding during the periparturient period. Thus, the objective of this study was to assess the association between changes in bacterial communities and the onset of Salmonella shedding in cattle approaching parturition. In a prospective cohort study, fecal samples from 98 dairy cows originating from four different farms were collected at four time points relative to calving (-3 wks, -1 wk, +1 wk, +3 wks). All 392 samples were cultured for Salmonella. Sequencing of the V4 region of the 16S rRNA gene using the Illumina platform was completed to evaluate the fecal microbiome in a selected sample subset. Analyses of microbial composition, diversity, and structure were performed according to time points, farm, and Salmonella onset status. Individual cow fecal microbiomes, predominated by Bacteroidetes, Firmicutes, Spirochaetes, and Proteobacteria phyla, significantly changed before and after parturition. Microbial communities from different farms were distinguishable based on multivariate analysis. Although there were significant differences in some bacterial taxa between Salmonella positive and negative samples, our results did not identify differences in the fecal microbial diversity or structure for cows with and without the onset of Salmonella shedding. These data suggest that determinants other than the significant changes in the fecal microbiome influence the periparturient onset of Salmonella shedding in dairy cattle.Keywords: dairy cattle, microbiome, periparturient, Salmonella
Procedia PDF Downloads 173
13035 A Longitudinal Study of the Readability of the Chairman’s Narratives in Corporate Reports: Malaysian Evidence
Authors: Azhar Abdul Rahman
Abstract:
This paper examines the readability of the chairman's narratives, as determined by the Flesch score, of a Malaysian public listed company's corporate reports from 1962 to 2009. It partially supports earlier studies which demonstrated that corporate reports were difficult to read and showed only a negligible decrease in difficulty over time. Net profit to sales and readability were significantly positively correlated, but the number of financial statements was significantly negatively correlated with readability.
Keywords: chairman's narratives, corporate communications, readability, longitudinal
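For reference, the Flesch Reading Ease score used above combines average sentence length and average syllables per word. The sketch below implements the standard formula, with a rough vowel-group heuristic standing in for a proper syllable dictionary.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels (at least one per word)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch (1948): higher scores mean easier reading
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

sample = ("The chairman reviewed the results. Profit rose because sales "
          "improved and costs were contained during the year.")
print(round(flesch_reading_ease(sample), 1))
```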
Procedia PDF Downloads 453
13034 TiO2 Solar Light Photocatalysis a Promising Treatment Method of Wastewater with Trinitrotoluene Content
Authors: Ines Nitoi, Petruta Oancea, Lucian Constantin, Laurentiu Dinu, Maria Crisan, Malina Raileanu, Ionut Cristea
Abstract:
2,4,6-Trinitrotoluene (TNT) is the most common pollutant identified in wastewater generated from munitions plants where this explosive is synthesized or handled (munitions load, assembly and pack operations). Due to their toxic and suspected carcinogenic characteristics, nitroaromatic compounds like TNT are included on the list of priority pollutants and strictly regulated in EU countries. Since their presence in water bodies is risky for human health and aquatic life, the development of powerful, modern treatment methods like photocatalysis is needed in order to assure environmental pollution mitigation. The photocatalytic degradation of TNT was carried out at pH = 7.8, in an aqueous suspension of a TiO2-based catalyst, under sunlight irradiation. The enhanced photoactivity of the catalyst in the visible domain was assured by 0.5% Fe doping. TNT degradation experiments were performed using a tubular collector-type solar photoreactor (26 UV-permeable silica glass tubes connected in series), plugged into a total recycle loop. The influence of substrate concentration and catalyst dose on the pollutant degradation and mineralization by-product (NO2-, NO3-, NH4+) formation efficiencies was studied. In order to compare the experimental results obtained under various working conditions, the measured concentrations of the pollutant and mineralization by-products have been considered as functions of irradiation time and of the cumulative photonic energy Qhν incident on the reactor surface (kJ/L). In the tested experimental conditions, at pollutant concentrations of tens of mg/L, increasing the 0.5% Fe-doped TiO2 dose up to 200 mg/L leads to an enhancement of the TNT degradation efficiency. Since doubling the TNT content has a negative effect on the pollutant degradation efficiency, under similar experimental conditions a prolonged irradiation time, from 360 to 480 min, was necessary in order to assure the compliance of the treated effluent with the limits imposed by EU legislation (TNT ≤ 10 µg/L).
Keywords: wastewater treatment, TNT, photocatalysis, environmental engineering
Procedia PDF Downloads 357
13033 Regulation of Apoptosis in Human Lung Cancer NCI-H226 Cells through Caspase-Dependent Mechanism by Benjakul Extract
Authors: Pintusorn Hansakul, Ruchilak Rattarom, Arunporn Itharat
Abstract:
Background: Benjakul, a Thai traditional herbal formulation, comprises of five plants: Piper chaba, Piper sarmentosum, Piper interruptum, Plumbago indica, and Zingiber officinale. It has been widely used to treat cancer patients in the context of folk medicine in Thailand. This study aimed to investigate the cytotoxic effect of the ethanol extract of Benjakul against three non-small cell lung cancer (NSCLC) cell lines (NCI-H226, A549, COR-L23), small cell lung cancer (SCLC) cell line NCI-H1688 and normal lung fibroblast cell line MRC-5. The study further examined the molecular mechanisms underlying its cytotoxicity via induction of apoptosis in NCI-H226 cells. Methods: The cytotoxic effect of Benjakul was determined by SRB assay. The effect of Benjakul on cell cycle distribution was assessed by flow cytometric analysis. The apoptotic effects of Benjakul were determined by sub-G1 quantitation and Annexin V-FITC/PI flow cytometric analyses as well as by changes in caspase-3 activity. Results: Benjakul exerted potent cytotoxicity on NCI-H226 and A549 cells but lower cytotoxicity on COR-L23 and NCI-H1688 cells without any cytotoxic effect on normal cells. Molecular studies showed that Benjakul extract induced G2/M phase arrest in human NCI-H226 cells in a dose-dependent manner. The highest concentration of Benjakul (150 μg/ml) led to the highest increase in the G2/M population at 12 h, followed by the highest increase in the sub-G1 population (apoptotic cells) at 60 h. Benjakul extract also induced early apoptosis (AnnexinV +/PI−) in NCI-H226 cells in a dose- and time- dependent manner. Moreover, treatment with 150 μg/ml Benjakul extract for 36 h markedly increased caspase-3 activity by 3.5-fold, and pretreatment with the general caspase inhibitor z-VAD-fmk completely abolished such activity. Conclusions: This study reveals for the first time the regulation of apoptosis in human lung cancer NCI-H226 cells through caspase-dependent mechanism by Benjakul extract.Keywords: apoptosis, Benjakul, caspase activation, cytotoxicity
Procedia PDF Downloads 443
13032 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
A large number of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper, and other materials. Mine tailings are high in water content and have very slow dewatering behavior. The efficient design of tailings dam and economical disposal of these slurries requires the knowledge of tailings consolidation behavior. The large-strain consolidation theory closely predicts the self-weight consolidation of these slurries as the theory considers the conservation of mass and momentum conservation and considers the hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as settling column test, seepage consolidation test, etc., are expensive and time-consuming for the estimation of hydraulic conductivity variation with void ratio. Inverse estimation of the constitutive relationships from the measured settlement versus time curves is explored. In this work, inverse analysis based on metaheuristics techniques will be explored for predicting the hydraulic conductivity parameters for mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the mine tailings. The proposed inverse model uses particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic data of base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
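A minimal sketch of the inverse-analysis loop described above is given below. It couples a simple particle swarm optimiser with a stand-in exponential decay forward model in place of the study's finite-difference large-strain consolidation solver; the parameter bounds, swarm settings and synthetic observations are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in forward model: base excess pore pressure decaying with time.
# The study couples PSO with a finite-difference large-strain solver; a two-parameter
# exponential decay (u0, rate) keeps this sketch short.
def forward(params, t):
    u0, rate = params
    return u0 * np.exp(-rate * t)

t_obs = np.linspace(0, 100, 25)                    # elapsed time (illustrative units)
u_obs = forward((50.0, 0.04), t_obs) + rng.normal(0, 0.5, t_obs.size)  # synthetic "data"

def misfit(params):
    return np.sum((forward(params, t_obs) - u_obs) ** 2)

# Minimal particle swarm optimisation over the two parameters
n_particles, n_iter = 30, 200
lo, hi = np.array([1.0, 1e-3]), np.array([100.0, 0.5])
x = rng.uniform(lo, hi, (n_particles, 2))          # particle positions
v = np.zeros_like(x)                               # particle velocities
p_best, p_val = x.copy(), np.array([misfit(p) for p in x])
g_best = p_best[p_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                          # inertia and acceleration weights
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, lo, hi)
    vals = np.array([misfit(p) for p in x])
    improved = vals < p_val
    p_best[improved], p_val[improved] = x[improved], vals[improved]
    g_best = p_best[p_val.argmin()].copy()

print("recovered (u0, rate):", np.round(g_best, 4))
```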
Procedia PDF Downloads 136
13031 Respiratory Health and Air Movement Within Equine Indoor Arenas
Authors: Staci McGill, Morgan Hayes, Robert Coleman, Kimberly Tumlin
Abstract:
The interaction and relationships between horses and humans have been shown to be positive for physical, mental, and emotional wellbeing, however equine spaces where these interactions occur do include some environmental risks. There are 1.7 million jobs associated with the equine industry in the United States in addition to recreational riders, owners, and volunteers who interact with horses for substantial amounts of time daily inside built structures. One specialized facility, an “indoor arena” is a semi-indoor structure used for exercising horses and exhibiting skills during competitive events. Typically, indoor arenas have a sand or sand mixture as the footing or surface over which the horse travels, and increasingly, silica sand is being recommended due to its durable nature. It was previously identified in a semi-qualitative survey that the majority of individuals using indoor arenas have environmental concerns with dust. 27% (90/333) of respondents reported respiratory issues or allergy-like symptoms while riding with 21.6% (71/329) of respondents reporting these issues while standing on the ground observing or teaching. Frequent headaches and/or lightheadedness was reported in 9.9% (33/333) of respondents while riding and in 4.3% 14/329 while on the ground. Horse respiratory health is also negatively impacted with 58% (194/333) of respondents indicating horses cough during or after time in the indoor arena. Instructors who spent time in indoor arenas self-reported more respiratory issues than those individuals who identified as smokers, highlighting the health relevance of understanding these unique structures. To further elucidate environmental concerns and self-reported health issues, 35 facility assessments were conducted in a cross-sectional sampling design in the states of Kentucky and Ohio (USA). Data, including air speeds, were collected in a grid fashion at 15 points within the indoor arenas and then mapped spatially using krigging in ARCGIS. From the spatial maps, standard variances were obtained and differences were analyzed using multivariant analysis of variances (MANOVA) and analysis of variances (ANOVA). There were no differences for the variance of the air speeds in the spaces for facility orientation, presence and type of roof ventilation, climate control systems, amount of openings, or use of fans. Variability of the air speeds in the indoor arenas was 0.25 or less. Further analysis yielded that average air speeds within the indoor arenas were lower than 100 ft/min (0.51 m/s) which is considered still air in other animal facilities. The lack of air movement means that dust clearance is reliant on particle size and weight rather than ventilation. While further work on respirable dust is necessary, this characterization of the semi-indoor environment where animals and humans interact indicates insufficient air flow to eliminate or reduce respiratory hazards. Finally, engineering solutions to address air movement deficiencies within indoor arenas or mitigate particulate matter are critical to ensuring exposures do not lead to adverse health outcomes for equine professionals, volunteers, participants, and horses within these spaces.Keywords: equine, indoor arena, ventilation, particulate matter, respiratory health
Procedia PDF Downloads 116
13030 D-Epi App: Mobile Application to Control Sodium Valproate Administration in Children with Idiopathic Epilepsy in Indonesia
Authors: Nyimas Annissa Mutiara Andini
Abstract:
There are 325,000 children younger than age 15 in the U.S. who have epilepsy. In Indonesia, 40% of the 3.5 million cases of epilepsy occur in children. The most common type of epilepsy, which affects 6 out of 10 people with the disorder, is called idiopathic epilepsy and has no identifiable cause. One of the medications most commonly used in the treatment of this childhood epilepsy is sodium valproate. Administration of sodium valproate in children is prone to failure. Nearly 60% of pediatric patients were mildly, moderately, or severely non-adherent with therapy during the first six months of treatment. Many parents or caregivers gave far less medication than prescribed, and the treatment-adherence pattern for the majority of patients was established during the first month of treatment. 42% of the patients were almost always given their medications as prescribed, but 13% had very poor adherence even in the early weeks and months of treatment. About 7% of patients initially received the medication correctly 90% of the time, but adherence dropped to around 20% within six months of starting treatment. Over the six months of observation, about four of the 14 weekly doses were missed in any given week. Such failures can cause the epilepsy to relapse. Children with currently reported epilepsy were significantly more likely than those never diagnosed to experience depression (8% vs 2%), anxiety (17% vs 3%), attention-deficit/hyperactivity disorder (23% vs 6%), developmental delay (51% vs 3%), autism/autism spectrum disorder (16% vs 1%), and headaches (14% vs 5%) (all P < 0.05). They had a greater risk of limitation in the ability to do things (relative risk: 9.22; 95% CI: 7.56-11.24), repeating a school grade (relative risk: 2.59; CI: 1.52-4.40), and potentially having unmet medical and mental health needs. On the other hand, technology can help make our lives easier. One such technology is the mobile application. A mobile app is a software program we can download and access directly using our phone. Indonesians are highly mobile-centric; they use, on average, 6.7 applications over a 30-day period. This paper describes an application that could help control sodium valproate administration in children, which we call the D-Epi app. D-Epi is a downloadable application that alerts parents or caregivers with a timer-based reminder when it is time to administer sodium valproate. It works not only as a standard alarm but also provides important information about the drug and emergency steps to take for children with epilepsy. This application could help parents and caregivers take care of a child with epilepsy in Indonesia.
Keywords: application, children, D-Epi, epilepsy
Procedia PDF Downloads 280
13029 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation
Authors: Kausar Harun, Ahmad Azmin Mohamad
Abstract:
Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved to be more accurate and time-effective. Thus, integration between these two methods is essential to closely reproduce the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by the sol-gel storage method with zinc acetate dihydrate as precursor and methanol as solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO with a wurtzite hexagonal structure was produced. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a = b = 3.2498 Å and c = 5.2068 Å, which were later used as the main input in first-principles calculations. By applying density functional theory (DFT) as implemented in the CASTEP code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) yielded the structure with the lowest energy and lattice deviations. In this study, emphasis was also given to the modification of the valence electron energy levels to overcome the underestimation in DFT calculations. The Zn and O valence corrections were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated with the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation was valid, as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing was then eliminated, as the evaluation could be done computationally.
Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles
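For reference, the refined lattice constants reported above fix the wurtzite unit-cell volume through the standard hexagonal-cell relation:

\[
V = \frac{\sqrt{3}}{2}\, a^{2} c = \frac{\sqrt{3}}{2}\,(3.2498\ \text{Å})^{2}(5.2068\ \text{Å}) \approx 47.6\ \text{Å}^{3},
\]

and give an axial ratio c/a ≈ 1.602, close to the ideal wurtzite value of √(8/3) ≈ 1.633.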
Procedia PDF Downloads 309
13028 An Iberian Study about Location of Parking Areas for Dangerous Goods
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
When lorries transport dangerous goods, there exist some legal stipulations in the European Union for assuring the security of the rest of the road users as well as of the goods being transported. In this respect, lorry drivers cannot park in usual parking areas, because they must use parking areas with special conditions, including permanent supervision by security personnel. Moreover, drivers are compelled to satisfy additional regulations about resting and driving times, which affect the practical possibility of reaching the suitable parking areas within these time limits. The "European Agreement concerning the International Carriage of Dangerous Goods by Road" (ADR) is the basic regulation on the transportation of dangerous goods imposed under the recommendations of the United Nations Economic Commission for Europe. Indeed, nowadays there are not enough parking areas adapted for dangerous goods, and no complete study has suggested the best locations to build new areas or to adapt others already existing so as to provide the areas necessary for lorry drivers to follow all the regulations. The goal of this paper is to show how many additional parking areas should be built in the Iberian Peninsula so that lorry drivers may park in such areas under their restrictions on resting and driving time. To do so, we have modeled the problem via graph theory and have applied a new efficient algorithm which determines an optimal solution for the problem of locating new parking areas to complement those already existing in the ADR for the Iberian Peninsula. The solution can be considered minimal, since the number of additional parking areas returned by the algorithm is minimal. Obviously, graph theory is a natural way to model and solve the problem proposed here because we have considered as nodes the already-existing parking areas, the loading-and-unloading locations, and the bifurcations of roads, while each edge between two nodes represents the existence of a road between both nodes (the distance between the nodes is the edge's weight). Except for bifurcations, all the nodes correspond to already-existing parking areas and, hence, the problem corresponds to determining the additional nodes in the graph such that there are no more than 100 km between two consecutive nodes representing parking areas (the maximal distance allowed by the European regulations).
Keywords: dangerous goods, parking areas, Iberian peninsula, graph-based modeling
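A toy sketch of the graph model described above is given below, using networkx to flag pairs of existing parking areas whose shortest road distance exceeds the 100 km limit; the network, distances and node names are invented, and the paper's own optimal-location algorithm is not reproduced here.

```python
import networkx as nx

# Toy road graph (illustrative, not the Iberian network): nodes are existing
# parking areas (P*) or road junctions (J*); edge weights are road distances in km.
G = nx.Graph()
G.add_weighted_edges_from([
    ("P1", "J1", 60), ("J1", "P2", 55), ("P2", "J2", 70),
    ("J2", "P3", 80), ("J1", "J2", 90), ("P1", "P4", 95),
    ("P4", "J2", 40),
])
parking = [n for n in G if n.startswith("P")]

# Pairs of parking areas whose shortest road distance exceeds the 100 km allowed
# between stops: candidate stretches along which a new area would be needed.
too_far = []
for i, a in enumerate(parking):
    for b in parking[i + 1:]:
        d = nx.shortest_path_length(G, a, b, weight="weight")
        if d > 100:
            too_far.append((a, b, d))

for a, b, d in sorted(too_far, key=lambda x: -x[2]):
    print(f"{a} -> {b}: {d} km, needs an intermediate parking area")
```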
Procedia PDF Downloads 580
13027 Interprofessional School-Based Mental Health Services for Rural Adolescents in South Australia
Authors: Garreth Kestell, Lukah Dykes, Danielle Zerk, Kyla Trewartha, Rhianon Marshall, Elena Rudnik
Abstract:
Adolescent mental health is an international priority and the impact of innovative service models must be evaluated. Secondary school-based mental health services (SBMHS) involving private general practitioners and psychologists are a model of care being trialed in South Australia. Measures of depression, anxiety, and stress are routinely collected throughout psychotherapy sessions. This research set out to quantify the impact of psychotherapy for rural adolescents in a school setting and explore the importance of session frequency. Methods: Demographics, session date and DASS21 scores from students (n=65) seen in 2016 by three psychologists working at the SBMHS were recorded. Students were aged 13-18 years (M=15.43, SD= 1.24), mostly female (F=51, M=14), attended between 1 and 23 sessions with a median of 6 sessions (MAD 5.93) in one-year. The treating psychologist collected self-administered DASS21 scores. A mixed model analysis was used with age, sex, treating psychologist, months from first session, and session number as fixed effects, with response variables of DASS depression, anxiety, and stress scores. Results: 71.5% were classified as having extreme or severe anxiety and half had extreme or severe depression and/or stress scores. On average males had a greater increase in DASS scores over time but males attending more sessions benefited most from therapy. Discussion: Psychologists are treating rural adolescents in schools for severe anxiety, depression, and stress. This pilot study indicates that a predictive model combining demographics, session frequency, and DASS scores may help identify who is most likely to benefit from individual psychotherapy. Variations in DAS scores of individuals over time indicate the need for the collection of information such as living situation and exposure to alcohol. A larger sample size and additional data are currently being collected to allow for a more robust analysis.Keywords: adolescent health, psychotherapy, school based mental health services, DAS21
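A minimal sketch of the kind of linear mixed model described above (demographic and time covariates as fixed effects, repeated DASS-21 scores nested within students) is given below using statsmodels; the data are simulated and all column names are assumptions, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated long-format data (column names are assumptions): one row per
# psychotherapy session per student.
rows = []
for sid in range(65):
    base = rng.normal(22, 8)                       # student-specific depression level
    age = rng.integers(13, 19)
    sex = rng.choice(["F", "M"], p=[0.78, 0.22])
    for s in range(rng.integers(1, 9)):
        rows.append({"student": sid, "age": age, "sex": sex,
                     "months": s * 0.75, "session": s + 1,
                     "dass_depression": base - 1.2 * s + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# Mixed model: fixed effects for demographics and time, random intercept per student
model = smf.mixedlm("dass_depression ~ age + sex + months + session",
                    data=df, groups=df["student"])
result = model.fit()
print(result.summary())
```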
Procedia PDF Downloads 166
13026 Exploring Digital Media’s Impact on Sports Sponsorship: A Global Perspective
Authors: Sylvia Chan-Olmsted, Lisa-Charlotte Wolter
Abstract:
With the continuous proliferation of media platforms, there have been tremendous changes in media consumption behaviors. From the perspective of sports sponsorship, while there is now a multitude of platforms to create brand associations, the changing media landscape and shift of message control also mean that sports sponsors will have to take into account the nature of and consumer responses toward these emerging digital media to devise effective marketing strategies. Utilizing the personal interview methodology, this study is qualitative and exploratory in nature. A total of 18 experts from European and American academics, sports marketing industry, and sports leagues/teams were interviewed to address three main research questions: 1) What are the major changes in digital technologies that are relevant to sports sponsorship; 2) How have digital media influenced the channels and platforms of sports sponsorship; and 3) How have these technologies affected the goals, strategies, and measurement of sports sponsorship. The study found that sports sponsorship has moved from consumer engagement, engagement measurement, and consequences of engagement on brand behaviors to micro-targeting one on one, engagement by context, time, and space, and activation and leveraging based on tracking and databases. From the perspective of platforms and channels, the use of mobile devices is prominent during sports content consumption. Increasing multiscreen media consumption means that sports sponsors need to optimize their investment decisions in leagues, teams, or game-related content sources, as they need to go where the fans are most engaged in. The study observed an imbalanced strategic leveraging of technology and digital infrastructure. While sports leagues have had less emphasis on brand value management via technology, sports sponsors have been much more active in utilizing technologies like mobile/LBS tools, big data/user info, real-time marketing and programmatic, and social media activation. Regardless of the new media/platforms, the study found that integration and contextualization are the two essential means of improving sports sponsorship effectiveness through technology. That is, how sponsors effectively integrate social media/mobile/second screen into their existing legacy media sponsorship plan so technology works for the experience/message instead of distracting fans. Additionally, technological advancement and attention economy amplify the importance of consumer data gathering, but sports consumer data does not mean loyalty or engagement. This study also affirms the benefit of digital media as they offer viral and pre-event activations through storytelling way before the actual event, which is critical for leveraging brand association before and after. That is, sponsors now have multiple opportunities and platforms to tell stories about their brands for longer time period. In summary, digital media facilitate fan experience, access to the brand message, multiplatform/channel presentations, storytelling, and content sharing. Nevertheless, rather than focusing on technology and media, today’s sponsors need to define what they want to focus on in terms of content themes that connect with their brands and then identify the channels/platforms. 
The big challenge for sponsors is to play to the venues/media’s specificity and its fit with the target audience and not uniformly deliver the same message in the same format on different platforms/channels.Keywords: digital media, mobile media, social media, technology, sports sponsorship
Procedia PDF Downloads 294
13025 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation, but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45deg dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then, a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate rupture models, we compare the source scaling relations vs. seismic moment Mo for the modeled rupture area S, as well as average slip Dave and the slip asperity area Sa, with similar scaling relations from the source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters, which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc value, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
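For reference, the scaling comparisons above rest on the standard definitions of seismic moment and moment magnitude:

\[
M_0 = \mu\, S\, \bar{D}, \qquad M_w = \tfrac{2}{3}\bigl(\log_{10} M_0 - 9.1\bigr) \quad (M_0\ \text{in N·m}),
\]

where μ is the shear modulus, S the rupture area and D̄ the average slip; the specific scaling relations fitted in the study are not reproduced here.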
Procedia PDF Downloads 144
13024 A Graph Theoretic Algorithm for Bandwidth Improvement in Computer Networks
Authors: Mehmet Karaata
Abstract:
Given two distinct vertices (nodes) source s and target t of a graph G = (V, E), the two node-disjoint paths problem is to identify two node-disjoint paths between s ∈ V and t ∈ V . Two paths are node-disjoint if they have no common intermediate vertices. In this paper, we present an algorithm with O(m)-time complexity for finding two node-disjoint paths between s and t in arbitrary graphs where m is the number of edges. The proposed algorithm has a wide range of applications in ensuring reliability and security of sensor, mobile and fixed communication networks.Keywords: disjoint paths, distributed systems, fault-tolerance, network routing, security
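The abstract does not give the O(m) algorithm itself; as a reference point, the sketch below uses networkx's flow-based routine to extract node-disjoint s-t paths on a small assumed graph and checks that no intermediate vertex is shared.

```python
import networkx as nx

# Small example graph; the paper's own O(m) algorithm is not reproduced here,
# this uses networkx's flow-based routine as a reference implementation.
G = nx.Graph([
    ("s", "a"), ("s", "b"), ("a", "c"), ("b", "c"),
    ("a", "d"), ("b", "e"), ("d", "t"), ("e", "t"), ("c", "t"),
])

paths = list(nx.node_disjoint_paths(G, "s", "t"))
for p in paths:
    print(" -> ".join(p))

# Sanity check: apart from s and t, no vertex appears on two of the paths
interior = [v for p in paths for v in p[1:-1]]
assert len(interior) == len(set(interior))
```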
Procedia PDF Downloads 442
13023 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of the medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance and demand indicators of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes, and lithotripters), and physicians characterized the resource provision of medical institutions in the developed models. The data source for the research was an open database of the statistical service Eurostat. This source was chosen because its databases contain complete and open information necessary for research in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering, and the k-medoids algorithm was applied. The sampled objects themselves were used as the centers of the clusters obtained, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on providing hospitals with medical personnel. The research shows strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of the medical facility. Currently, data analysis has huge potential, which allows health services to be significantly improved. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
Keywords: data analysis, demand modeling, healthcare, medical facilities
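A compact sketch of the clustering-plus-forecast-error step described above is given below: a tiny PAM-style k-medoids on correlation-based distances between whole time series, followed by a MAPE computation for a naive forecast. The series are synthetic and the distance choice is an assumption, not the study's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic yearly indicator series for a handful of "countries" (illustrative)
series = np.vstack([
    100 + 3 * np.arange(10) + rng.normal(0, 2, 10),   # steadily rising group
    105 + 3 * np.arange(10) + rng.normal(0, 2, 10),
    200 - 4 * np.arange(10) + rng.normal(0, 2, 10),   # declining group
    195 - 4 * np.arange(10) + rng.normal(0, 2, 10),
])

# Distance between whole time series: 1 - Pearson correlation of their behaviour
D = 1 - np.corrcoef(series)

def k_medoids(D, k, n_iter=100):
    """Tiny PAM-style k-medoids on a precomputed distance matrix."""
    n = D.shape[0]
    medoids = rng.choice(n, k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = np.array([
            cluster[np.argmin(D[np.ix_(cluster, cluster)].sum(axis=1))]
            for cluster in (np.where(labels == j)[0] for j in range(k))
        ])
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
    return labels, medoids

labels, medoids = k_medoids(D, k=2)
print("cluster labels:", labels)

# MAPE of a naive one-step forecast (last observed value) for the first series
actual, forecast = series[0][1:], series[0][:-1]
mape = np.mean(np.abs((actual - forecast) / actual)) * 100
print(f"naive-forecast MAPE: {mape:.2f}%")
```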
Procedia PDF Downloads 144
13022 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC & BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparison performance metrics hold as for the IID case. The time series case is complicated by far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald, as expected.Keywords: model selection inference, generalized information criteria, post model selection, Asymptotic Theory
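A small sketch of the computational step described above, finding an upper quantile for the minimum of jointly Gaussian criterion values, is given below. It uses scipy's multivariate normal CDF in place of the R package mvtnorm mentioned in the abstract, and the mean vector and covariance are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative joint distribution of centred GIC values for d = 4 candidate models;
# the mean vector and covariance are assumptions, not derived from data.
mu = np.array([0.0, 0.3, 0.5, 0.8])
rho = 0.6
cov = rho * np.ones((4, 4)) + (1 - rho) * np.eye(4)

# P(min_i X_i <= q) = 1 - P(all X_i > q); the event {X_i > q for all i} is the
# lower-orthant event for -X, which is Gaussian with mean -mu and the same covariance.
neg = multivariate_normal(mean=-mu, cov=cov)

def cdf_min(q):
    return 1.0 - neg.cdf(np.full(4, -q))

# Upper 95% quantile of the minimum GIC by bisection on its CDF
lo, hi = -10.0, 10.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if cdf_min(mid) < 0.95:
        lo = mid
    else:
        hi = mid

print(f"95th percentile of the minimum: {0.5 * (lo + hi):.3f}")
```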
Procedia PDF Downloads 89
13021 Feeling Sorry for Some Creditors
Authors: Hans Tjio, Wee Meng Seng
Abstract:
The interaction of contract and property has always been a concern in corporate and commercial law, where internal structures are created that may not match the externally perceived image generated by the labels attached to those structures. We will focus, in particular, on the priority structures created by affirmative asset partitioning, which have increasingly come under challenge by those attempting to negotiate around them. The most prominent example has been the AT1 bonds issued by Credit Suisse, which were wiped out before its equity when the troubled bank was acquired by UBS. However, this should not have come as a surprise to those whose “bonds” had similarly been “redeemed” upon the occurrence of certain reference events in countries like Singapore, Hong Kong and Taiwan during their Minibond crisis linked to US sub-prime defaults. These were derivatives classified as debentures and sold as such. At the same time, we are again witnessing “liabilities” that seemingly rank higher up the balance-sheet ladder finding themselves lowered in events of default. We will examine the mechanisms holders of perpetual securities or preference shares have tried to use to protect themselves. This is happening against a backdrop of rising strength of private credit and growing inter-creditor conflicts. Singapore’s hybrid scheme restructuring regime, while adopting the Chapter 11 absolute priority rule as the quid pro quo for creditor cramdown, does not apply the rule to shareholders and so exempts them from cramdown. Complicating the picture further, shareholders are not exempted from cramdown in the Dutch scheme, but it adopts a relative priority rule. At the same time, the important UK Supreme Court decision in BTI 2014 LLC v Sequana [2022] UKSC 25 has held that directors’ duties to take account of creditor interests are activated only when a company is almost insolvent. All this has been complicated by digital assets created by businesses. Investors are quite happy to have them classified as property (like a thing) when it comes to their transferability, but then, when the issuer defaults, to have them seen as a claim on the business (as a chose in action), which puts them at the level of a creditor. But these hidden interests will not show themselves on an issuer’s balance sheet until it is too late for them to be considered, and yet, if accepted, they may also prevent any meaningful restructuring.Keywords: asset partitioning, creditor priority, restructuring, BTI v Sequana, digital assets
Procedia PDF Downloads 76
13020 Physical, Chemical and Mineralogical Characterization of Construction and Demolition Waste Produced in Greece
Authors: C. Alexandridou, G. N. Angelopoulos, F. A. Coutelieris
Abstract:
Construction industry in Greece consumes annually more than 25 million tons of natural aggregates originating mainly from quarries. At the same time, more than 2 million tons of construction and demolition waste are deposited every year, usually without control, thereby increasing the environmental impact of this sector. A potential alternative for saving natural resources and minimizing landfilling could be the recycling and re-use of Construction and Demolition Waste (CDW) in concrete production. Moreover, in order to conform to European legislation, Greece is obliged to recycle a minimum of 70% of non-hazardous construction and demolition waste by 2020. In this paper, characterization of recycled materials, commercially and laboratory produced, coarse and fine Recycled Concrete Aggregates (RCA), has been performed. Specifically, X-ray fluorescence (XRF) and X-ray diffraction (XRD) were used for chemical and mineralogical analysis, respectively. Physical properties such as particle density, water absorption, sand equivalent and resistance to fragmentation were also determined. This study, the first of its kind in Greece, aims at outlining the differences between RCA and natural aggregates and evaluating their possible influence on concrete performance. Results indicate that the chemical composition of RCA is enriched in Si, Al and alkali oxides compared to natural aggregates. XRD analyses indicated the presence of calcite and quartz, with minor peaks of mica and feldspars. Of all the evaluated physical properties of coarse RCA, only water absorption and resistance to fragmentation seem to have a direct influence on the properties of concrete. Low sand equivalent and significantly higher water absorption values indicate that the fine fractions of RCA cannot be used for concrete production unless further processed. The chemical properties of RCA in terms of water-soluble ions are similar to those of natural aggregates. Four different concrete mixtures were produced and examined, replacing natural coarse aggregates with RCA at ratios of 0%, 25%, 50% and 75%, respectively. Results indicate that concrete mixtures containing recycled concrete aggregates show a minor deterioration of their properties (3-9% lower compressive strength at 28 days) compared to conventional concrete containing the same cement quantity.Keywords: chemical and physical characterization, compressive strength, mineralogical analysis, recycled concrete aggregates, waste management
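As a rough illustration of the mixture design described above, the sketch below tabulates, for the four replacement ratios, how much natural coarse aggregate is swapped for RCA and how much extra batching water would be needed to offset the higher water absorption of the recycled material. All masses and absorption values are assumed for illustration only; they are not the study's mix proportions.

```python
# Hypothetical batching quantities per cubic metre of concrete.
coarse_total_kg = 1100.0            # assumed total coarse aggregate per m^3
wa_natural, wa_rca = 0.008, 0.045   # assumed water absorption (mass fractions)

for ratio in (0.00, 0.25, 0.50, 0.75):
    rca_kg = coarse_total_kg * ratio
    natural_kg = coarse_total_kg - rca_kg
    # extra mixing water so the effective water-to-cement ratio stays roughly constant
    extra_water_kg = rca_kg * (wa_rca - wa_natural)
    print(f"RCA {ratio:4.0%}: natural {natural_kg:6.1f} kg, RCA {rca_kg:6.1f} kg, "
          f"extra water {extra_water_kg:5.1f} kg")
```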
Procedia PDF Downloads 234
13019 The Development of a Digitally Connected Factory Architecture to Enable Product Lifecycle Management for the Assembly of Aerostructures
Authors: Nicky Wilson, Graeme Ralph
Abstract:
Legacy aerostructure assembly is defined by large components, low build rates, and manual assembly methods. With increasing demand for commercial aircraft and emerging markets such as the eVTOL (electric vertical take-off and landing) market, current manufacturing methods are not capable of efficiently meeting these higher-rate demands. This project will look at how legacy manufacturing processes can be rate-enabled by taking a holistic view of data usage, focusing on how data can be collected to enable fully integrated digital factories and supply chains. The study will focus on how data flows both up and down the supply chain to create a digital thread specific to each part and assembly, while enabling machine learning through real-time, closed-loop feedback systems. The study will also develop a bespoke architecture to enable connectivity both within the factory and with the wider PLM (product lifecycle management) system, moving away from the traditional point-to-point systems used to connect IO devices towards a hub-and-spoke architecture that exploits report-by-exception principles. This paper outlines the key issues facing legacy aircraft manufacturers, focusing on what future manufacturing will look like when Industry 4 principles are adopted. The research also defines the data architecture of a PLM system to enable the transfer and control of a digital thread within the supply chain, and proposes a standardised communications protocol as a scalable solution for connecting IO devices within a production environment. This research comes at a critical time for aerospace manufacturers, who are seeing a shift towards the integration of digital technologies within legacy production environments while build rates continue to grow. It is vital that manufacturing processes become more efficient in order to meet these demands while also securing future work for many manufacturers.Keywords: Industry 4, digital transformation, IoT, PLM, automated assembly, connected factories
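A minimal sketch of the report-by-exception principle mentioned above: instead of streaming every sample point-to-point, a device pushes a reading to the hub only when it changes beyond a deadband or a heartbeat interval expires. The class name, tag name and thresholds are hypothetical, not part of the proposed architecture or protocol.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExceptionReporter:
    tag: str
    deadband: float              # minimum change worth reporting
    heartbeat_s: float           # maximum silence before a forced report
    _last_value: Optional[float] = None
    _last_sent: float = 0.0

    def sample(self, value: float, publish) -> None:
        """Publish only on significant change or after a quiet period."""
        now = time.monotonic()
        changed = self._last_value is None or abs(value - self._last_value) >= self.deadband
        stale = (now - self._last_sent) >= self.heartbeat_s
        if changed or stale:
            publish({"tag": self.tag, "value": value, "ts": now})
            self._last_value, self._last_sent = value, now

# The hub would subscribe to these messages; here we just print what goes upstream.
reporter = ExceptionReporter(tag="torque_gun_07", deadband=0.5, heartbeat_s=30.0)
for reading in (10.0, 10.1, 10.2, 11.0, 11.1, 13.5):
    reporter.sample(reading, publish=print)
```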
Procedia PDF Downloads 79
13018 Experimental Investigation of the Out-of-Plane Dynamic Behavior of Adhesively Bonded Composite Joints at High Strain Rates
Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Ben Yahia
Abstract:
In this investigation, an experimental technique is presented in which the dynamic response, damage kinetics and heat dissipation of adhesively bonded joint materials are measured simultaneously at high strain rates. The material used in this study is widely used in the design of structures for military applications. It was composed of a 45° bi-axial fiber-glass mat of 0.286 mm thickness in a polyester resin matrix. For the adhesive bonding, a NORPOL polyvinylester adhesive of 1 mm thickness was used to assemble the composite substrates. The experimental setup consists of a compression Split Hopkinson Pressure Bar (SHPB), a high-speed infrared camera and a high-speed Fastcam camera. For the dynamic compression tests, 13 mm × 13 mm × 9 mm samples were tested out-of-plane at strain rates from 372 to 1030 s⁻¹. The specimen surface is controlled and monitored in situ and in real time using the high-speed camera, which captures the damage progression in the specimens, and the infrared camera, which provides thermal images in time sequence. Preliminary compressive stress-strain data obtained at different strain rates show that the dynamic material strength increases with increasing strain rate. Damage investigations revealed that failure mainly occurred at the adhesive/adherend interface because of the brittle nature of the polymeric adhesive. The results show the dependency of the dynamic parameters on strain rate. A significant temperature rise was observed in the dynamic compression tests: the temperature change depends on the strain rate and the damage mode, and its maximum exceeds 100 °C. The dependence of these results on strain rate indicates that there exists a strong correlation between damage rate sensitivity and heat dissipation, which might be useful when developing damage models under dynamic loading that take into account the energy balance of adhesively bonded joints.Keywords: adhesive bonded joints, Hopkinson bars, out-of-plane tests, dynamic compression properties, damage mechanisms, heat dissipation
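As a companion to the description above, a minimal one-wave SHPB reduction sketch: the reflected bar strain gives the specimen strain rate (strain rate = -2·c0·εr/Ls) and the transmitted strain gives the specimen stress (σ = E·Abar·εt/As). The bar properties and the two Gaussian pulses below are placeholders chosen only so the peak strain rate lands near the 372-1030 s⁻¹ range reported; they are not the measured signals.

```python
import numpy as np

E_bar = 200e9                        # bar Young's modulus [Pa] (assumed steel)
c0 = 5000.0                          # bar wave speed [m/s] (assumed)
A_bar = np.pi * (0.020 / 2) ** 2     # bar cross-section [m^2], assumed 20 mm diameter
A_spec = 0.013 * 0.013               # specimen cross-section [m^2] (13 mm x 13 mm)
L_spec = 0.009                       # specimen thickness [m] (9 mm)

t = np.linspace(0, 200e-6, 2001)                          # time base [s]
eps_r = -8e-4 * np.exp(-((t - 80e-6) / 40e-6) ** 2)       # placeholder reflected pulse
eps_t = 4e-4 * np.exp(-((t - 90e-6) / 45e-6) ** 2)        # placeholder transmitted pulse

strain_rate = -2.0 * c0 / L_spec * eps_r                  # specimen strain rate [1/s]
strain = np.cumsum(strain_rate) * (t[1] - t[0])           # integrate over time
stress = E_bar * A_bar / A_spec * eps_t                   # specimen stress [Pa]

print(f"peak strain rate ~ {strain_rate.max():.0f} 1/s")
print(f"peak strain      ~ {strain.max():.3f}")
print(f"peak stress      ~ {stress.max() / 1e6:.1f} MPa")
```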
Procedia PDF Downloads 212
13017 Enzymatic Degradation of Poly (Butylene Adipate Terephthalate) Copolymer Using Lipase B From Candida Antarctica and Effect of Poly (Butylene Adipate Terephthalate) on Plant Growth
Authors: Aqsa Kanwal, Min Zhang, Faisal Sharaf, Li Chengtao
Abstract:
The world is facing increasing challenges of plastic pollution due to single-use plastic-based packaging material. Plastic material is continuously being dumped into the natural environment, causing serious harm to the entire ecosystem. Because polymer degradation in nature is very difficult, the use of biodegradable polymers instead of conventional polymers can mitigate this issue. Owing to their good mechanical properties and biodegradability, aliphatic-aromatic copolyesters are being widely commercialized. Thanks to advances in molecular biology, many studies have reported specific microbes that can effectively degrade PBAT. Aliphatic polyesters undergo hydrolytic cleavage of their ester groups, so they can be readily degraded by microorganisms. In this study, we investigated the enzymatic degradation of poly(butylene adipate terephthalate) (PBAT) copolymer using lipase B from Candida antarctica (CALB). The results showed a loss of approximately 5.16% of PBAT mass after 2 days, which increased significantly to approximately 15.7% at the end of the experiment (12 days), compared with the blank. The pH of the degradation solution also decreased significantly, reaching a minimum of 6.85 at the end of the experiment. The structure and morphology of PBAT after degradation were characterized by FTIR, XRD, SEM and TGA. FTIR analysis showed that after degradation many peaks became weaker, and the peak at 2950 cm⁻¹ had almost disappeared after 12 days. The XRD results indicated that as degradation time increases, the intensity of the diffraction peaks increases slightly compared with the blank PBAT. TGA analysis also confirmed the progressive degradation of PBAT with time, and SEM micrographs further confirmed that degradation had occurred. Hence, such biodegradable polymers can be widely used. The effect of PBAT biodegradation on plant growth was also studied, and it was found that PBAT has no toxic effect on plant growth. Hence, PBAT can be employed in a wide range of applications.Keywords: aliphatic-aromatic co-polyesters, polybutylene adipate terephthalate, lipase (CALB), biodegradation, plant growth
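A minimal sketch of the percent mass-loss arithmetic behind the figures quoted above. Only the day-2 and day-12 values correspond to the abstract; the initial mass and the intermediate weight are assumed placeholders.

```python
# Gravimetric mass-loss calculation for an enzymatic degradation time course.
initial_mass_mg = 100.0
masses_mg = {0: 100.0, 2: 94.84, 6: 91.0, 12: 84.3}   # day 6 is an assumed value

for day, mass in sorted(masses_mg.items()):
    loss_pct = (initial_mass_mg - mass) / initial_mass_mg * 100.0
    print(f"day {day:2d}: mass loss = {loss_pct:5.2f} %")
```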
Procedia PDF Downloads 79
13016 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System
Authors: A. Chávez, A. Rodríguez, F. Pinzón
Abstract:
Society is concerned about the environmental, economic and social impacts of solid waste disposal. These confinement sites, known as landfills, are locations designed to reduce pollution and damage to human health. They are technically designed and operated using engineering principles: the waste is stored in a small area, compacted to reduce its volume and covered with soil layers, thereby controlling the liquid (leachate) and gases produced by the decomposition of organic matter. Despite careful planning, site selection, monitoring and process control, the dilemma of leachate remains: its extreme concentration of pollutants devastates soil, flora and fauna, an aggressive process that requires priority attention. One biological technology is the activated sludge system, used for streams with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO₂, H₂O and sludge, removes suspended and non-settleable solids, transforms nutrients such as nitrogen and phosphorus, and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria forming heterogeneous populations. Unicellular fungi, algae, protozoa and rotifers can also be found; they process the organic carbon source and oxygen, as well as the nitrogen and phosphorus that are vital for cell synthesis. The substrate mixture, in this case sludge leachate, molasses and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns) and generate biomass that is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen bound in nitrate, releasing nitrogen (N) in the gas phase and thus avoiding the negative effects of ammonia or phosphorus. Overall, the activated sludge system operates with a hydraulic retention time of about 8 hours, which does not in itself meet the demand for nitrification, which occurs on average at an MLSS of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, an organic load to biomass inventory ratio under 0.1, and an average residence time (sludge age) of more than 8 days. This project developed a pilot system using sludge leachate from the Doña Juana landfill (RSDJ), located in Bogotá, Colombia, treated by activated sludge and extended aeration in a sequencing batch reactor (SBR) so that the effluent can be discharged to water bodies without causing ecological collapse. The system operated with a residence time of 8 days and a capacity of 30 L, removing more than 90% of BOD and COD from initial values of 1720 mg/L and 6500 mg/L, respectively. By promoting deliberate nitrification, it is expected that diffused aeration systems can be used commercially for sludge leachate from landfills.Keywords: sludge, landfill, leachate, SBR
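A minimal sketch of the removal-efficiency arithmetic implied by the abstract. The influent concentrations are the reported values; the effluent concentrations are assumed so that removal stays above the reported 90%.

```python
# Removal efficiency = (C_in - C_out) / C_in * 100, per parameter.
influent = {"BOD": 1720.0, "COD": 6500.0}     # mg/L, from the abstract
effluent = {"BOD": 150.0, "COD": 560.0}       # mg/L, assumed illustrative values

for parameter, c_in in influent.items():
    c_out = effluent[parameter]
    removal = (c_in - c_out) / c_in * 100.0
    print(f"{parameter}: {c_in:.0f} -> {c_out:.0f} mg/L, removal = {removal:.1f} %")
```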
Procedia PDF Downloads 272