Search results for: target product profile
266 Study on Electromagnetic Plasma Acceleration Using Rotating Magnetic Field Scheme
Authors: Takeru Furuawa, Kohei Takizawa, Daisuke Kuwahara, Shunjiro Shinohara
Abstract:
In the field of space propulsion, electric propulsion systems have been developed because their fuel efficiency is much higher than that of conventional chemical systems. However, practical electric propulsion systems, e.g., the ion engine, have the problem of a short lifetime due to damage to the plasma generation and acceleration electrodes. A helicon plasma thruster has been proposed as a long-lifetime electric thruster in which the electrodes are not in direct contact with the plasma. In this system, both generation and acceleration of a dense plasma are carried out by antennas from outside the discharge tube. Development of the helicon plasma thruster has been conducted under the Helicon Electrodeless Advanced Thruster (HEAT) project. Our helicon plasma thruster involves two important processes. First, we generate a dense source plasma using a helicon wave, with an excitation frequency between the ion and electron cyclotron frequencies, fci and fce, respectively, applied from outside the discharge tube using a radio frequency (RF) antenna. The helicon plasma source can provide a high density (~10¹⁹ m⁻³), a high ionization ratio (up to several tens of percent), and a high particle generation efficiency. Second, in order to achieve high thrust and specific impulse, we accelerate the dense plasma by the axial Lorentz force fz, which arises from the product of the induced azimuthal current jθ and the static radial magnetic field Br, expressed as fz = jθ × Br. The HEAT project has proposed several kinds of electrodeless acceleration schemes, and in our particular case, the Rotating Magnetic Field (RMF) method has been extensively studied. The RMF scheme was originally developed as a concept to sustain the Field Reversed Configuration (FRC) in magnetically confined fusion research. Here, the RMF coils are expected to generate jθ through the nonlinear effect described below. First, the rotating magnetic field Bω is generated by two pairs of RMF coils carrying AC currents with a phase difference of 90 degrees between the pairs. Due to Faraday's law, an axial electric field is induced. Second, an axial current is generated through electron-ion and electron-neutral collisions via Ohm's law. Third, the azimuthal electric field is generated by the nonlinear term, and the retarding torque is again generated by the collision effects. Then, the azimuthal current jθ is generated as jθ = -nₑ·e·r·2π·fRMF. Finally, the axial Lorentz force fz for plasma acceleration is generated. Here, jθ is proportional to nₑ and to the RMF coil current frequency fRMF when Bω fully penetrates the plasma. Our previous study achieved a 19% increase in ion velocity using an RMF coil power supply operating at 5 MHz and 50 A. In this presentation, we will show the improvement in ion velocity obtained using a lower frequency and a higher current supplied by the RMF power supply. In conclusion, helicon high-density plasma production and electromagnetic acceleration by the RMF scheme under the electrodeless concept have been successfully demonstrated.
Keywords: electric propulsion, electrodeless thruster, helicon plasma, rotating magnetic field
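The acceleration chain quoted in this abstract can be summarized in two relations; the following is a sketch restating only what the abstract gives, with no numerical values assumed beyond the symbols it defines.

```latex
% Azimuthal current driven by the RMF in the fully penetrated case:
% proportional to electron density n_e and RMF frequency f_RMF.
\[
  j_\theta = - n_e \, e \, r \cdot 2\pi f_{\mathrm{RMF}}
\]
% Axial Lorentz force density from the induced azimuthal current and the
% static radial magnetic field:
\[
  f_z = j_\theta \times B_r
\]
% Hence, for a fixed B_r, the thrust-producing force density scales with
% n_e and f_RMF (or with the RMF coil current needed to keep B_omega fully
% penetrated), which is why the study varies the coil frequency and current.
```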
Procedia PDF Downloads 260
265 Knowledge, Attitude, and Practices of Nurses on the Pain Assessment and Management in Level 3 Hospitals in Manila
Authors: Florence Roselle Adalin, Misha Louise Delariarte, Fabbette Laire Lagas, Sarah Emanuelle Mejia, Lika Mizukoshi, Irish Paullen Palomeno, Gibrianne Alistaire Ramos, Danica Pauline Ramos, Josefina Tuazon, Jo Leah Flores
Abstract:
Pain, an often missed and undertreated symptom, affects the quality of life of individuals. Nurses are key players in providing effective pain management to decrease the morbidity and mortality of patients in pain. Nurses' knowledge of and attitude toward pain greatly affect their ability to assess and manage it. The Pain Society of the Philippines recognized the inadequacy and inaccessibility of data on the knowledge, skills, and attitude of nurses on pain management in the country. This study may be the first of its kind in the country, giving it the potential to contribute greatly to nursing education and practice by providing valuable baseline data. Objectives: This study aims to describe the level of knowledge and attitude, and the current practices, of nurses on pain assessment and management, and to determine the relationship of nurses' knowledge and attitude with years of experience, training on pain management, and clinical area of practice. Methodology: A survey research design was employed. Four hospitals were selected through purposive sampling. A total of 235 Medical-Surgical Unit and Intensive Care Unit (ICU) nurses participated in the study. The tool used was a combination of a demographic survey, the Nurses' Knowledge and Attitude Survey Regarding Pain (NKASRP), and the Acute Pain Evidence Based Practice Questionnaire (APEBPQ), with self-report questions on non-pharmacologic pain management. The data obtained were analysed using descriptive statistics, two-sample t-tests for clinical areas and training, and Pearson product-moment correlation to identify the relationship of level of knowledge and attitude with years of experience. Results and Analysis: The mean knowledge and attitude score of the nurses was 47.14%. The majority answered 'most of the time' or 'all the time' on 84.12% of practice items on pain assessment, implementation of non-pharmacologic interventions, evaluation, and documentation. Three of 19 practice items describing morphine and opioid administration in special populations were done only 'a little of the time'. The most utilized non-pharmacologic interventions were deep breathing exercises (79.66%), massage therapy (27.54%), and ice therapy (26.69%). There was no significant relationship between knowledge scores and years of clinical experience (p = 0.05, r = -0.09). Moreover, there was not enough evidence to show a difference in nurses' knowledge and attitude scores in relation to presence of training (p = 0.41) or area (Medical-Surgical or ICU) of clinical practice (p = 0.53). Conclusion and Recommendations: Findings of the study showed that the level of knowledge and attitude of nurses on pain assessment and management is suboptimal, and that there is no relationship between nurses' knowledge and attitude and years of experience. It is recommended that further studies look into the nursing curriculum on pain education, culture-specific pain management protocols, and evidence-based practices in the country.
Keywords: knowledge and attitude, nurses, pain management, practices on pain management
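The statistical workflow described above (a Pearson correlation against years of experience and two-sample t-tests for training and clinical area) could be sketched as follows; the file name and column names are hypothetical placeholders, and only the choice of tests mirrors the abstract.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey table, one row per nurse respondent; columns are assumptions.
df = pd.read_csv("nkasrp_survey.csv")

# Pearson correlation: knowledge/attitude score vs. years of clinical experience
r, p = stats.pearsonr(df["nkasrp_score"], df["years_experience"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")  # abstract reports r = -0.09, p = 0.05

# Two-sample t-tests: scores by pain-management training and by clinical area
trained = df.loc[df["has_training"] == 1, "nkasrp_score"]
untrained = df.loc[df["has_training"] == 0, "nkasrp_score"]
t_train, p_train = stats.ttest_ind(trained, untrained, equal_var=False)

icu = df.loc[df["unit"] == "ICU", "nkasrp_score"]
medsurg = df.loc[df["unit"] == "Medical-Surgical", "nkasrp_score"]
t_area, p_area = stats.ttest_ind(icu, medsurg, equal_var=False)

print(f"training: p = {p_train:.2f}; clinical area: p = {p_area:.2f}")  # abstract: 0.41 and 0.53
```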
Procedia PDF Downloads 346
264 Processing of Flexible Dielectric Nanocomposites Using Nanocellulose and Recycled Alum Sludge for Wearable Technology Applications
Authors: D. Sun, L. Saw, A. Onyianta, D. O’Rourke, Z. Lu, C. See, C. Wilson, C. Popescu, M. Dorris
Abstract:
With the rapid development of wearable technology (e.g., smartwatches, activity trackers, and health monitoring devices), flexible dielectric materials with environmentally friendly, low-cost, and high-energy-efficiency characteristics are in increasing demand. In this work, a flexible dielectric nanocomposite was processed by incorporating two components, cellulose nanofibrils and alum sludge, in a polymer matrix. The two components were used as the reinforcement phase as well as for enhancing the dielectric properties; they were processed from waste materials that would otherwise be disposed of in landfills. Alum sludge is a by-product of the water treatment process in which aluminum sulfate is prevalently used as the primary coagulant. According to data from a project partner, Scottish Water, approximately 10,000 tons of alum sludge are generated as waste from water treatment works and landfilled every year in Scotland. The industry has been facing escalating financial and environmental pressure to develop more sustainable strategies for dealing with alum sludge wastes. In the available literature, some work on reusing alum sludge has been reported (e.g., aluminum recovery or agriculture and land reclamation). However, little work can be found on applying it to the processing of energy materials (e.g., dielectrics) for enhanced energy density and efficiency. The alum sludge was collected directly from a water treatment plant of Scottish Water and was heat-treated and refined before being used in preparing composites. Cellulose nanofibrils were derived from water hyacinth, an invasive aquatic weed that causes significant ecological issues in tropical regions. The harvested water hyacinth was dried and processed using a cost-effective method, including a chemical extraction followed by a homogenization process, in order to extract cellulose nanofibrils. The biodegradable elastomer polydimethylsiloxane (PDMS) was used as the polymer matrix, and the nanocomposites were processed by casting the raw materials in Petri dishes. The processed composites were characterized using various methods, including scanning electron microscopy (SEM), rheological analysis, thermogravimetric and X-ray diffraction analysis. The SEM results showed that cellulose nanofibrils of approximately 20 nm in diameter and 100 nm in length were obtained and that the alum sludge particles were approximately 200 µm in diameter. The TGA/DSC analysis showed that a weight loss of up to 48% occurs in the raw alum sludge and that its crystallization process starts at approximately 800°C. This observation coincides with the XRD result. Other experiments also showed that the composites exhibit comprehensive mechanical and dielectric performance. This work demonstrates a sustainable practice of reusing such waste materials in preparing flexible, lightweight and miniature dielectric materials for wearable technology applications.
Keywords: cellulose, biodegradable, sustainable, alum sludge, nanocomposite, wearable technology, dielectric
Procedia PDF Downloads 83
263 Food Safety in Wine: Removal of Ochratoxin A in Contaminated White Wine Using Commercial Fining Agents
Authors: Antònio Inês, Davide Silva, Filipa Carvalho, Luís Filipe-Riberiro, Fernando M. Nunes, Luís Abrunhosa, Fernanda Cosme
Abstract:
The presence of mycotoxins in foodstuffs is a matter of concern for food safety. Mycotoxins are toxic secondary metabolites produced by certain molds, with ochratoxin A (OTA) being one of the most relevant. Wines can also be contaminated with these toxicants. Several authors have demonstrated the presence of mycotoxins in wine, especially ochratoxin A. Its chemical structure is a dihydro-isocoumarin connected at the 7-carboxy group to a molecule of L-β-phenylalanine via an amide bond. As these toxicants can never be completely removed from the food chain, many countries have defined maximum levels in food in order to address health concerns. OTA contamination of wines may be a risk to consumer health, thus requiring treatments to achieve acceptable standards for human consumption. The maximum acceptable level of OTA in wines is 2.0 μg/kg according to Commission Regulation No. 1881/2006. Therefore, the aim of this work was to reduce OTA to safer levels using different fining agents and to assess their impact on white wine physicochemical characteristics. To evaluate their efficiency, 11 commercial fining agents (mineral, synthetic, animal and vegetable proteins) were used to explore new approaches to OTA removal from white wine. Trials (including a control without addition of a fining agent) were performed in white wine artificially supplemented with OTA (10 µg/L). OTA analyses were performed after wine fining. Wine was centrifuged at 4000 rpm for 10 min, and 1 mL of the supernatant was collected and mixed with an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v). The solid fractions obtained after fining were also centrifuged (4000 rpm, 15 min), the resulting supernatant was discarded, and the pellet was extracted with 1 mL of the above solution and 1 mL of H2O. OTA analysis was performed by HPLC with fluorescence detection. The most effective fining agent in removing OTA (80%) from white wine was a commercial formulation that contains gelatin, bentonite and activated carbon. Removals between 10-30% were obtained with potassium caseinate, yeast cell walls and pea protein. With bentonites, carboxymethylcellulose, polyvinylpolypyrrolidone and chitosan, no considerable OTA removal was verified. Subsequently, the effectiveness of seven commercial activated carbons was also evaluated and compared with the commercial formulation containing gelatin, bentonite and activated carbon. The different activated carbons were applied at the concentration recommended by the manufacturer in order to evaluate their efficiency in reducing OTA levels. Trials and OTA analyses were performed as described previously. The results showed that in white wine all activated carbons except one removed 100% of the OTA. The commercial formulation containing gelatin, bentonite and activated carbon reduced the OTA concentration by only 73%. These results may provide useful information for winemakers, namely for the selection of the most appropriate oenological product for OTA removal, reducing wine toxicity and simultaneously enhancing food safety and wine quality.
Keywords: wine, OTA removal, food safety, fining
Procedia PDF Downloads 537
262 Educational Audit and Curricular Reforms in the Arabian Context
Authors: Irum Naz
Abstract:
In the Arabian higher education context, linguistic proficiency in the English language is considered crucial for the developmental sustainability, economic growth, and stability of communities and societies. Qatar's educational reform package, through the 2030 vision, identifies the acquisition of English at K-12 as an essential survival communication tool for globalization, believing that Qatari students need better preparation to take on the responsibilities of leadership and to participate effectively in the country's surging economy. The idea of introducing Qatari students to modern curricula benchmarked to high-student-performance curricula in developed countries is one of the components of the reform design principles of the Education for a New Era project, mutually consented to and supported by the Office of Shared Services, the Communications Office, and the Supreme Education Council. In appreciation of the government's vision, the English Language Centre (ELC) at the Community College of Qatar ran an internal educational audit and conducted evaluative research to understand and appraise the value, impact, and practicality of the existing ELC language development program. This study sought to identify the types of change that could improve the quality of Foundation Program courses and the ways in which second language learners could be assisted to transition smoothly between ELC levels. Following the interpretivist paradigm and a mixed research method, the data were gathered through a bicyclic research model and a triangular design. The analyses of the data suggested that there was a need for improvement in the ELC program as a whole, particularly in terms of curriculum, student learning outcomes, and the general learning environment in the department. Key findings suggest that the target program would benefit from significant revisions, which would include narrowing the focus of the courses, providing sets of specific learning objectives, and preventing repetition between levels. Another promising finding concerned the assessment tools and processes. The data suggested that a set of standardized assessments that more closely suited the programs of study should be devised. It was also recommended that students undergo a more comprehensive placement process to ensure that they begin the program at an appropriate level and get the maximum benefit from their learning experience. Although this ties into the idea of a curriculum revamp, it was expected that students could leave the ELC having had exposure to courses in English for specific purposes. The idea of a more reliable exit assessment for students was raised frequently so that the ELC could regulate itself and ensure optimum learning outcomes. Another important recommendation was the provision of a Student Learning Center that would help students receive personalized tuition, differentiated instruction, and a self-driven and self-evaluated learning experience. In addition, it was recommended that an extra study level be added to the program to accommodate the different levels of English language proficiency represented among ELC students.
The evidence collected in the course of conducting the study suggests that significant change is needed in the structure of the ELC program, specifically regarding the curriculum, the program learning outcomes, and the learning environment in general.
Keywords: educational audit, ESL, optimum learning outcomes, Qatar's educational reforms, self-driven and self-evaluated learning experience, Student Learning Center
Procedia PDF Downloads 183
261 Drivers of Satisfaction and Dissatisfaction in Camping Tourism: A Case Study from Croatia
Authors: Darko Prebežac, Josip Mikulić, Maja Šerić, Damir Krešić
Abstract:
Camping tourism is recognized as a growing segment of the broader tourism industry, currently evolving from an inexpensive, temporary sojourn in a rural environment into a highly fragmented niche tourism sector. Trends among publicly managed campgrounds seem to be moving away from rustic campgrounds that provide only a tent pad and a fire ring toward more developed facilities that offer a range of different amenities, where campers still search for unique experiences that go beyond the opportunity to experience nature and social interaction. In addition, while camping styles and options have changed significantly over recent years, coastal camping in particular has become valorized, as it is regarded with a heightened sense of nostalgia. Alongside this growing interest in camping tourism, a demand for quality servicing infrastructure has emerged in order to satisfy the wide variety of needs, wants, and expectations of an increasingly demanding traveling public. However, camping activity in general, and the quality of the camping experience and campers' satisfaction in particular, remain an under-researched area of the tourism and consumption behavior literature. In this line, very few studies have addressed the issue of quality product/service provision in satisfying nature-based tourists and in driving their future behavior with respect to potential re-visitation and recommendation intention. The present study thus aims to investigate the drivers of positive and negative campsite experiences using the case of Croatia. Due to its well-preserved nature and indented coastline, camping tourism has a long tradition in Croatia and represents one of its most important and most developed tourism products. During the last decade the number of tourist overnights in Croatian camps increased by 26%, amounting to 16.5 million in 2014. Moreover, according to Eurostat, the market share of campsites in the EU is around 14%, indicating that the market share of Croatian campsites is almost double the EU average. Currently, there are a total of 250 camps in Croatia with approximately 75.8 thousand accommodation units. It is further noteworthy that Croatian camps have higher average occupancy rates and a higher average length of stay compared to the national average of all types of accommodation. In order to explore the main drivers of positive and negative campsite experiences, this study uses principal components analysis (PCA) and an impact-asymmetry analysis (IAA). Using the PCA, the main dimensions of the campsite experience are first extracted in an exploratory manner. Using the IAA, the extracted factors are investigated for their potential to create customer delight and/or frustration. The results provide valuable insight to both researchers and practitioners regarding the understanding of campsite satisfaction.
Keywords: camping tourism, campsite, impact-asymmetry analysis, satisfaction
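A minimal sketch of the first analytical step (an exploratory PCA on campsite-attribute ratings) is given below; the file name, column names, and the number of retained components are placeholders, and the impact-asymmetry step is only outlined in the comments.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical survey table: one row per camper, attribute ratings plus overall satisfaction.
ratings = pd.read_csv("campsite_ratings.csv")
attribute_cols = [c for c in ratings.columns if c != "overall_satisfaction"]

# Step 1 (PCA): extract the main dimensions of the campsite experience in an exploratory manner.
X = StandardScaler().fit_transform(ratings[attribute_cols])
pca = PCA(n_components=3)            # number of retained dimensions is an assumption
factor_scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)

# Step 2 (IAA, outline only): recode each factor score into "low" and "high" dummy variables
# and regress overall satisfaction on those dummies; comparing the penalty (low) and reward
# (high) coefficients indicates each factor's potential to create frustration or delight.
```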
Procedia PDF Downloads 186
260 Decision Making on Smart Energy Grid Development for Availability and Security of Supply Achievement Using Reliability Merits
Authors: F. Iberraken, R. Medjoudj, D. Aissani
Abstract:
The development of the smart grid concept is built around two separate definitions, namely the European one, oriented towards sustainable development, and the American one, oriented towards reliability and security of supply. In this paper, we have investigated reliability merits enabling decision-makers to provide a high quality of service. It is based on system behavior, using interruption and failure modeling and forecasting on the one hand, and on the contribution of information and communication technologies (ICT) to mitigate catastrophic events such as blackouts on the other hand. It was found that this concept has been adopted by developing and emerging countries for short- and medium-term planning, followed by the sustainability concept in long-term planning. This work highlights reliability merits such as benefits, opportunities, costs and risks (BOCR), considered as consistent units for measuring power customer satisfaction. From the decision-making point of view, we have used the analytic hierarchy process (AHP) to achieve customer satisfaction, based on the reliability merits and the contribution of the available energy resources. Certainly, fossil and nuclear resources currently dominate energy production, but great advances have already been made toward cleaner ones. It was demonstrated that these resources are not only environmentally but also economically and socially sustainable. The paper is organized as follows: Section one is devoted to the introduction, where an implicit review of smart grid development is given for the two main concepts (for the USA and European countries). The AHP method and the BOCR development of reliability merits against power customer satisfaction are presented in section two. The benefits were expressed by the high level of availability, the applicability of maintenance actions, and power quality. Opportunities were highlighted by the implementation of ICT in data transfer and processing, the mastering of peak demand control, the decentralization of production, and power system management under fault conditions. Costs were evaluated using cost-benefit analysis, including the investment expenditures in network security, as the grid becomes a target for hackers and terrorists, and the profits of operating as decentralized systems with reduced energy not supplied, thanks to the availability of storage units based on renewable resources and to current power lines (CPL) enabling the power dispatcher to manage load shedding optimally. For risks, we have raised the question of citizens' willingness to contribute financially to the system and to the utility restructuring: what is the degree of their agreement with the guarantees proposed by the managers regarding information integrity? From a technical point of view, do they have sufficient information and knowledge to adopt a smart home and a smart system? In section three, an application of the AHP method is made to achieve power customer satisfaction based on the main energy resources as alternatives, using knowledge from a country that is well advanced in its energy transition. Results and discussions are given in section four. We conclude that the choice of a given resource depends on the attitude of the decision-maker (prudent, optimistic or pessimistic), and that the status quo is neither sustainable nor satisfactory.
Keywords: reliability, AHP, renewable energy resources, smart grids
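As an illustration of the AHP step described above, the sketch below computes a priority vector and a Saaty consistency ratio from a pairwise-comparison matrix over the four reliability merits (benefits, opportunities, costs, risks); the judgment values in the matrix are placeholders, not the authors' elicited judgments.

```python
import numpy as np

# Illustrative pairwise-comparison matrix over (Benefits, Opportunities, Costs, Risks);
# the 1-9 Saaty-scale judgments below are placeholders.
A = np.array([
    [1.0, 3.0, 5.0, 3.0],
    [1/3, 1.0, 3.0, 1.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/3, 1.0, 2.0, 1.0],
])

# Priority vector = normalized principal right eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights = weights / weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1); random index RI = 0.90 for n = 4.
n = A.shape[0]
lambda_max = eigvals.real[k]
consistency_ratio = ((lambda_max - n) / (n - 1)) / 0.90
print("priorities:", np.round(weights, 3))
print("consistency ratio:", round(consistency_ratio, 3))  # CR < 0.10 is conventionally acceptable
```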
Procedia PDF Downloads 441
259 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit
Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar
Abstract:
Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute to a large extent to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary refinery process for gasoline and diesel production. The CO-combustion promoter additive and the gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, alongside the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to have a high-density microsphere catalyst support for retention and high activity of the active metals, as these catalyst additives are used in low concentrations compared to the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO-combustion promoter application. Performance evaluation of the additive was conducted under simulated regenerator conditions and shows a CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for the generation of active crystalline spinels of Zn, Mg, and Cu with aluminium oxides as an additive. Characterization and micro-activity testing, using a heavy combined hydrocarbon feedstock at FCC unit conditions, were carried out to evaluate gasoline sulphur reduction activity. These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization and alkylation functionalities are established to rank GSR additives for their activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shifting to other liquid products such as heavy naphtha, light cycle oil, and clarified oil was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of the gasoline or its olefins content.
Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction
Procedia PDF Downloads 95
258 Neoliberalism and Environmental Justice: A Critical Examination of Corporate Greenwashing
Authors: Arnav M. Raval
Abstract:
This paper critically examines the neoliberal economic model and its role in enabling corporate greenwashing, a practice in which corporations deceptively market themselves as environmentally responsible while continuing harmful environmental practices. Through a rigorous focus on the neoliberal emphasis on free markets, deregulation, and minimal government intervention, this paper explores how these policies have set the stage for corporations to externalize environmental costs and engage in superficial sustainability initiatives. Within this framework, companies often bypass meaningful environmental reform, opting for strategies that enhance their public image without addressing their actual environmental impacts. The paper also draws on the works of critical theorists Theodor Adorno, Max Horkheimer, and Herbert Marcuse, particularly their critiques of capitalist society and its tendency to commodify social values. This paper argues that neoliberal capitalism has commodified environmentalism, transforming genuine ecological responsibility into a marketable product. Through corporate social responsibility initiatives, corporations have created the illusion of sustainability while masking deeper environmental harm. Under neoliberalism, these initiatives often serve as public relations tools rather than genuine commitments to environmental justice and sustainability. This commodification has become particularly dangerous because it manipulates consumer perceptions and diverts attention away from the structural causes of environmental degradation. The analysis also examines how greenwashing practices have disproportionately affected marginalized communities, particularly in the global South, where environmental costs are often externalized. As corporations promote their “sustainability” in wealthier markets, these marginalized communities bear the brunt of their pollution, resource depletion, and other forms of environmental degradation. This dynamic underscores the inherent injustice within neoliberal environmental policies: those most vulnerable to environmental risks are often neglected, while companies reap the benefits of corporate sustainability efforts at their expense. Finally, this paper calls for a fundamental transition away from neoliberal market-driven solutions, which prioritize corporate profit over genuine ecological reform. It advocates for stronger regulatory frameworks, transparent third-party certifications, and a more collective approach to environmental governance. In order to ensure genuine corporate accountability, governments and institutions must move beyond superficial green initiatives and market-based solutions, shifting toward policies that enforce real environmental responsibility and prioritize environmental justice for all communities. Through its critique of the neoliberal system and its commodification of environmentalism, this paper highlights the urgent need to rethink how environmental responsibility is defined and enacted in the corporate world. Without systemic change, greenwashing will continue to undermine both ecological sustainability and social justice, leaving the most vulnerable populations to suffer the consequences.
Keywords: critical theory, environmental justice, greenwashing, neoliberalism
Procedia PDF Downloads 16
257 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy
Authors: Ye Zhang, Chuanjun Liu
Abstract:
In-service aircraft are exposed to different types of threats, e.g., bird strikes, ground vehicle impacts, runway debris, or even lightning strikes. To satisfy aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, whether metallic or composite. Exposure to low-velocity impacts may produce very serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surfaces of composite structures. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical and proper approximation for establishing the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of the impact threat and the realistic impact energy cut-off levels are essential for the certification of aircraft composite structures. A new survey of the impact threat to civil aircraft in service has recently been carried out, based on field records concerning around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damage instances caused by low-velocity impact events were screened out from more than 8,000 records, including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location on the aircraft structure and on the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and the maintenance operations are independent of the materials of the structures. The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters for describing the statistical distribution of the impact threat. With the impact damage events from the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. Concerning the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-calculate the impact energy from the visible damage characteristics. The relationship between visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to have a log-linear relationship with the impact energy. As the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which indicates that low-velocity impact events are similarly likely to Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated and valued at 34 J. In summary, a new survey of field records of civil aircraft was recently carried out to investigate the probabilistic distribution of the impact threat. Based on the data, the two probabilities, Po and Pe, were obtained. Considering a conservative assumption for Pa, the cut-off energy level for the realistic impact energy has been determined, which provides potential applicability in the damage tolerance certification of future civil aircraft.
Keywords: composite structure, damage tolerance, impact threat, probabilistic
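The arithmetic behind the cut-off energy can be restated in a few lines. The sketch below reproduces the Pa = Po × Pe reasoning from the abstract; the log-linear coefficients are placeholders chosen only to illustrate the back-solution (the actual coefficients come from the survey and the ABAQUS dent-depth model), so the printed energy is not the study's fitted value.

```python
import numpy as np

P_o = 2.1e-4        # probability of an impact damage occurrence per flight hour (survey)
P_a = 1.0e-5        # assumed combined probability, comparable to a Limit Load event
P_e_required = P_a / P_o   # exceedance probability the cut-off energy must correspond to

# Log-linear exceedance model log10(Pe) = a + b * E (E in joules); a and b are placeholders,
# not the coefficients fitted in the study.
a, b = 0.0, -0.039
E_cutoff = (np.log10(P_e_required) - a) / b
print(f"required Pe = {P_e_required:.3f}, illustrative cut-off energy = {E_cutoff:.0f} J")
```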
Procedia PDF Downloads 307
256 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study
Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod
Abstract:
Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while keeping the dose to the vital organs around it to a minimum. The use of abutting fields is common in radiotherapy. The main problem in using abutting fields is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution to this problem is the use of a spoiler to broaden the penumbra and make the dose uniform in the junction region. The goal of this research was to compare the effects of titanium and aluminum as spoilers for dose uniformity achievement in the junction region of oblique electron fields using Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power and thickness of the spoiler, and on the angle between the two fields. Materials and Methods: In this study, a Monte Carlo model of a Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm² field size at SSD = 100 cm in a 30×30×45 cm³ water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform and a Semi-Flex Chamber-31010 with a sensitive volume of 0.125 cm³ (PTW, Freiburg, Germany) were used for dose distribution measurements. The electron field size was 10×10 cm² and SSD = 100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Simulation of the spoilers (using the SLAB component module), placed at the end of the electron applicator, was done using the previously validated phase space file for the 5 MeV nominal energy and 10×10 cm² field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of the dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region, in which the difference was 2%. The difference between the calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium and chromium spoilers, with thicknesses equivalent to 5 mm of PMMA, on dose uniformity in abutting normal electron fields was evaluated. Comparing the R90 and uniformity index of the different materials, aluminum was chosen as the optimum spoiler; titanium gave the maximum surface dose. Thus, aluminum and titanium were chosen for dose uniformity achievement in oblique electron fields. Using the optimum beam spoiler, the junction dose decreased from 160% to 110% for 15-degree, from 180% to 120% for 30-degree, from 160% to 120% for 45-degree, and from 180% to 100% for 60-degree oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, 180% to 120% for 30 degrees, 160% to 120% for 45 degrees, and 180% to 110% for 60 degrees.
In addition, the penumbra width at the surface for 15 degrees was 10 mm without a spoiler and increased to 15.5 mm with the titanium spoiler; for 30 degrees, from 9 mm to 15 mm; for 45 degrees, from 4 mm to 6 mm; and for 60 degrees, from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased, and dose homogeneity improved at the junction of the abutting electron fields. The dose at the junction region of the abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45° and 60° was decreased by about 40%, 60%, 40% and 70%, respectively, for titanium and by about 50%, 60%, 40% and 80% for aluminum. Despite the significant decrease in maximum dose obtained with the titanium spoiler, the dose in the junction region could not be reduced below 110%.
Keywords: abutting fields, electron beam, radiation therapy, spoilers
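The mechanism the abstract relies on (a broader penumbra lowering the hot spot where two abutting fields overlap) can be illustrated with a toy one-dimensional model; this is not the Monte Carlo geometry used in the study, the error-function edge and the 6 mm overlap are assumptions, and only the two penumbra widths (about 10 mm bare and 15.5 mm with the titanium spoiler) are taken from the abstract.

```python
import numpy as np
from scipy.special import erf

def edge_dose(x, edge_mm, penumbra_mm, sign):
    """Relative dose across one field edge, modelled as an error-function penumbra."""
    return 0.5 * (1.0 + erf(sign * (edge_mm - x) / (penumbra_mm / 2.0)))

x = np.linspace(-40.0, 40.0, 801)   # mm across the junction, centred at x = 0
overlap = 6.0                       # mm of field overlap at the junction (illustrative value)

for penumbra in (10.0, 15.5):       # surface penumbra without / with the titanium spoiler
    field_a = edge_dose(x, +overlap / 2, penumbra, sign=+1)   # field A covers x < +overlap/2
    field_b = edge_dose(x, -overlap / 2, penumbra, sign=-1)   # field B covers x > -overlap/2
    combined = field_a + field_b
    print(f"penumbra {penumbra:4.1f} mm -> junction maximum ~ {100 * combined.max():.0f}% of the open-field dose")
```

In this toy model the wider penumbra lowers the summed junction maximum, which is the qualitative effect the spoilers are used for; the actual magnitudes reported above come from the EGSnrc simulations and measurements.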
Procedia PDF Downloads 175
255 Plasma Levels of Collagen Triple Helix Repeat Containing 1 (CTHRC1) as a Potential Biomarker in Interstitial Lung Disease
Authors: Rijnbout-St.James Willem, Lindner Volkhard, Scholand Mary Beth, Ashton M. Tillett, Di Gennaro Michael Jude, Smith Silvia Enrica
Abstract:
Introduction: Fibrosing lung diseases are characterized by changes in the lung interstitium and are classified based on etiology: 1) environmental/exposure-related, 2) autoimmune-related, 3) sarcoidosis, 4) interstitial pneumonia, and 5) idiopathic. Among the idiopathic forms of interstitial lung disease (ILD), idiopathic pulmonary fibrosis (IPF) is the most severe. The pathogenesis of IPF is characterized by an increased presence of proinflammatory mediators, resulting in alveolar injury, where injury to the alveolar epithelium precipitates an increase in collagen deposition, subsequently thickening the alveolar septum and decreasing gas exchange. Identifying biomarkers implicated in the pathogenesis of lung fibrosis is key to developing new therapies and improving the efficacy of existing therapies. Transforming growth factor-beta (TGF-B1), a mediator of tissue repair associated with WNT5A signaling, is partially responsible for fibroblast proliferation in ILD and is the target of pirfenidone, one of the antifibrotic therapies used for patients with IPF. Canonical TGF-B signaling is mediated by the proteins SMAD 2/3, which are, in turn, indirectly regulated by Collagen Triple Helix Repeat Containing 1 (CTHRC1). In this study, we tested the following hypotheses: 1) CTHRC1 is more elevated in the ILD cohort than in unaffected controls, and 2) CTHRC1 is differently expressed among ILD types. Material and Methods: CTHRC1 levels were measured by ELISA in 171 plasma samples from the deidentified University of Utah ILD cohort. The data represent a cohort of 131 ILD-affected participants and 40 unaffected controls. CTHRC1 samples were categorized by a pulmonologist based on affectation status and disease subtype: IPF (n = 45), sarcoidosis (n = 4), nonspecific interstitial pneumonia (n = 16), hypersensitivity pneumonitis (n = 7), interstitial pneumonia (n = 13), autoimmune (n = 15), other ILD (a category that includes undifferentiated ILD diagnoses; n = 31), and unaffected controls (n = 40). We conducted a single-factor ANOVA of plasma CTHRC1 levels to test whether the variation in CTHRC1 among affected and non-affected participants is statistically significant. In-silico analysis was performed with Ingenuity Pathway Analysis® to characterize the role of CTHRC1 in the pathway of lung fibrosis. Results: Statistical analyses of CTHRC1 in plasma samples indicate that the average CTHRC1 level is significantly higher in ILD-affected participants than in controls, with autoimmune ILD being higher than other ILD types, thus supporting our hypotheses. In-silico analyses show that CTHRC1 indirectly activates and phosphorylates SMAD3, which in turn cross-regulates TGF-B1. CTHRC1 may also regulate the expression and transcription of TGF-B1 via WNT5A and its regulatory relationship with CTNNB1. Conclusion: In-silico pathway analyses demonstrate that CTHRC1 may be an important biomarker in ILD. Analysis of plasma samples indicates that CTHRC1 expression is positively associated with ILD affectation, with autoimmune ILD having the highest average CTHRC1 values. While characterizing CTHRC1 levels in plasma can help to differentiate among ILD types and predict response to pirfenidone, the extent to which the plasma CTHRC1 level is a function of ILD severity or chronicity is unknown.
Keywords: interstitial lung disease, CTHRC1, idiopathic pulmonary fibrosis, pathway analyses
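A minimal sketch of the single-factor ANOVA described above is given below; the file name and column names are placeholders, and the group labels simply mirror the categories listed in the abstract.

```python
import pandas as pd
from scipy import stats

# Hypothetical ELISA table: one row per participant with a diagnostic group label
# and a plasma CTHRC1 concentration; column names are assumptions.
df = pd.read_csv("cthrc1_plasma.csv")   # columns: "group", "cthrc1"

# Single-factor ANOVA across all diagnostic groups (IPF, sarcoidosis, NSIP, HP,
# interstitial pneumonia, autoimmune ILD, other ILD, unaffected controls).
samples = [g["cthrc1"].to_numpy() for _, g in df.groupby("group")]
f_stat, p_value = stats.f_oneway(*samples)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Contrast suggested by the first hypothesis: affected participants vs. unaffected controls.
affected = df.loc[df["group"] != "control", "cthrc1"]
controls = df.loc[df["group"] == "control", "cthrc1"]
print(f"mean CTHRC1: affected = {affected.mean():.1f}, controls = {controls.mean():.1f}")
```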
Procedia PDF Downloads 190
254 The Effectiveness of Using Dramatic Conventions as the Teaching Strategy on Self-Efficacy for Children With Autism Spectrum Disorder
Authors: Tso Sheng-Yang, Wang Tien-Ni
Abstract:
Introduction and Purpose: Previous researchers have documented that children with ASD (Autism Spectrum Disorder) tend to escape from internal and external private events when they face difficult conditions that they cannot control or do not like. In particular, when children with ASD need to learn challenging tasks, such as the Chinese language, their inappropriate behaviors become readily apparent. Recently, researchers have applied positive behavior support strategies for children with ASD to enhance their self-efficacy and thereby reduce their adverse behaviors. Thus, the purpose of this research was to design a series of lessons based on art therapy and to evaluate their effectiveness on the child's self-efficacy. Method: This research was a single-case design study that recruited a high school boy with ASD. The study can be separated into three conditions. First, in the baseline condition, before and after each class, the researcher collected the participant's self-efficacy ratings every session. In the intervention condition, the researcher used dramatic conventions to teach the child the Chinese language twice a week. When the data were stable across three measurements, the study entered the maintenance condition. In the maintenance condition, the researcher only collected self-efficacy scores, without providing other interventions, five times a month to represent the effectiveness of maintenance. The timing and frequency of data collection across the three conditions were identical. Concerning art therapy, the common approach (e.g., music, drama, or painting) is to use an art medium as the independent variable. Due to the visual cues of the art medium, children with ASD can more easily gain joint attention with teachers. In addition, children with ASD have difficulties in understanding abstract material. Thus, using dramatic conventions helps children with ASD construct the environment and understand the context of Classical Chinese. Through actual enactment, it can help them understand the context and construct prior knowledge. Results: Based on the 10-point Likert scale used in this research, we obtained the following results. (a) In the baseline condition, the average self-efficacy score was 1.12 points, ranging from 1 to 2 points, and the level change was 0 points. (b) In the intervention condition, the average self-efficacy score was 7.66 points, ranging from 7 to 9 points, and the level change was 1 point. (c) In the maintenance condition, the average self-efficacy score was 6.66 points, ranging from 6 to 7 points, and the level change was 1 point. Concerning immediacy of change, the difference between the baseline and intervention conditions was 5 points. No overlap was found between these two conditions. Conclusion: According to the results, we find that using dramatic conventions as a teaching strategy for children with ASD is effective. The results show that the self-efficacy score increases immediately when the dramatic conventions commence. Thus, we suggest that teachers can use this approach, adjusted according to the student's traits, to teach children with ASD difficult tasks.
Keywords: dramatic conventions, autism spectrum disorder, self-efficacy, teaching strategy
Procedia PDF Downloads 82
253 The Theotokos of the Messina Missal as a Byzantine Icon in Norman Sicily: A Study on Patronage and Devotion
Authors: Jesus Rodriguez Viejo
Abstract:
The aim of this paper is to study cross-cultural interactions between the West and Byzantium, in the fields of art and religion, by analyzing the decoration of one luxury manuscript. The Spanish National Library is home to one of the most extraordinary examples of the illuminated manuscript production of Norman Sicily, the Messina Missal. Dating from the late twelfth century, this liturgical book was the result of the intense activity of artistic patronage of an Englishman, Richard Palmer. Appointed bishop of the Sicilian city in the second half of the century, Palmer set up a painting workshop attached to his cathedral. The illuminated manuscripts produced there combine a clear Byzantine iconographic language with a myriad of elements imported from France, such as a large number of decorated initials. The most remarkable depiction contained in the Missal is that of the Theotokos (fol. 80r). Its appearance immediately recalls portable Byzantine icons of the Mother of God in South Italy and Byzantium and implies the intervention of an artist familiar with icon painting. The richness of this image is clear proof of the prestige that Byzantine art enjoyed on the island after the Norman takeover. The production of the school of Messina under Richard Palmer could be considered a counterpart, in the field of manuscript illumination, of the court art of the Sicilian kings in Palermo and the impressive commissions for the cathedrals of Monreale and Cefalù. However, the ethnic composition of Palmer's workshop has never been analyzed, and we therefore intend to shed light on the permanent presence of Greek-speaking artists in Norman Messina. The east of the island was the last stronghold of the Greeks, and soon after the Norman conquest the previous exchanges between the cities of this territory and Byzantium resumed, mainly by way of trade. Palmer was not a Norman statesman but a churchman, and his love for religion and culture prevailed over the wars and struggles for power of the Sicilian kingdom in the central Mediterranean. On the other hand, the representation of the Theotokos can prove that Eastern devotional approaches to images were still common in the east of the island more than a century after the collapse of Byzantine rule. Local Norman lords repeatedly founded churches devoted to Greek saints, and medieval Greek-speaking authors were widely copied in Sicilian scriptoria. The Madrid Missal and its Theotokos are doubtless the product of Western initiative, but in a land culturally dominated by Byzantium. Westerners such as Palmer and his circle could have been immersed in this Hellenophile culture and therefore naturally predisposed to perform prayers and rituals, in both public and private contexts, linked to ideas and practices of Greek origin, such as the concept of the icon.
Keywords: history of art, byzantine art, manuscripts, norman sicily, messina, patronage, devotion, iconography
Procedia PDF Downloads 348
252 The Pro-Reparative Effect of Vasoactive Intestinal Peptide in Chronic Inflammatory Osteolytic Periapical Lesions
Authors: Michelle C. S. Azevedo, Priscila M. Colavite, Carolina F. Francisconi, Ana P. Trombone, Gustavo P. Garlet
Abstract:
VIP (vasoactive intestinal peptide) is known as a potential protective factor in view of its marked immunosuppressive properties. In this work, we investigated a possible association of VIP with the clinical status of experimental periapical granulomas, and its association with the expression of markers potentially involved in periapical lesion pathogenesis. C57BL/6WT mice were treated or not with recombinant VIP. Animals with active/progressive (N=40) or inactive/stable (N=70) periapical granulomas and controls (N=50) were anesthetized, and the right mandibular first molar was surgically opened, allowing exposure of the dental pulp. Endodontic pathogenic bacterial strains were inoculated: Porphyromonas gingivalis, Prevotella nigrescens, Actinomyces viscosus, and Fusobacterium nucleatum subsp. polymorphum. The cavity was not sealed after bacterial inoculation. During lesion development, animals were treated or not with recombinant VIP from 3 days post infection. Animals were killed after 3, 7, 14, and 21 days of infection, and the jaws were dissected. Extraction of total RNA from periodontal tissues was performed and the integrity of the samples was checked. qPCR reactions using TaqMan chemistry with inventoried primers were performed on ViiA7 equipment. The results, depicted as relative levels of gene expression, were calculated in reference to GAPDH and β-actin expression. Periodontal tissues from upper molars were harvested and incubated in supplemented RPMI, followed by processing with 0.05% DNase. Cell viability and counting were determined by Neubauer chamber analysis. For flow cytometry analysis, after cell counting, the cells were stained with the optimal dilution of each antibody: (PE)-conjugated and (FITC)-conjugated antibodies against CD4, CD25, FOXP3, IL-4, IL-17 and IFN-γ, as well as their respective isotype controls. Cells were analyzed by FACScan and CellQuest software. Results are presented as the number of cells in the periodontal tissues or the number of positive cells for each marker in the CD4+FOXp3+, CD4+IL-4+, CD4+IFNg+ and CD4+IL-17+ subpopulations. mRNA levels were measured by qPCR. VIP expression predominated in inactive lesions, as did part of the clusters of cytokine/Th markers identified as protective factors, and a negative correlation between VIP expression and lesion evolution was observed. A quantitative analysis of IL1β, IL17, TNF, IFN, MMP2, RANKL, OPG, IL10, TGFβ, CTLA4, COL5A1, CTGF, CXCL11, FGF7, ITGA4, ITGA5, SERP1 and VTN expression was performed in experimental periapical lesions treated with VIP, 7 and 14 days after lesion induction, and in healthy animals. After 7 days, all targets presented a significant increase in comparison to untreated animals. Regarding migration kinetics, the profile of chemokine receptor expression of CD4+ T cell subsets and the phenotypic analysis of Treg, Th1, Th2 and Th17 cells during the course of experimental periodontal disease were evaluated by flow cytometry and depicted as the number of positive cells for each marker. CD4+IFNg+ and CD4+FOXp3+ cell migration was significantly increased 7 days post VIP treatment, CD4+IL17+ cell migration was significantly increased 7 and 14 days post VIP treatment, and CD4+IL4+ cell migration was significantly increased 14 and 21 days post VIP treatment compared to the control group. In conclusion, our experimental data support VIP involvement in determining the inactivity of periapical lesions.
Financial support: FAPESP #2015/25618-2.
Keywords: chronic inflammation, cytokines, osteolytic lesions, VIP (Vasoactive Intestinal Peptide)
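The relative-expression calculation mentioned above (qPCR results referenced to GAPDH and β-actin) is commonly done with a comparative-Ct approach; the sketch below is an assumption in that sense, since the abstract names the reference genes but not the exact formula, and the file and column names are placeholders.

```python
import pandas as pd

# Hypothetical Ct table: one row per sample with Ct values for the target gene and the
# two reference genes; column and group names are placeholders.
ct = pd.read_csv("qpcr_ct_values.csv")   # columns: group, target_ct, gapdh_ct, actb_ct

# Average the two reference-gene Ct values (equivalent to a geometric mean of their
# expression), compute delta-Ct, then express each sample relative to the mean of the
# control group using the 2^-ddCt convention (an assumption, not stated in the abstract).
ct["ref_ct"] = ct[["gapdh_ct", "actb_ct"]].mean(axis=1)
ct["d_ct"] = ct["target_ct"] - ct["ref_ct"]
control_mean_d_ct = ct.loc[ct["group"] == "control", "d_ct"].mean()
ct["relative_expression"] = 2.0 ** -(ct["d_ct"] - control_mean_d_ct)

print(ct.groupby("group")["relative_expression"].mean())
```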
Procedia PDF Downloads 191
251 Deep Learning Based Polarimetric SAR Image Restoration
Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo ferraioli
Abstract:
In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth's surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data will therefore allow full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimented reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses Deep Learning solutions to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods.
The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
Keywords: SAR image, deep learning, convolutional neural network, deep neural network, SAR polarimetry
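A minimal sketch of the kind of CNN-based augmentation described above is shown below; the layer count, channel sizes, and the composite loss terms are placeholders (the abstract does not disclose the actual architecture or loss), so this only illustrates the general idea of mapping hybrid/dual-pol channels to full-pol channels under a multi-term objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolReconstructionCNN(nn.Module):
    """Toy network: maps hybrid-pol input channels to full-pol output channels."""
    def __init__(self, in_ch=4, out_ch=6, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def composite_loss(pred, target, alpha=1.0, beta=0.1):
    """Placeholder multi-term loss: pixel fidelity plus a term on a derived polarimetric
    feature (here total power across channels, purely for illustration)."""
    pixel_term = F.l1_loss(pred, target)
    feature_term = F.l1_loss(pred.abs().sum(dim=1), target.abs().sum(dim=1))
    return alpha * pixel_term + beta * feature_term

model = PolReconstructionCNN()
x = torch.randn(2, 4, 64, 64)   # hybrid/dual-pol patches (e.g., real+imag of two channels)
y = torch.randn(2, 6, 64, 64)   # full-pol reference patches (real+imag of three channels)
loss = composite_loss(model(x), y)
loss.backward()
print(float(loss))
```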
Procedia PDF Downloads 89
250 Case Study Hyperbaric Oxygen Therapy for Idiopathic Sudden Sensorineural Hearing Loss
Authors: Magdy I. A. Alshourbagi
Abstract:
Background: The National Institute for Deafness and Communication Disorders defines idiopathic sudden sensorineural hearing loss as the idiopathic loss of hearing of at least 30 dB across 3 contiguous frequencies occurring within 3 days.The most common clinical presentation involves an individual experiencing a sudden unilateral hearing loss, tinnitus, a sensation of aural fullness and vertigo. The etiologies and pathologies of ISSNHL remain unclear. Several pathophysiological mechanisms have been described including: vascular occlusion, viral infections, labyrinthine membrane breaks, immune associated disease, abnormal cochlear stress response, trauma, abnormal tissue growth, toxins, ototoxic drugs and cochlear membrane damage. The rationale for the use of hyperbaric oxygen to treat ISSHL is supported by an understanding of the high metabolism and paucity of vascularity to the cochlea. The cochlea and the structures within it require a high oxygen supply. The direct vascular supply, particularly to the organ of Corti, is minimal. Tissue oxygenation to the structures within the cochlea occurs via oxygen diffusion from cochlear capillary networks into the perilymph and the cortilymph. . The perilymph is the primary oxygen source for these intracochlear structures. Unfortunately, perilymph oxygen tension is decreased significantly in patients with ISSHL. To achieve a consistent rise of perilymph oxygen content, the arterial-perilymphatic oxygen concentration difference must be extremely high. This can be restored with hyperbaric oxygen therapy. Subject and Methods: A 37 year old man was presented at the clinic with a five days history of muffled hearing and tinnitus of the right ear. Symptoms were sudden onset, with no associated pain, dizziness or otorrhea and no past history of hearing problems or medical illness. Family history was negative. Physical examination was normal. Otologic examination revealed normal tympanic membranes bilaterally, with no evidence of cerumen or middle ear effusion. Tuning fork examination showed positive Rinne test bilaterally but with lateralization of Weber test to the left side, indicating right ear sensorineural hearing loss. Audiometric analysis confirmed sensorineural hearing loss across all frequencies of about 70- dB in the right ear. Routine lab work were all within normal limits. Clinical diagnosis of idiopathic sudden sensorineural hearing loss of the right ear was made and the patient began a medical treatment (corticosteroid, vasodilator and HBO therapy). The recommended treatment profile consists of 100% O2 at 2.5 atmospheres absolute for 60 minutes daily (six days per week) for 40 treatments .The optimal number of HBOT treatments will vary, depending on the severity and duration of symptomatology and the response to treatment. Results: As HBOT is not yet a standard for idiopathic sudden sensorineural hearing loss, it was introduced to this patient as an adjuvant therapy. The HBOT program was scheduled for 40 sessions, we used a 12-seat multi place chamber for the HBOT, which was started at day seven after the hearing loss onset. After the tenth session of HBOT, improvement of both hearing (by audiogram) and tinnitus was obtained in the affected ear (right). Conclusions: In conclusion, HBOT may be used for idiopathic sudden sensorineural hearing loss as an adjuvant therapy. It may promote oxygenation to the inner ear apparatus and revive hearing ability. 
Patients who fail to respond to oral and intratympanic steroids may benefit from this treatment. Further investigation is warranted, including animal studies to understand the molecular and histopathological aspects of HBOT, and randomized controlled clinical studies.Keywords: idiopathic sudden sensorineural hearing loss (issnhl), hyperbaric oxygen therapy (hbot), the decibel (db), oxygen (o2)
Procedia PDF Downloads 431249 We Are the Earth That Defends Itself: An Exploration of Discursive Practices of Les Soulèvements De La Terre
Authors: Sophie Del Fa, Loup Ducol
Abstract:
This presentation will focus on the discursive practices of Les Soulèvements de la Terre (hereafter SdlT), a French environmentalist group mobilized against agribusiness. More specifically, we will use, as a case study, the violently repressed demonstration that took place in Sainte-Soline on March 25, 2023 (see below for details). The SdlT embodies the renewal of anti-capitalist and environmentalist struggles that began with Occupy Wall Street in 2011 and continued in France with the Nuit debout in 2016 and the yellow vests movement from 2018 to 2020. These struggles have three things in common: they are self-organized without official leaders, they rely mainly on occupations to reappropriate public places (squares, roundabouts, natural territories) and they are anti-capitalist. The SdlT was created in 2021 by activists coming from the Zone-to-Defend of Notre-Dame-des-Landes, a victorious ten-year-long occupation movement against an airport near Nantes, France (from 2009 to 2018). The SdlT is not labeled as a formal association, nor as a constituted group, but as an anti-capitalist network of local struggles at the crossroads of ecology and social issues. Indeed, although they target agro-industry, land grabbing, soil artificialization and ecology without transition, the SdlT considers ecological and social questions as interdependent. Moreover, they have an encompassing vision of ecology that they consider a concern for the living as a whole, erasing the division between Nature and Culture. Their radicality is structured around three main elements: federative and decentralized dimensions, the rhetoric of living alliances and creative militant strategies. The objective of this reflection is to understand how these three dimensions are articulated through the SdlT’s discursive practices. To explore these elements, we take as a case study one specific event: the demonstration against the ‘basins’ held in Sainte-Soline on March 25, 2023, on the construction site of new water storage infrastructure for agricultural irrigation in western France. This event represents a turning point for the SdlT. Indeed, the protest was violently repressed: 5,000 grenades were fired by the police, hundreds of people were injured, and one person was still in a coma at the time of writing these lines. Moreover, following the Sainte-Soline events, the Minister of Interior Affairs, Gérald Darmanin, threatened to dissolve the SdlT, thus adding fuel to the fire in an already tense social climate (with the ongoing strikes against the pension reform). We anchor our reflection on three types of data: 1) our own experiences (inspired by ethnography) of the Sainte-Soline demonstration; 2) the collection of more than 500,000 tweets with the #SainteSoline hashtag; and 3) a press review of texts and articles published after the Sainte-Soline demonstration. The exploration of these data from a turning point in the history of the SdlT will allow us to analyze how the three dimensions highlighted earlier (federative and decentralized dimensions, rhetoric of living alliances and creative militant strategies) are materialized through the discursive practices surrounding the Sainte-Soline event. This will allow us to shed light on how a new movement carries out contemporary environmental struggles.Keywords: discursive practices, Sainte-Soline, ecology, radical ecology
Procedia PDF Downloads 70248 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog
Authors: Ana Flavia Belchior De Andrade
Abstract:
Forensic laboratories all over the world face a great challenge to overcome waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, an increment in the number of exams requested, and cuts in funding limiting laboratories' hiring capacity. Altogether, those facts pose an essential challenge for forensic chemistry laboratories to keep both quality and time of response within an acceptable period. In this paper, we will analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between the years 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of petitioned exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has stagnated because, with the current technology and analysis routine, the laboratory is running at full capacity. Marijuana detection is the most prevalent exam required, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most of the exams were performed not later than 60 days after receipt (77%), although some samples waited up to 30 months before being examined (0.65%). When a marijuana exam is delayed, we notice an increase in inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, inconclusive rates increase to 13%. This is probably because Cannabis plants and preparations undergo oxidation under storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures that require at least two more working hours of our analysts (e.g., GC/MS analysis), and the report is delayed by at least one day. Those new procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be observed regarding backlog cases; there are also social issues, as legal procedures can be delayed and prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society as, at some point, if the exam takes a long time to be performed, an inconclusive result can turn into a negative one and a criminal can be absolved by flawed expert evidence.Keywords: backlog, forensic laboratory, quality management, accreditation
Procedia PDF Downloads 121247 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector
Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini
Abstract:
Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues such as radioelement identification and quantification in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentration to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within that area. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geomaterials by coupling with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products
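The quoted 17.2% (FWHM) energy resolution can be illustrated with a short numerical sketch: for an approximately Gaussian peak, FWHM ≈ 2.355σ, and the percentage resolution is the FWHM divided by the peak energy. The sketch below assumes a synthetic, Gaussian-smeared peak at 4647 keV; it is illustrative only and does not use the measured detector data.

```python
import numpy as np

# Minimal sketch (illustrative only): estimate the FWHM energy resolution of a
# reconstructed alpha peak, as quoted in the abstract (17.2% FWHM at 4647 keV).
# The smearing width below is a hypothetical assumption, not measured data.

rng = np.random.default_rng(seed=1)
peak_energy_kev = 4647.0                      # peak energy quoted in the abstract
sigma_kev = 0.172 * peak_energy_kev / 2.355   # sigma consistent with 17.2% FWHM

# simulated reconstructed energies for alphas that deposited their full energy in the gas
energies = rng.normal(peak_energy_kev, sigma_kev, size=20000)

counts, edges = np.histogram(energies, bins=200)
centers = 0.5 * (edges[:-1] + edges[1:])

half_max = counts.max() / 2.0
above = centers[counts >= half_max]
fwhm = above.max() - above.min()              # crude FWHM from the binned spectrum

resolution_percent = 100.0 * fwhm / centers[np.argmax(counts)]
print(f"FWHM = {fwhm:.0f} keV, resolution = {resolution_percent:.1f}% at ~{peak_energy_kev:.0f} keV")
```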
Procedia PDF Downloads 150246 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g., code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the conversational corpus design of the Istrian Istrovenetian-Croatian Bilingual Corpus. Croatian is the first official language of the Croatian-Italian officially bilingual Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, public availability allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication, in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and about language exposure and usage in the participants' social networks. Recorded data are being transcribed, phonologically adapted within a standardized orthographic form, coded and segmented (speech streams are being segmented into communication units based on syntactic criteria), and marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020 by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria, to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. 
Research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics of language dominance, language attrition and loss, interference, code-switching, etc.Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
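To illustrate the kind of measure such a conversational corpus supports (for example, counting intra-utterance code-switch points and the number of languages used per speaker), the following minimal sketch processes a toy, simplified transcript. The speaker labels, tokens, and per-token language tags are hypothetical and deliberately simplified; they do not reproduce the exact CHAT/CLAN syntax used in the corpus.

```python
# Minimal sketch (hypothetical data and simplified tags, not the exact CHAT/CLAN syntax):
# count intra-utterance code-switch points in a toy bilingual transcript.

transcript = [
    ("*SPK1", [("dobar", "hrv"), ("dan", "hrv"), ("come", "ist"), ("xe", "ist")]),
    ("*SPK2", [("va", "ist"), ("bene", "ist"), ("hvala", "hrv")]),
]

def count_switches(tokens):
    """Count points where the language tag changes between adjacent tokens."""
    langs = [lang for _, lang in tokens]
    return sum(1 for a, b in zip(langs, langs[1:]) if a != b)

for speaker, tokens in transcript:
    used = sorted({lang for _, lang in tokens})
    print(speaker, "languages:", used, "switch points:", count_switches(tokens))
```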
Procedia PDF Downloads 145245 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) as well as popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features added to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating a significant utility within the realm of formal systems decision-making.Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
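A minimal sketch of the tradespace-sweep idea follows: enumerate candidate sensor/fusion combinations, sample uncertain attribute performance (the non-deterministic element) to estimate a weighted multi-attribute utility, and retain the cost-utility Pareto front. All design options, costs, weights, and utility values are hypothetical placeholders, not figures from the study.

```python
import itertools, random

# Minimal sketch of a non-deterministic tradespace sweep: enumerate design options,
# Monte Carlo sample uncertain attribute performance, score a weighted multi-attribute
# utility, and keep the cost/utility Pareto front. All numbers are hypothetical.

random.seed(0)
sensors = {"lidar": (40, 0.9), "radar": (25, 0.7), "camera": (10, 0.6)}      # cost, mean detection utility
fusions = {"kalman": (5, 0.10), "bayes_net": (8, 0.15), "voting": (2, 0.05)}  # cost, utility boost
weights = {"detection": 0.7, "latency": 0.3}

def expected_utility(sensor, fusion, n=500):
    s_cost, s_mean = sensors[sensor]
    f_cost, f_boost = fusions[fusion]
    total = 0.0
    for _ in range(n):
        detection = min(1.0, random.gauss(s_mean + f_boost, 0.05))   # uncertain performance
        latency = random.uniform(0.6, 1.0)                           # uncertain latency utility
        total += weights["detection"] * detection + weights["latency"] * latency
    return s_cost + f_cost, total / n

designs = [(s, f, *expected_utility(s, f)) for s, f in itertools.product(sensors, fusions)]
# keep non-dominated designs: no other option is both cheaper-or-equal and at least as useful
pareto = [d for d in designs
          if not any(o[2] <= d[2] and o[3] >= d[3] and o != d for o in designs)]
for s, f, cost, util in sorted(pareto, key=lambda d: d[2]):
    print(f"{s}+{f}: cost={cost}, expected utility={util:.2f}")
```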
Procedia PDF Downloads 183244 Exploitation Pattern of Atlantic Bonito in West African Waters: Case Study of the Bonito Stock in Senegalese Waters
Authors: Ousmane Sarr
Abstract:
The Senegalese coasts have a high productivity of fishery resources due to the frequent intense upwelling that occurs along the coast, caused by the maritime trade winds, which makes its waters nutrient-rich. Fishing plays a primordial role in Senegal's socioeconomic plans and food security. However, a global diagnosis of the Senegalese maritime fishing sector has highlighted the challenges this sector encounters. Among these concerns, some significant stocks, a priority target for artisanal fishing, need further assessment. If no efforts are made in this direction, most stocks will be overexploited or even in decline. It is in this context that this research was initiated. This investigation aimed to apply a multi-model approach (LBB; the catch-only-based CMSY model and its most recent version, CMSY++; JABBA; and JABBA-Select) to assess the stock of Atlantic bonito, Sarda sarda (Bloch, 1793), in the Senegalese Exclusive Economic Zone (SEEZ). Available catch, effort, and size data for Atlantic bonito over 15 years (2004-2018) were used to calculate the nominal and standardized CPUE, size-frequency distribution, and lengths at retention (50% and 95% selectivity) of the species. These results were employed as input parameters for the stock assessment models mentioned above to define the stock status of this species in this region of the Atlantic Ocean. The LBB model indicated a healthy Atlantic bonito stock status, with B/BMSY values ranging from 1.3 to 1.6 and B/B0 values varying from 0.47 to 0.61 for the main scenarios performed (BON_AFG_CL, BON_GN_Length, and BON_PS_Length). The results estimated by LBB are consistent with those obtained by CMSY. The CMSY model results demonstrate that the SEEZ Atlantic bonito stock is in a sound condition in the final year of the main scenarios analyzed (BON, BON-bt, BON-GN-bt, and BON-PS-bt), with sustainable relative stock biomass (B2018/BMSY = 1.13 to 1.3) and fishing pressure levels (F2018/FMSY = 0.52 to 1.43). The B/BMSY and F/FMSY results for the JABBA model ranged between 2.01 to 2.14 and 0.47 to 0.33, respectively, whereas the estimates for JABBA-Select ranged from 1.91 to 1.92 and 0.52 to 0.54. The Kobe plot results of the base case scenarios showed a 75% to 89% probability of falling in the green area, indicating sustainable fishing pressure and a healthy Atlantic bonito stock size capable of producing high yields close to the MSY. Based on the stock assessment results, this study highlighted scientific advice for temporary management measures. Based on the results of the length-based models, this study suggests an improvement of the selectivity parameters of longlines and purse seines and a temporary prohibition of the use of sleeping nets in the Atlantic bonito fishery in the SEEZ. Although these actions are temporary, they can be essential to reduce or avoid intense pressure on the Atlantic bonito stock in the SEEZ. However, it is necessary to establish harvest control rules to provide coherent and solid scientific information that leads to appropriate decision-making for rational and sustainable exploitation of Atlantic bonito in the SEEZ and the Eastern Atlantic Ocean.Keywords: multi-model approach, stock assessment, atlantic bonito, SEEZ
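The reference points quoted above (B/BMSY, F/FMSY, Kobe status) can be illustrated with a minimal surplus-production sketch. The Schaefer form below, with BMSY = K/2 and FMSY = r/2, is a simplification of the CMSY/JABBA machinery, and the parameter values and catch series are hypothetical placeholders rather than Senegalese data.

```python
# Minimal sketch of the logic behind the quoted reference points: a Schaefer
# surplus-production update and a Kobe-plot status check from B/BMSY and F/FMSY.
# Parameter values (r, K, catches) are hypothetical placeholders.

r, K = 0.6, 50000.0            # intrinsic growth rate and carrying capacity (tonnes)
bmsy, fmsy = K / 2.0, r / 2.0  # Schaefer reference points

biomass = 0.8 * K
catches = [6000, 7000, 8000, 7500, 7000]
for catch in catches:
    surplus = r * biomass * (1.0 - biomass / K)   # annual surplus production
    biomass = max(biomass + surplus - catch, 1.0)
    f = catch / biomass                           # exploitation rate in that year

b_ratio, f_ratio = biomass / bmsy, f / fmsy

def kobe_status(b_ratio, f_ratio):
    if b_ratio >= 1.0 and f_ratio <= 1.0:
        return "green: healthy biomass, sustainable fishing pressure"
    if b_ratio < 1.0 and f_ratio > 1.0:
        return "red: overfished and subject to overfishing"
    return "yellow/orange: one reference point breached"

print(f"B/BMSY = {b_ratio:.2f}, F/FMSY = {f_ratio:.2f} -> {kobe_status(b_ratio, f_ratio)}")
```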
Procedia PDF Downloads 61243 Exploiting the Tumour Microenvironment in Order to Optimise Sonodynamic Therapy for Cancer
Authors: Maryam Mohammad Hadi, Heather Nesbitt, Hamzah Masood, Hashim Ahmed, Mark Emberton, John Callan, Alexander MacRobert, Anthony McHale, Nikolitsa Nomikou
Abstract:
Sonodynamic therapy (SDT) utilises ultrasound in combination with sensitizers, such as porphyrins, for the production of cytotoxic reactive oxygen species (ROS) and the confined ablation of tumours. Ultrasound can be applied locally, and the acoustic waves, at frequencies between 0.5-2 MHz, are transmitted efficiently through tissue. SDT does not require highly toxic agents, and the cytotoxic effect only occurs upon ultrasound exposure at the site of the lesion. Therefore, this approach is not associated with adverse side effects. Further highlighting the benefits of SDT, no cancer cell population has shown resistance to therapy-triggered ROS production or its cytotoxic effects, to the authors' best knowledge. This is particularly important given the as yet unresolved issues of radiation and chemo-resistance. Another potential future benefit of this approach, considering its non-thermal mechanism of action, is its possible role as an adjuvant to immunotherapy. Substantial pre-clinical studies have demonstrated the efficacy and targeting capability of this therapeutic approach. However, SDT has yet to be fully characterised and appropriately exploited for the treatment of cancer. In this study, a formulation based on multistimulus-responsive sensitizer-containing nanoparticles that can accumulate in advanced prostate tumours and increase the therapeutic efficacy of SDT has been developed. The formulation is based on a polyglutamate-tyrosine (PGATyr) co-polymer carrying hematoporphyrin. The efficacy of SDT in this study was demonstrated using prostate cancer as the translational exemplar. The formulation was designed to respond to the microenvironment of advanced prostate tumours, such as the overexpression of the proteolytic enzymes cathepsin B and prostate-specific membrane antigen (PSMA), which can degrade the nanoparticles and reduce their size, improving both diffusion throughout the tumour mass and cellular uptake. The therapeutic modality was initially tested in vitro using LNCaP and PC3 cells as target cell lines. The SDT efficacy was also examined in vivo, using male SCID mice bearing LNCaP subcutaneous tumours. We have demonstrated that the PGATyr co-polymer is digested by cathepsin B and that digestion of the formulation by cathepsin B, under tumour-mimicking conditions (acidic pH), leads to decreased nanoparticle size and subsequent increased cellular uptake. Sonodynamic treatment, under both normoxic and hypoxic conditions, demonstrated ultrasound-induced cytotoxic effects only for the nanoparticle-treated prostate cancer cells, while the toxicity of the formulation in the absence of ultrasound was minimal. Our in vivo studies in immunodeficient mice, using the hematoporphyrin-containing PGATyr nanoparticles for SDT, showed a 50% decrease in LNCaP tumour volumes within 24 h following IV administration of a single dose. No adverse effects were recorded, and body weight was stable. The results described in this study clearly demonstrate the promise of SDT to revolutionize cancer treatment. They emphasize the potential of this therapeutic modality as a first-line treatment or in combination treatment for the elimination or downstaging of difficult-to-treat cancers, such as prostate, pancreatic, and advanced colorectal cancer.Keywords: sonodynamic therapy, nanoparticles, tumour ablation, ultrasound
Procedia PDF Downloads 137242 Fodder Production and Livestock Rearing in Relation to Climate Change and Possible Adaptation Measures in Manaslu Conservation Area, Nepal
Authors: Bhojan Dhakal, Naba Raj Devkota, Chet Raj Upreti, Maheshwar Sapkota
Abstract:
A study was conducted to determine the production potential, nutrient composition, and variability of the most commonly available fodder trees along the altitudinal gradient, to help optimize the dry matter requirement during the winter lean period. The study was carried out from March to June 2012 in Lho and Prok Village Development Committees of Manaslu Conservation Area (MCA), located in Gorkha district of Nepal. The other objective of the research was to understand the impact of climate change on livestock production by linking it with feed availability. The study was conducted in two parts: social and biological. Accordingly, a household (HH) survey was conducted to collect primary data from 70 HHs, focusing on the perception of respondents regarding the impacts of climatic variability on feeding management. The next part consisted of determining the yield potential and nutrient composition of the four most commonly available fodder trees (M. azedirach, M. alba, F. roxburghii, F. nemoralis) within two altitude ranges (1500-2000 masl and 2000-2500 masl), using an RCB design with a 2 × 4 factorial combination of treatments, each replicated four times. Results revealed that the majority of the farmers perceived the climatic changes more severely within the past five years. Farmers were using different adaptation techniques such as collection of forage from the jungle, reduction of unproductive animals, utilization of fodder trees, and feeding of crop by-products during the feed scarcity period. Ranking of the different fodder trees on the basis of indigenous knowledge and experience revealed that F. roxburghii was the best-preferred fodder tree species (index value 0.72) in terms of overall preferability, whereas M. azedirach had the highest growth and productivity (index value 0.77), and F. roxburghii also had the highest adoptability (index value 0.69) and palatability (index value 0.69). Similarly, the fresh yield and dry matter yield of each fodder tree differed significantly (P < 0.01) between the altitudes and within species. Fodder tree yield analysis revealed that the highest dry matter (DM) yield (28 kg/tree) was obtained for F. roxburghii, but it remained statistically similar (P > 0.05) to the other treatments. On the other hand, most of the parameters, namely ether extract (EE), acid detergent lignin (ADL), acid detergent fibre (ADF), cell wall digestibility (CWD), relative digestibility (RD), total digestible nutrients (TDN), and calcium (Ca), differed highly significantly (P < 0.01) among the treatments. This indicates the scope for introducing productive and nutritive fodder tree species even at high altitude to help reduce the fodder scarcity problem during winter. The findings also revealed the scope for promoting all available local fodder tree species, as the crude protein contents of these species were similar.Keywords: fodder trees, yield potential, climate change, nutrient composition
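The preference index values quoted above (e.g., 0.72 for F. roxburghii) are typically derived from weighted household rankings. The sketch below shows one common way such an index can be computed; the exact formula used in the study is not given in the abstract, and the rank weights and vote counts are hypothetical.

```python
# Minimal sketch (hypothetical data): one common way a preference index can be
# computed from household rankings,
#   index = sum(rank_weight * respondents) / (max_weight * total_respondents).
# Neither the exact formula nor the counts used in the study are given in the abstract.

rank_weights = {1: 4, 2: 3, 3: 2, 4: 1}      # 1st choice gets the highest weight

# respondents per rank position for one species (hypothetical counts from 70 households)
votes = {1: 38, 2: 20, 3: 8, 4: 4}

total_respondents = sum(votes.values())
score = sum(rank_weights[r] * n for r, n in votes.items())
index = score / (max(rank_weights.values()) * total_respondents)
print(f"preference index = {index:.2f}")
```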
Procedia PDF Downloads 308241 A Critical Analysis of the Current Concept of Healthy Eating and Its Impact on Food Traditions
Authors: Carolina Gheller Miguens
Abstract:
Feeding is, and should be, pleasurable for living beings, so that they desire to nourish themselves while preserving the continuity of the species. Social rites usually revolve around the table and are closely linked to the cultural traditions of each region and social group. Since the beginning, food has been closely linked with the products each region provides and related to the respective seasons of production. With globalization and the conveniences of modern life, we are able to find an ever-increasing variety of products at any time of the year on supermarket shelves. These lifestyle changes end up directly influencing food traditions. With the era of uncontrolled obesity, caused by the dazzling, large and varied supply of low-priced, ultra-processed industrial products, now in the past, today we are living in a time when people are putting aside the pleasure of eating to eat exclusively the foods dictated by the media as healthy. Recently, the medicalization of food in our society has become so present in daily life that, almost without realizing it, we make food choices conditioned by studies of the properties of these foods. The fact that people are more attentive to their health is interesting. However, when this care becomes an obsessive disorder, which imposes itself on the pleasure of eating and extinguishes traditional customs, it becomes dangerous for our recognition as citizens belonging to a culture and society. This new way of living generates a rupture with the social environment of origin, possibly exposing old traditions to oblivion after two or three generations. Based on these facts, the present study analyzes the social transformations occurring in our society that triggered the current medicalization of food. In order to clarify what a healthy diet actually is, this research proposes a critical analysis of the subject, aiming to understand nutritional rationality and how it acts in the medicalization of food. A wide bibliographic review of the subject was carried out, followed by exploratory research in online (especially social) media, a relevant source in this context due to the perceived influence of such media on contemporary eating habits. Finally, these data were cross-referenced, critically analyzing the current situation of the concept of healthy eating and the medicalization of food. Throughout this research, it was noticed that people are increasingly seeking information about the nutritional properties of food, but instead of seeking the benefits of the products they traditionally eat in their social environment, they incorporate external elements that often bring benefits similar to those of the food already consumed. This is because access to information is directed by the media and exalts the exotic, since this arouses more interest in the population in general. Efforts must be made to clarify that traditional products are also healthy foods, rich in history, memory and tradition, and cannot be replaced by a standardized diet little concerned with the construction of taste and pleasure, which treats food as if it were a medicinal product.Keywords: food traditions, food transformations, healthy eating, medicalization of food
Procedia PDF Downloads 327240 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on its surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing Polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation. Likewise, the mechanical changes are also passed back to the CFD solver to include geometric changes within the solution. For the CFD calculations, a solver called Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme, which is adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases. In addition, an FEA solver called ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables. The comparison data include gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids
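The core of the one-way coupling described above is the transfer of CFD-computed temperatures onto the FEA model at each time step. The following minimal sketch illustrates that data hand-off with a nearest-neighbour mapping; the geometry, temperatures, and time steps are hypothetical, and a real FDS-2-ABAQUS run would instead parse FDS output files and write ABAQUS boundary-condition input.

```python
import numpy as np

# Minimal sketch of a one-way coupling step: at each time step, temperatures computed
# on CFD sample points are mapped onto FEA surface nodes (nearest-neighbour here for
# brevity) and stored as a thermal boundary-condition history. All values are
# hypothetical placeholders, not output from FDS, ABAQUS, or FDS-2-ABAQUS.

cfd_points = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])      # CFD sample locations (m)
fea_nodes = np.array([[0.1, 0.0], [0.45, 0.0], [0.9, 0.0]])      # FEA surface node locations (m)

times = np.array([0.0, 60.0, 120.0])                             # s
cfd_temps = np.array([[20.0, 20.0, 20.0],                        # gas temperatures per time step (degC)
                      [150.0, 300.0, 220.0],
                      [400.0, 650.0, 500.0]])

# nearest CFD point for each FEA node
nearest = np.argmin(np.linalg.norm(fea_nodes[:, None, :] - cfd_points[None, :, :], axis=2), axis=1)

bc_history = cfd_temps[:, nearest]                                # (time step, FEA node) temperatures
for t, temps in zip(times, bc_history):
    print(f"t = {t:5.0f} s -> nodal temperatures (degC): {temps}")
```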
Procedia PDF Downloads 88239 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create Deep Learning systems capable of translating Sign Language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the current developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work aims to present a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
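The corrector recipe described above (center, reduce with a Kaiser-style rule, whiten, then separate the error cluster with a hyperplane) can be sketched in a few lines. The sketch below uses synthetic feature vectors and a single hyperplane; the thresholding and clustering details are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

# Minimal sketch of a single linear corrector, assuming feature vectors x for the
# correct predictions (M) and the errors (Y) are already available. Synthetic data;
# the eigenvalue cut and the threshold choice are illustrative assumptions.

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, size=(500, 20))           # correct-prediction feature vectors
Y = rng.normal(0.0, 1.0, size=(30, 20)) + 2.5      # error feature vectors, shifted cluster

S = np.vstack([M, Y])
mean = S.mean(axis=0)
Mc, Yc = M - mean, Y - mean                        # centering by the pooled mean

# PCA on the pooled data; keep components with eigenvalue above the average (Kaiser-style rule)
cov = np.cov(np.vstack([Mc, Yc]).T)
eigval, eigvec = np.linalg.eigh(cov)
keep = eigval > eigval.mean()
W = eigvec[:, keep] / np.sqrt(eigval[keep])        # projection and whitening in one matrix

Mw, Yw = Mc @ W, Yc @ W

# one hyperplane: normal along the mean difference between error and correct clusters
w = Yw.mean(axis=0) - Mw.mean(axis=0)
w /= np.linalg.norm(w)
threshold = 0.5 * (Yw @ w).min() + 0.5 * (Mw @ w).max()   # midpoint between the clusters

def is_error(x):
    """Flag a feature vector as a likely error if it falls on the error side of the hyperplane."""
    return float((x - mean) @ W @ w) > threshold

print("training errors flagged:", sum(is_error(x) for x in Y), "of", len(Y))
print("correct samples flagged:", sum(is_error(x) for x in M), "of", len(M))
```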
Procedia PDF Downloads 99238 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology
Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert
Abstract:
Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances such as those required for Positive Energy Districts (PEDs), the use of roofs alone is not sufficient in dense urban areas. Moreover, the increasing share of windows significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, onsite generation can be maximized and the standard functionalities of this product can be extended. While offering advantages in terms of infrastructure, sustainable use of resources and efficiency, these systems require increased optimization of the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet the required demands, such as maximum power generation, glare prevention, high daylight autonomy and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information representing outdoor spaces and the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for the simulation from the BIM models and to make it usable for the calculations and coupled simulations. The investigated object is uploaded as an IFC file to this web application and includes the object as well as the neighboring buildings and possible remote shading. This tool uses a ray tracing method to determine possible glare from solar reflections off a neighboring building, as well as near and far shadows per window on the object. Subsequently, an annual estimate of the sunlight per window is calculated by taking weather data into account. This optimized daylight assessment per window makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results, as well as all necessary parameters for the thermal simulation, can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, linking the resulting shading and daylighting system with the artificial lighting system and maximum power generation in a control system. In the research project Powershade, an AI-based control concept for PV-integrated façade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.Keywords: BIPV, building simulation, optimized control strategy, planning tool
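The per-window shading test at the heart of the ray tracing step can be sketched as a simple ray-box intersection: cast a ray from a window point towards the sun and check whether it hits a neighbouring building. The geometry and sun vector below are hypothetical placeholders and do not come from the HELLA DECART application.

```python
import numpy as np

# Minimal sketch of a ray-tracing shading test: cast a ray from a window point towards
# the sun and check intersection with a neighbouring building, modelled as an
# axis-aligned box (slab method). Geometry and sun vector are hypothetical.

def ray_hits_box(origin, direction, box_min, box_max):
    """Return True if the ray origin + t*direction (t > 0) intersects the box."""
    t_near, t_far = -np.inf, np.inf
    for i in range(3):
        if abs(direction[i]) < 1e-12:
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return False
        else:
            t1 = (box_min[i] - origin[i]) / direction[i]
            t2 = (box_max[i] - origin[i]) / direction[i]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return t_far >= max(t_near, 0.0)

window_point = np.array([0.0, 0.0, 5.0])            # point on the facade (m)
sun_direction = np.array([0.5, 0.7, 0.5])           # vector towards the sun
neighbour_min = np.array([5.0, 5.0, 0.0])           # neighbouring building bounding box (m)
neighbour_max = np.array([15.0, 12.0, 20.0])

shaded = ray_hits_box(window_point, sun_direction / np.linalg.norm(sun_direction),
                      neighbour_min, neighbour_max)
print("window point shaded at this sun position:", shaded)
```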
Procedia PDF Downloads 108237 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels
Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe
Abstract:
The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for 1830 patients for familial breast cancer since 1997. From the cohort, 540 were comprehensively screened using High-Resolution Melting Analysis or Next Generation Sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation but, with the introduction of next generation sequencing in 2017, it was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, after which the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters such as DNA quality, quantity, and signal intensity or read depth were verified using positive and negative patients previously tested internationally. DNA quality and quantity proved to be the critical factors during the verification of both assays. The quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when ligation was inhibited by a variant present within the ligation site. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the genetically diverse ethnic groups, in order to reduce the false-positive copy number variation rate and increase performance efficiency.Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa
Procedia PDF Downloads 230
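The relative copy number logic described above can be sketched as follows: per-exon read counts (or MLPA peak areas) are normalised within each sample and divided by a reference profile, so a heterozygous exon deletion appears as a ratio near 0.5 and a duplication near 1.5. The counts and the 0.7/1.3 calling thresholds below are hypothetical placeholders; note how a single under-performing amplicon, for example one affected by a population variant in a primer or ligation site, mimics a deletion, which is exactly the false-positive pitfall discussed.

```python
import numpy as np

# Minimal sketch of relative copy number calling from per-exon counts.
# Counts, exon names, and calling thresholds are hypothetical placeholders.

exons = ["ex2", "ex5", "ex11", "ex13", "ex20"]
reference = np.array([[980, 1020, 2100, 1500, 800],      # normal control samples
                      [1010, 990, 1950, 1480, 820],
                      [950, 1050, 2050, 1520, 790]], float)
patient = np.array([1000, 480, 2000, 1510, 805], float)  # ex5 drops to roughly half

def normalise(counts):
    """Within-sample normalisation: each exon as a fraction of the sample total."""
    return counts / counts.sum()

ref_profile = np.median([normalise(r) for r in reference], axis=0)
ratios = normalise(patient) / ref_profile

for exon, ratio in zip(exons, ratios):
    if ratio < 0.7:
        call = "possible deletion (or probe/primer variant false positive)"
    elif ratio > 1.3:
        call = "possible duplication"
    else:
        call = "normal"
    print(f"{exon}: ratio {ratio:.2f} -> {call}")
```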