Search results for: spectrum shaping scheme
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3167

167 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: a slow and unreliable mechanical spatial scan, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity. Backscattering can tune the laser to the eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust 1400-1700 nm comb generation with a 150 GHz or 1 THz line spacing and measured Lorentzian widths below 1 kHz for stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser. A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump. Importantly, such lasers are usually more powerful than the DFB lasers that were also tested in our experiments. To test the advantages of the proposed techniques, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, while the free-running (no-SIL) diode laser only allowed 160 nm/s with good accuracy. The results obtained agree with the estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
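To make the frequency mapping concrete: each pair of corresponding comb lines beats at a multiple of the small repetition-rate difference, compressing the huge optical span into a narrow radio-frequency band. A minimal sketch follows (the 150 GHz line spacing is from the abstract; the spacing difference and line count are assumptions for illustration only):

```python
# Illustrative mapping of optical comb lines to RF beat notes (not the
# paper's parameters beyond the 150 GHz line spacing).
f_rep = 150e9   # line spacing of comb 1 (Hz), per the abstract
df = 1e6        # assumed repetition-rate difference between the combs (Hz)
N = 100         # assumed number of comb lines considered

# Line n of comb 2 beats with line n of comb 1 at roughly n * df, so the
# optical span N * f_rep is compressed into an RF span of only N * df.
rf_beats = [n * df for n in range(1, N + 1)]
compression = f_rep / df
print(f"RF span: {rf_beats[-1] / 1e6:.0f} MHz, compression: {compression:.1e}")
```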

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 51
166 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after a fermentation process. Defects that occur naturally or result from an impact while olives are still fresh may become more distinct after processing. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness, and inconsistency. Quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes, and rotting. In this study, the aim was to classify fresh table olives using different classifiers and NIR spectroscopy readings and to compare the classifiers. For this purpose, green olives (Ayvalik variety) were classified based on surface features, i.e., defect-free, bruised, or with fly defect, using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident, and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms, implemented with the Neural Network toolbox in Matlab and the ident and cluster modules in OPUS, respectively. Classifications considered different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised, and fly defect). Two spectrometer readings, reflectance and transmittance, were used in the classification applications. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives achieved success rates of 97-99%, 61-94%, and 58.67-92%, respectively. On the other hand, classification results for discriminating good olives from bruised ones and from fly-defected olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
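For readers who want to reproduce the classification step outside Matlab/OPUS, a minimal sketch of a backpropagation-network classifier on NIR spectra follows (the spectra here are synthetic and the dataset size is assumed; the authors' actual pipeline used the Matlab Neural Network toolbox and the OPUS ident/cluster modules):

```python
# Minimal sketch with synthetic spectra, not the study's data or pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_olives, n_wavelengths = 120, 200          # assumed dataset size
# X: one row per olive, one column per wavelength (e.g. a 780-2500 nm grid)
X = rng.normal(size=(n_olives, n_wavelengths))
y = rng.integers(0, 3, size=n_olives)       # 0 good, 1 bruised, 2 fly defect

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)                          # backpropagation training
print(f"success rate: {clf.score(X_te, y_te):.1%}")
```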

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 227
165 L1 Poetry and Moral Tales as a Factor Affecting L2 Acquisition in EFL Settings

Authors: Arif Ahmed Mohammed Al-Ahdal

Abstract:

Poetry, tales, and fables have always been a part of the L1 repertoire, one that takes learners to an amazing and fascinating world of imagination. The storytelling class and the genre of poems are activities greatly enjoyed by all age groups. The significant idea behind their inclusion in the language curriculum is to sensitize young minds to a wide range of human emotions, which is believed to contribute greatly to building their social resilience, emotional stability, empathy towards fellow creatures, and literacy. Quite certainly, the learning objective at this stage is not language acquisition (though it happens as an automatic process) but getting young learners acquainted with an entire spectrum of what may be called the 'noble' abilities of the human race. These enrich their very existence, inspiring them to unearth 'selves' that help them as adults and enable them to co-exist fruitfully and symbiotically with their fellow human beings. By extension, 'higher' training in these literary genres shows the universality of human emotions, sufferings, aspirations, and hopes. The current study is anchored in Reader-Response Theory in literature learning, which suggests that the reader reconstructs the work and re-enacts the author's creative role. On this view, literary works provide clues or verbal symbols in a linguistic system widely accepted by everyone who shares the language, but each reader reads their own life experiences and situations into them. The significance of words depends on the reader, even where the words stand in a typical relationship. In every reading there is an interaction between the reader and the text: the reader tries to comprehend the literary work, which exceeds what is set down on the page, since it provokes emotional and intellectual reactions that are not fully anticipated by the document yet cannot be affirmed by the reader alone apart from the text. The idea is that the text forms the basis of a unifying experience. A reinterpretation of the literary text may transform it into a guiding principle for responding to actual experiences and personal memories. The impulses delivered to the reader vary with the poem or text; nevertheless, readers differ considerably even with the same material. Previous studies confirm that poetry is a useful tool for learning a language. The present paper works from these hypotheses and proposes to study the impetus given to L2 learning as a factor of exposure to poetry and meaningful stories in L1. The driving force behind the choice of this topic is the first-hand experience the researcher had while teaching a literary text to a group of BA students who, in reaction to the text, initially burst into tears and ultimately turned the class into an interactive session. The study also intends to compare the performance of male and female students post-intervention using pre- and post-tests, apart from undertaking a detailed inquiry via interviews with college learners of English to understand the role L1 literature plays in the acquisition of L2.

Keywords: SLA, literary text, poetry, tales, affective factors

Procedia PDF Downloads 57
164 Study of the Influence of the Molar Ratio between Solvent and Initiator on the Reaction Rate of Polyether Polyol Synthesis

Authors: María José Carrero, Ana M. Borreguero, Juan F. Rodríguez, María M. Velencoso, Ángel Serrano, María Jesús Ramos

Abstract:

Flame retardants are incorporated in different materials in order to reduce the risk of fire, either by providing increased resistance to ignition or by slowing down combustion and thereby delaying the spread of flames. In this work, polyether polyols with flame-retardant properties were synthesized, owing to their wide application in polyurethane formulation. The combustion of polyurethanes depends primarily on the thermal properties of the polymer, the presence of impurities and formulation residue in the polymer, and the supply of oxygen. There are many types of flame retardants, most of them phosphorus compounds of different nature and functionality, and their addition is the most common method for imparting flame-retardant properties. Employing glycerol phosphate sodium salt as the initiator for the polyol synthesis yields polyols with phosphate groups in their structure. However, critical points of using the glycerol phosphate salt are its lower reactivity and the need for a solvent (dimethyl sulfoxide, DMSO). Thus, the main aim of the present work was to determine the amount of solvent needed to achieve good solubility of the initiator salt. Although the anionic polymerization mechanism of polyether formation is well known, it seems convenient to clarify the role that DMSO plays at the start of the polymerization process. The catalyst deprotonates the hydroxyl groups of the initiator, forming two water molecules and the glycerol phosphate alkoxide. This alkoxide, together with DMSO, has to form a homogeneous mixture in which the initiator (a solid) and the propylene oxide (PO) are soluble enough to interact. The addition rate of PO increased as the studied solvent/initiator ratios increased, which also shortened the initiation step. Furthermore, the molecular weight of the polyol decreased at higher solvent/initiator ratios, which revealed that a larger amount of salt was activated, initiating more chains of lower length but allowing more phosphate molecules to react and increasing the phosphorus content of the final polyol. However, the final phosphorus content was lower than the theoretical one because only a fraction of the salt was activated. On the other hand, glycerol phosphate disodium salt was still partially insoluble at the studied DMSO proportions; thus, the recovery and reuse of this fraction of the salt for the synthesis of new flame-retardant polyols was evaluated. With the recovered salt, the PO addition rate remained the same as with the commercial salt, but a shorter induction period was observed, because the recovered salt presents a higher amount of deprotonated hydroxyl groups. Besides, according to molecular weight, polydispersity index, FT-IR spectrum, and thermal stability, there were no differences between the two synthesized polyols. Thus, it is possible to use the recovered glycerol phosphate disodium salt in the same way as the commercial one.
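Since the study varies the solvent/initiator molar ratio, that quantity can be illustrated with a toy calculation (the charge masses below are assumptions; the molar masses are standard values):

```python
# Toy solvent/initiator molar-ratio calculation, not the study's recipe.
M_DMSO = 78.13    # g/mol, dimethyl sulfoxide
M_GP   = 216.04   # g/mol, glycerol phosphate disodium salt (anhydrous)

mass_dmso, mass_salt = 100.0, 10.0     # assumed charge masses, g
ratio = (mass_dmso / M_DMSO) / (mass_salt / M_GP)
print(f"solvent/initiator molar ratio = {ratio:.1f}")   # ~27.7 here
```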

Keywords: DMSO, fire retardants, glycerol phosphate disodium salt, recovered initiator, solvent

Procedia PDF Downloads 260
163 Nanocomplexes Based on Triterpene Saponins Isolated from Glycyrrhiza glabra and Saponaria officinalis Plants as Efficient Adjuvants for Influenza Vaccine Use

Authors: Vladimir Berezin, Andrey Bogoyavlenskiy, Pavel Alexyuk, Madina Alexyuk, Aizhan Turmagambetova, Irina Zaitseva, Nadezhda Sokolova, Elmira Omirtaeva

Abstract:

Introduction: Triterpene saponins of plant origin are among the most promising candidates for the elaboration of novel adjuvants. Due to the combination of immunostimulating activity and the capacity to interact with amphipathic molecules to form highly immunogenic nanocomplexes, triterpene saponins could serve as a good adjuvant/delivery system for vaccine use. In the research presented, adjuvants based on nanocomplexes containing triterpene saponins isolated from Glycyrrhiza glabra and Saponaria officinalis plants indigenous to Kazakhstan were elaborated for influenza vaccine use. Methods: Purified triterpene saponins 'Glabilox' and 'SO1', with low toxicity and high immunostimulatory activity, were isolated from the plants Glycyrrhiza glabra L. and Saponaria officinalis L. by high-performance liquid chromatography (HPLC) and identified using electrospray ionization mass spectrometry (ESI-MS). Influenza virus A/St-Petersburg/5/09 (H1N1), propagated in 9-day-old chicken embryos, was concentrated and purified by centrifugation in a sucrose gradient. Nanocomplexes containing lipids and the triterpene saponins Glabilox or SO1 were prepared by a dialysis technique. The immunostimulating activity of the experimental vaccine preparations was studied in vaccination/challenge experiments in mice. Results: Humoral and cellular immune responses and protection against influenza virus infection were examined after single subcutaneous and intranasal immunization. Mice were immunized with subunit influenza vaccine (HA+NA) or whole-virus inactivated influenza vaccine at doses of 3.0/5.0/10.0 µg antigen/animal mixed with adjuvant at a dose of 15.0 µg/animal. Sera were taken 14-21 days after single immunization, and mice were challenged with A/St-Petersburg/5/09 influenza virus at a dose of 100 EID₅₀. Immunization experiments showed that subcutaneous and intranasal immunization with subunit influenza vaccine mixed with nanocomplexes containing Glabilox or SO1 saponins stimulated high levels of humoral immune response (IgM, IgA, IgG1, IgG2a, and IgG2b antibodies) and cellular immune response (IL-2, IL-4, IL-10, and IFN-γ cytokines) and resulted in 80-90% protection against lethal influenza infection. Also, single intranasal and single subcutaneous immunization with whole-virus inactivated influenza vaccine mixed with the nanoparticulate adjuvants stimulated high levels of humoral and cellular immune responses and provided 100% protection against lethal influenza infection. Conclusion: The results of the study have shown that nanocomplexes containing the purified triterpene saponins Glabilox and SO1, isolated from plants indigenous to Kazakhstan, can stimulate a broad spectrum of humoral and cellular immune responses and induce protection against lethal influenza infection. Both elaborated adjuvants are promising for incorporation into influenza vaccines intended for subcutaneous and intranasal routes of immunization.

Keywords: influenza vaccine, adjuvants, triterpene saponins, immunostimulating activity

Procedia PDF Downloads 115
162 Source-Detector Trajectory Optimization for Target-Based C-Arm Cone Beam Computed Tomography

Authors: S. Hatamikia, A. Biguri, H. Furtado, G. Kronreif, J. Kettenbach, W. Birkfellner

Abstract:

Nowadays, three-dimensional cone beam CT (CBCT) has become a widespread clinical routine imaging modality for interventional radiology. In conventional CBCT, a circular source-detector trajectory is used to acquire a high number of 2D projections in order to reconstruct a 3D volume. However, the accumulated radiation dose due to the repetitive use of CBCT needed for intraoperative procedures, as well as daily pretreatment patient alignment for radiotherapy, has become a concern. It is of great importance for both health care providers and patients to decrease the radiation dose required for these interventional images. Thus, it is desirable to find optimized source-detector trajectories with a reduced number of projections, which could therefore lead to dose reduction. In this study, we investigate source-detector trajectories with optimal arbitrary orientations so as to maximize the performance of the reconstructed image at particular regions of interest. To this end, we developed a box phantom consisting of several small polytetrafluoroethylene target spheres at regular distances throughout the phantom. Each of these spheres serves as a target inside a particular region of interest. We use the 3D point spread function (PSF) as a measure to evaluate the performance of the reconstructed image. We measured the spatial variance in terms of the full width at half maximum (FWHM) of the local PSFs, each related to a particular target; a lower FWHM value indicates better spatial resolution of the reconstruction at the target area. One important feature of interventional radiology is that the imaging targets are very well known, as prior knowledge of patient anatomy (e.g., a preoperative CT) is usually available for interventional imaging. Therefore, we use a CT scan of the box phantom as the prior knowledge and treat it as the digital phantom in our simulations to find the optimal trajectory for a specific target. Based on the simulation phase, we obtain the optimal trajectory, which can then be applied on the device in a real situation. We consider a Philips Allura FD20 Xper C-arm geometry for the simulations and real data acquisition. Our experimental results, based on both simulation and real data, show that the proposed optimization scheme has the capacity to find optimized trajectories with a minimal number of projections that localize the targets. The proposed optimized trajectories localize the targets as well as a standard circular trajectory while using just one-third the number of projections. Conclusion: We demonstrate that applying a minimal dedicated set of projections with optimized orientations is sufficient to localize targets and may minimize the radiation dose.
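A minimal sketch of the FWHM figure of merit described above, computed on a synthetic one-dimensional PSF profile (the voxel spacing and Gaussian width are assumptions, not values from the study):

```python
# FWHM of a peaked 1-D profile, the local-PSF metric named in the abstract.
import numpy as np

def fwhm(profile, spacing_mm):
    """Full width at half maximum of a peaked 1-D profile, in mm."""
    p = np.asarray(profile, dtype=float)
    p -= p.min()
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    return (above[-1] - above[0]) * spacing_mm

x = np.linspace(-5, 5, 101)                # 0.1 mm sampling (assumed)
psf = np.exp(-x**2 / (2 * 0.8**2))         # synthetic Gaussian PSF
print(f"FWHM ~ {fwhm(psf, 0.1):.2f} mm")   # lower value = better resolution
```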

Keywords: CBCT, C-arm, reconstruction, trajectory optimization

Procedia PDF Downloads 119
161 Management of Hypoglycemia in Von Gierke’s Disease

Authors: Makda Aamir, Sood Aayushi, Syed Omar, Nihan Khuld, Iskander Peter, Ijaz Naeem, Sharma Nishant

Abstract:

Introduction: Glycogen Storage Disease Type 1 (GSD-1) is a rare condition primarily affecting the liver and kidney. Excessive accumulation of glycogen and fat in the liver, kidney, and intestinal mucosa is noted in patients with glucose-6-phosphatase deficiency. Patients with GSD-1 have a wide spectrum of symptoms, including hepatomegaly, hypoglycemia, lactic acidemia, hyperlipidemia, hyperuricemia, and growth retardation. Age of onset, rate of disease progression, and severity are variable in this disease. Case: An 18-year-old male with GSD-1a (Von Gierke's disease), hyperuricemia, and hypertension presented to the hospital with nausea and vomiting. The patient followed an hourly cornstarch regimen during the day and overnight through infusion via a PEG tube. The complaints started at work, where he was unable to tolerate oral cornstarch. He was hemodynamically stable on arrival. ABG showed pH 7.372, PaCO2 30.3, and PaO2 92.2. WBC 16.80, K+ 5.8, HCO3 13, BUN 28, Cr 2.2, glucose 60, AST 115, ALT 128, cholesterol 352, triglycerides >1000, uric acid 10.6, lactic acid 11.8, which trended down to 8.0. CT abdomen showed hepatomegaly and fatty infiltration with the PEG tube in place. He was admitted to the ICU and started on D5NS for hypoglycemia and lactic acidosis. Per request of the patient's pediatrician, he was transitioned to IV D10/0.45NS at 110 mL/hr to maintain blood glucose above 75 mg/dL. Frequent accuchecks were done until he could tolerate his dietary regimen with cornstarch. Lactic acid trended down to 2.9, and accuchecks ranged between 100-110. Cr improved to 1.3, and his home medications (allopurinol and lisinopril) were resumed. He was discharged in stable condition with plans for further genetic therapy workup. Discussion: The mainstay therapy for Von Gierke's disease is the prevention of metabolic derangements, for which dietary and lifestyle changes are recommended. A low fructose and sucrose diet is recommended, limiting the intake of galactose and lactose to one serving per day. Hypoglycemia treatment in such patients is two-fold, utilizing both quick- and stable-release glucose sources. Cornstarch has been one such therapy since the 1980s; its slow digestion provides a steady release of glucose over a longer period of time compared with other sources of carbohydrates. Dosing guidelines vary from age to age and person to person, but it is highly recommended to check BG levels frequently to maintain a BG > 70 mg/dL. Associated high levels of triglycerides and cholesterol can be treated with statins, fibrates, etc. Conclusion: The management of hypoglycemia in GSD-1 presents various obstacles that could prove fatal. Due to the deficiency of G6P, treatment with a specialized hypoglycemic regimen is warranted. A D10 1/2 NS infusion can be used to maintain blood sugar levels as well as correct metabolic or lactate imbalances. The infusion should be gradually weaned off once the patient can tolerate oral feeds, as this can help prevent the risk of hypoglycemia and other derangements. Further research is needed regarding more sustainable regimens for these patients.
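As a rough illustration of the infusion order above, the glucose infusion rate (GIR) delivered by D10 at 110 mL/hr can be estimated with the standard formula GIR (mg/kg/min) = dextrose% x rate (mL/hr) / (6 x weight (kg)); the patient's weight is not reported, so the figure below is purely illustrative:

```python
# Hedged sketch: glucose infusion rate for the D10/0.45NS order in the case.
dextrose_pct = 10     # D10
rate_ml_hr = 110      # per the case report
weight_kg = 70        # assumed; not stated in the case report

gir = dextrose_pct * rate_ml_hr / (6 * weight_kg)
print(f"GIR ~ {gir:.1f} mg/kg/min")   # ~2.6 mg/kg/min at an assumed 70 kg
```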

Keywords: von gierke, glycogen storage disease, hypoglycemia, genetic disease

Procedia PDF Downloads 86
160 Symptomatic Strategies: Artistic Approaches Resembling Psychiatric Symptoms

Authors: B. Körner

Abstract:

This paper compares deviant behaviour in two different readings: 1) as symptomatic of so-called 'mental illness' and 2) as part of artistic creation. It analyses works of performance art in the respective frames of psychiatric evaluation and performance studies. This speculative comparison offers an alternative interpretation of mad behaviour beyond pathologisation. It questions the distinctions drawn by psychiatric diagnosis, which can contribute to reducing the stigmatisation of mad people. The stigma associated with madness entails exclusion, prejudice, and systemic oppression. Symptoms of psychiatric diagnoses can be considered behaviour exceptional to the psychological norm. This deviant behaviour constitutes an outsider role which is also defining for the societal role of 'the artist', whose transgressions of the norm are expected and celebrated; the research proposes the term 'artistic exceptionalism' for this phenomenon. In this study, a set of performance artworks is analysed both within the frame of an art-theoretical interpretation and as if they were the basis of a psychiatric assessment. This critical comparison combines the perspective on 'mental illness' of mad studies with methods of interpretation used in performance studies. The research employs autotheory and artistic research, interweaving lived experience with scientific theory building through the double role of the author as both performance artist and survivor researcher. It is a distinctly personal and mad thought experiment. The research proposes three major categories of artistic strategies approaching madness: (a) confronting madness (processing and publicly addressing one's own experiences of mental distress through artistic creation), (b) creating critical conditions (conscious or unconscious, voluntary or involuntary creation of crisis situations in order to create an intense experience for a work of art), and (c) symptomatic strategies. This paper focuses on the last of the three categories: symptomatic strategies. These can be described as artistic methods with parallels to forms of coping with, and/or symptoms of, 'mental disorders.' They include, for example, feverish activity, a bleak worldview, additional perceptions, an urge for order, and the intensification of emotional experience. The proposed categories are to be understood as a spectrum of approaches that are not mutually exclusive. This research does not aim to diagnose or pathologise artists or their strategies; disease value is neither sought nor assumed. Neither does it intend to belittle psychological suffering by implying that it cannot be so bad if it is productive for artists. It excludes certain approaches that romanticise and/or exoticise mental distress, for example, the artistic portrayal of people in mental crisis (e.g., documentary-observational or exoticising depictions) or the deliberate and exaggerated imitation of their forms of expression and behaviour as 'authentic' (e.g., Art Brut). These are based on the othering of the Mad and thus perpetuate the social stigma to which they are subjected. By noting that the same deviant behaviour can be interpreted in opposite ways in different contexts, this research offers an alternative approach to madness beyond the confines of psychiatry. It challenges the authority of psychiatric diagnosis and exposes its social constructedness. Hereby, it aims to empower survivors and reduce the stigmatisation of madness.

Keywords: artistic research, mad studies, mental health, performance art, psychiatric stigma

Procedia PDF Downloads 59
159 Cultural Heritage, Urban Planning and the Smart City in the Indian Context

Authors: Paritosh Goel

Abstract:

The conservation of historic buildings and historic centres has in recent years become fully encompassed in the planning of built-up areas and their management in the face of climate change. In the Indian context, the restoration community's approach to integrated urban regeneration, and its strategic potential for smarter, more sustainable, and socially inclusive urban development, introduces the theme of sustainability for urban transformations in general (historical centres and otherwise). From this viewpoint, it envisages, as a primary objective, a real 'green, ecological or environmental' requalification of the city through interventions within the main categories of sustainability: mobility, energy efficiency, use of renewable energy sources, urban metabolism (waste, water, territory, etc.), and the natural environment. With this, the concept of a 'resilient city' is also introduced: a city that can adapt through progressive transformations to situations of change that may not be predictable, a behaviour that the historical city has always been able to express. Urban planning, on the other hand, has increasingly focused on analyses oriented towards the taxonomic description of social/economic and perceptive parameters. It is connected with human behaviour, mobility, and the characterization of the consumption of resources, in terms of quantity even before quality, to inform the city design process, which for ancient fabrics mainly affects the public space, also in its social dimension. An exact definition of the term 'smart city' is still essentially elusive, since we can attribute three dimensions to the term: a) that of a virtual city, evolved on the basis of digital networks and web networks; b) that of a physical construction determined by urban planning based on infrastructural innovation, which in the case of historic centres implies regeneration that stimulates and sometimes changes the existing fabric; c) that of a political and social/economic project guided by a dynamic process that provides for new behaviour and requirements of the city communities and orients the future planning of cities, also through participation in their management. This paper is preliminary research into the connections between these three dimensions applied to the specific case of the fabric of ancient cities, with the aim of obtaining a scientific theory and methodology to apply to the regeneration of Indian historical centres. The smart city scheme, if contextualized with the heritage of the city, can be an initiative that provides a transdisciplinary approach between various research networks (natural sciences, socio-economic sciences and humanities, technological disciplines, digital infrastructures), united in order to improve the design, livability, and understanding of the urban environment and high historical/cultural performance levels.

Keywords: historical cities regeneration, sustainable restoration, urban planning, smart cities, cultural heritage development strategies

Procedia PDF Downloads 262
158 Budgetary Performance Model for Managing Pavement Maintenance

Authors: Vivek Hokam, Vishrut Landge

Abstract:

An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made in selecting sections for maintenance: is it more important to repair a section in poor functional condition (e.g., an uncomfortable ride) or one in poor structural condition, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of the section. Existing maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions; the optimum decision is one that meets a specified management objective subject to various constraints and restrictions. Here the objective is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, realistic data must be put into the formulation. Each type of repair is quantified over a number of stretches, taking 1000 m as one stretch; the stretch considered in this study is 3750 m long. These quantities enter an objective function that maximizes the number of repairs in a stretch relative to quantity. The distresses observed in this stretch are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on a 1000 m stretch. The maintenance and rehabilitation measures currently followed are based on subjective judgments; hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, together with the economic benefits of road networks with respect to vehicle operating cost. The road network infrastructure should deliver the best results expected from available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
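A minimal sketch of the kind of linear program described above, maximizing the number of repaired units in a stretch under a budget cap (the costs, distress quantities, and budget are illustrative assumptions, not the paper's data):

```python
# Hedged LP sketch: choose repair quantities per distress type to maximize
# total repairs subject to a budget constraint.
from scipy.optimize import linprog

# Repair types from the abstract: potholes, surface cracks, rutting, ravelling.
cost = [500.0, 200.0, 800.0, 300.0]   # assumed cost per repaired unit
need = [12, 40, 6, 20]                # assumed measured distress quantities
budget = 15000.0

# linprog minimizes, so negate the objective to maximize total repairs.
res = linprog(c=[-1.0] * 4, A_ub=[cost], b_ub=[budget],
              bounds=[(0, n) for n in need])
print("repairs per distress type:", res.x.round(1))   # e.g. [2, 40, 0, 20]
```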

Keywords: budget, maintenance, deterioration, priority

Procedia PDF Downloads 178
157 Generating Ideas to Improve Road Intersections Using Design with Intent Approach

Authors: Omar Faruqe Hamim, M. Shamsul Hoque, Rich C. McIlroy, Katherine L. Plant, Neville A. Stanton

Abstract:

Road safety has become an alarming issue, especially in low- and middle-income developing countries. Traditional approaches lack out-of-the-box thinking, confining engineers to the usual techniques for making roads safer. A socio-technical approach has recently been introduced for improving road intersections: Design with Intent. This Design with Intent (DWI) approach aims to give practitioners a more nuanced approach to design and behavior, working with people, people's understanding, and the complexities of everyday human experience. It is a collection of design patterns, and a design and research approach, for exploring the interactions between design and people's behavior across products, services, and environments, both digital and physical. The approach shows how designing with people in behavior change can be applied to social and environmental problems, as well as commercially. It comprises a total of 101 cards across eight different lenses (architectural, error-proofing, interaction, ludic, perceptual, cognitive, Machiavellian, and security), each with its own distinct way of extracting ideas from participants. For this research, a three-legged accident-blackspot intersection on a national highway was chosen for the DWI workshop. Participants from varying fields, such as civil engineering, naval architecture and marine engineering, urban and regional planning, and sociology, actively participated in a day-long workshop. The participants were given a preamble on the accident scenario and a brief overview of the DWI approach. Design cards of varying lenses were distributed among the 10 participants, who were given an hour and a half to brainstorm and generate ideas to improve the safety of the selected intersection. After the brainstorming session, the participants went through roundtable discussions of the ideas they had come up with, and ideas were accepted or rejected by consensus of the forum. The generated ideas were then synthesized and agglomerated into an improvement scheme for the selected intersection. The most significant improvement ideas from the DWI approach were: color coding of traffic lanes for separate vehicles, channelizing the existing bare intersection, providing advance warning traffic signs, cautionary signs, and educational signs motivating road users to drive safely, and using textured surfaces with rumble strips before the approach to the intersection. The motive of this approach is to draw new ideas from road users rather than depend only on traditional schemes, to increase the efficiency and safety of roads, and to ensure road users' compliance, since these features are generated from the minds of the users themselves.

Keywords: design with intent, road safety, human experience, behavior

Procedia PDF Downloads 118
156 The Macrophage Migration Inhibitory Factor and Stem Cell Factor Levels in Serum of Adolescents and Young Adults with Mood Disorders: A Two-Year Follow-Up Study

Authors: Aleksandra Rajewska-Rager, Maria Skibinska, Monika Dmitrzak-Weglarz, Natalia Lepczynska, Pawel Kapelski, Joanna Pawlak, Joanna Hauser

Abstract:

Introduction: Inflammation and cytokines have emerged as a promising target in mood disorder research; however, there is still a very limited number of studies regarding inflammatory alterations among adolescents and young adults with mood disorders. Macrophage migration inhibitory factor (MIF) and stem cell factor (SCF) are pleiotropic cytokines that may play an important role in mood disorder pathophysiology. The aim of this study was to investigate serum levels of these factors in adolescents and young adults with mood disorders compared to healthy controls. Subjects: We enrolled 79 patients aged 12-24 years in a 2-year follow-up study with a primary diagnosis of mood disorders: bipolar disorder (BP) and unipolar disorder with BP spectrum. The study group included 23 males (mean age 19.08, SD 3.3) and 56 females (18.39, SD 3.28). The control group consisted of 35 persons: 7 males (20.43, SD 4.23) and 28 females (21.25, SD 2.11). Clinical diagnoses according to DSM-IV-TR criteria were assessed using the Kiddie Schedule for Affective Disorders and Schizophrenia, Present and Lifetime Version (K-SADS-PL) and the Structured Clinical Interview for the Diagnostic and Statistical Manual (SCID) in adolescents and young adults, respectively. Clinical assessment included evaluation of clinical factors and symptom severity (rated using the Hamilton Depression Rating Scale and Young Mania Rating Scale). Clinical and biological evaluations were made at control visits at baseline (week 0), euthymia (at month 3 or 6), and after 12 and 24 months. Methods: Serum protein concentrations were determined by the enzyme-linked immunosorbent assay (ELISA) method, using human MIF and SCF DuoSet ELISA kits. In the analyses, non-parametric tests were used: Mann-Whitney U test, Kruskal-Wallis ANOVA, Friedman's ANOVA, Wilcoxon signed-rank test, and Spearman correlation. We defined statistical significance as p < 0.05. Results: Comparing MIF and SCF levels between the acute episode of depression/hypo/mania at baseline and euthymia (at month 3 or 6), we did not find any statistically significant differences. At baseline, patients above 18 years of age had decreased MIF levels compared to patients younger than 18 years. MIF level at baseline positively correlated with age (p=0.004). Positive correlations of SCF level at months 3 and 6 with depression or mania occurrence at month 24 (p=0.03 and p=0.04, respectively) were detected. Strong correlations between MIF and SCF levels at baseline (p=0.0005) and month 3 (p=0.03) were observed. Discussion: Our results did not show any differences in MIF and SCF levels between the acute episode of depression/hypo/mania and euthymia in young patients. Further studies on larger groups are recommended. The grant was funded by the National Science Centre in Poland, no. 2011/03/D/NZ5/06146.
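For illustration, the two most used tests in the analysis (Mann-Whitney U for the patient/control comparison and Spearman correlation for age effects) can be run as follows; the values are synthetic and only the group sizes follow the abstract:

```python
# Synthetic-data sketch of the non-parametric tests named in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mif_patients = rng.lognormal(mean=1.0, sigma=0.5, size=79)  # serum MIF, patients
mif_controls = rng.lognormal(mean=1.1, sigma=0.5, size=35)  # serum MIF, controls

u, p = stats.mannwhitneyu(mif_patients, mif_controls)
print(f"Mann-Whitney U: U={u:.0f}, p={p:.3f} (significant if p < 0.05)")

age = rng.uniform(12, 24, size=79)
rho, p_rho = stats.spearmanr(age, mif_patients)
print(f"Spearman age vs MIF: rho={rho:.2f}, p={p_rho:.3f}")
```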

Keywords: cytokines, MIF, mood disorders, SCF

Procedia PDF Downloads 181
155 Effect of Methoxy and Polyene Additional Functionalized Groups on the Photocatalytic Properties of Polyene-Diphenylaniline Organic Chromophores for Solar Energy Applications

Authors: Ife Elegbeleye, Nnditshedzeni Eric, Regina Maphanga, Femi Elegbeleye, Femi Agunbiade

Abstract:

The global potential of other renewable energy sources, such as wind, hydroelectric, biomass, and geothermal, is estimated to be approximately 13%, with hydroelectricity constituting the larger percentage. Sunlight provides by far the largest of all carbon-neutral energy sources: more energy from sunlight strikes the Earth in one hour (4.3 × 10²⁰ J) than all the energy consumed on the planet in a year (4.1 × 10²⁰ J). Hence, solar energy remains the most abundant clean, renewable energy resource for mankind. Photovoltaic (PV) devices, such as silicon solar cells and dye-sensitized solar cells, are utilized for harnessing solar energy. Polyene-diphenylaniline organic molecules are an important set of molecules that have stirred much research interest as photosensitizers in TiO₂ semiconductor-based dye-sensitized solar cells (DSSCs). The advantages of organic dye molecules over metal-based complexes are a higher extinction coefficient, moderate cost, good environmental compatibility, and favorable electrochemical properties. Polyene-diphenylaniline organic dyes, with the basic donor-π-acceptor configuration, are affordable and easy to synthesize, and possess chemical structures that can easily be modified to optimize their photocatalytic and spectral properties. The enormous interest in polyene-diphenylaniline dyes as photosensitizers is due to their fascinating spectral properties, which include light absorption from the visible to the near-infrared region. In this work, a density functional theory approach via the GPAW, Avogadro, and ASE software packages was employed to study the effect of the methoxy functionalized group on the spectral properties of polyene-diphenylaniline dyes and their photon-absorbing characteristics in the visible to near-infrared region of the solar spectrum. Our results showed that the two phenyl-based complexes D5 and D7 exhibit maximum absorption peaks at 750 nm and 850 nm, while D9 and D11, with the methoxy group, show maximum absorption peaks at 800 nm and 900 nm, respectively. The highest absorption wavelengths are notable for D9 and D11, which contain additional polyene and methoxy groups. Also, the D9 and D11 chromophores with the methoxy group show lower energy gaps of 0.98 and 0.85, respectively, than the corresponding D5 and D7 dye complexes, with energy gaps of 1.32 and 1.08. The analysis of their electron injection kinetics ΔG_inject into the band gap of TiO₂ shows that D9 and D11 with the methoxy group have electron injection values of -2.070 and -2.030, compared with ΔG_inject values of -2.820 and -2.130 for the corresponding polyene-diphenylaniline complexes without the additional group, respectively. Our findings suggest that the addition of a functionalized group as an extension of the organic complexes results in higher light-harvesting efficiencies and a bathochromic shift of the absorption spectra to longer wavelengths, which suggests higher current densities and open-circuit voltages in DSSCs. The study suggests that the photocatalytic properties of organic chromophores/complexes with a donor-π-acceptor configuration can be enhanced by the addition of functionalized groups.
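The ΔG_inject quantity mentioned above is conventionally estimated from the excited-state oxidation potential of the dye relative to the TiO₂ conduction band. A hedged sketch of that standard estimate follows (the convention and the illustrative numbers are assumptions, not the authors' computed values):

```python
# Hedged sketch of the standard injection estimate from the DSSC literature:
#   dG_inject = E_ox(dye*) - E_CB(TiO2),  with  E_ox(dye*) = E_ox(dye) - E_00
def dg_inject(e_ox_dye, e_00, e_cb_tio2=-0.5):
    """Potentials in V vs NHE (TiO2 conduction band ~ -0.5 V is a common
    literature value); e_00 is the excitation energy in eV. A negative
    result indicates thermodynamically favorable electron injection."""
    e_ox_excited = e_ox_dye - e_00
    return e_ox_excited - e_cb_tio2

print(f"dG_inject = {dg_inject(1.0, 2.1):+.2f} eV")   # illustrative dye values
```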

Keywords: renewable energy resource, solar energy, dye sensitized solar cells, polyene-diphenylaniline organic chromophores

Procedia PDF Downloads 83
154 Manufacturing the Authenticity of Dokkaebi’s Visual Representation in Tourist Marketing

Authors: Mikyung Bak

Abstract:

The dokkaebi, a beloved icon of Korean culture, is represented as an elf, goblin, monster, dwarf, or similar creature in different media, such as animated shows, comics, soap operas, and movies. It is often described as a mythical creature with a horn or horns and long teeth, wearing tiger-skin pants or a grass skirt, and carrying a magic stick. Many Korean researchers agree on the similarity of the image of the Korean dokkaebi with that of the Japanese oni, a view regarded as negative from an anti-colonial or nationalistic standpoint. They cite the similarity between the two mythical creatures as evidence that Japanese colonialism persists in Korea. The debate on the originality of the dokkaebi's visual representation is an issue that must be addressed urgently. This research demonstrates through a diagram the plurality of interpretations of the dokkaebi's visual representations in what are considered 'authentic' images of the dokkaebi in Korean art and culture. The diagram presents the opinions of four major groups in the debate, namely scholars of Korean literature and folklore, art historians, authors, and artists. It also shows the creation of new dokkaebi visual representations in popular media, including those influenced by the debate. The diagram further proves that the dokkaebi's representations vary, including typical persons or invisible characters found in Korean literature, original Korean folk characters in traditional art, and even universal spirit characters. They are also visually represented by completely new creatures as well as oni-based mythical beings and the actual oni itself. Earlier dokkaebi representations were driven by the creation of a national ideology or national cultural paradigm and thus were more uniform and protected. In contrast, more recent representations are influenced by the Korean industrial strategy of 'cultural economics,' which is concerned with the international rather than the domestic market. This recent Korean cultural strategy emphasizes diversity and commonality with global culture rather than originality and locality, and employs traditional cultural resources to construct a global image. Consequently, the dokkaebi's recent representations have become more common and diverse, thereby incorporating even the oni's characteristics and rendering the grounds of the debate irrelevant. The dokkaebi has recently been used for tourist marketing, particularly in revitalizing interest in regions considered the cradle of various traditional dokkaebi tales. These campaigns include the Jeju-do Dokkaebi Park, Koksung Dokkaebi Land, and the Taebaek and Sokri-san Dokkaebi Festivals, in which almost all dokkaebi characters are identical to the Japanese oni. However, the pursuit of the dokkaebi's authentic visual representation is less interesting and fruitful than the appreciation of the entire spectrum of dokkaebi images that have been created. Thus, scholars and stakeholders must not exclude the range of possibilities within the visual culture; the same sentiment applies to traditional art and craft. This study aims to contribute to a new visualization of the dokkaebi that embraces the possibilities of both folk craft and art, which continue to be uncovered by diverse and careful researchers in a still-developing field.

Keywords: Dokkaebi, post-colonial period, representation, tourist marketing

Procedia PDF Downloads 256
153 Evolving Credit Scoring Models Using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e., "GOOD" or "BAD") stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence among financial institution databases, with the majority classified as "GOOD" clients (clients who respect the loan return calendar) alongside a small percentage of "BAD" clients. But it is the "BAD" clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted loss for loan providers. We add to this the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the title implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties that do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among them the well-known Australian and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
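An analogous sketch in Python of the tree representation described above (the paper itself uses C# LINQ expression trees; this only mirrors the structure it describes: operator nodes, variable/constant leaves, and the pre-order flattening used by mutation and crossover):

```python
# Toy expression tree for a credit-score formula (illustrative structure only).
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(node, client):
    """node: ('+', left, right) | ('var', name) | ('const', value)"""
    tag = node[0]
    if tag == 'var':
        return client[node[1]]
    if tag == 'const':
        return node[1]
    return OPS[tag](evaluate(node[1], client), evaluate(node[2], client))

def preorder(node, out=None):
    """Flatten the tree pre-order, as done before mutation/crossover."""
    out = [] if out is None else out
    out.append(node)
    if node[0] in OPS:
        preorder(node[1], out)
        preorder(node[2], out)
    return out

# score = age * 0.1 + loan_duration * (-0.5)   (toy formula, assumed weights)
tree = ('+', ('*', ('var', 'age'), ('const', 0.1)),
             ('*', ('var', 'loan_duration'), ('const', -0.5)))
print(evaluate(tree, {'age': 40, 'loan_duration': 24}))   # -> -8.0
```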

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 99
152 Communicating Nuclear Energy in Southeast Asia: A Cross-Country Comparison of Communication Channels and Source Credibility

Authors: Shirley S. Ho, Alisius X. L. D. Leong, Jiemin Looi, Agnes S. F. Chuah

Abstract:

Nuclear energy is a contentious technology that has attracted much public debate over the years. The prominence of nuclear energy in Southeast Asia (SEA) has burgeoned due to a surge of interest and plans for nuclear development in the region. Understanding public perceptions of nuclear energy in SEA is pertinent given the limited number of studies conducted. In particular, five SEA nations, Singapore, Malaysia, Indonesia, Thailand, and Vietnam, are of immediate interest, as they are among the most economically developed or developing nations in the SEA region. High energy demands from economic development in these nations have led to considerations of adopting nuclear energy as an alternative energy source. This study explores whether differences in the nuclear developmental stage of each country affect public perceptions of nuclear energy. In addition, it seeks to establish the type and importance of communication credibility as a judgement heuristic in facilitating message acceptance across these five countries. The credibility of a communication channel is a crucial component influencing public perception, acceptance, and attitudes towards nuclear energy. Aside from simply identifying the frequently used communication channels, it is of greater significance to understand public perception of source and media credibility. Given the lack of studies conducted in SEA, this exploratory study adopts a qualitative approach to elicit a spectrum of opinions and insights regarding the key communication aspects influencing public perceptions of nuclear energy. Specifically, the capitals of the abovementioned countries (Kuala Lumpur, Bangkok, and Hanoi) were selected, along with Singapore, an island city-state, and Yogyakarta, located on the most populous island of Indonesia. Focus group discussions were utilized as the mode of data collection to elicit the wide variety of viewpoints held by the participants, which is well suited to exploratory research. In total, 156 participants took part in the 13 focus group discussions. The participants were local citizens or permanent residents aged between 18 and 69 years. Each focus group consisted of 8-10 participants, both male and female. The transcripts from each focus group were analysed using NVivo 10, and the text was organised according to the emerging themes or categories. The general public in all the countries was familiar with nuclear energy but had no in-depth knowledge of it. Four dimensions of nuclear energy communication were identified based on the focus group discussions: communication channels, perceived credibility of sources, circumstances for discussion, and discussion style. The first dimension, communication channels, refers to the medium through which participants receive information about nuclear energy. Four types of media emerged from the discussions: online and social media, broadcast media, print media, and word-of-mouth (WOM). Collectively, across all five countries, participants were found to engage in different types of knowledge acquisition and information-seeking behavior depending on the communication channels used.

Keywords: nuclear energy, public perception, communication, Southeast Asia, source credibility

Procedia PDF Downloads 291
151 Development and Application of Humidity-Responsive Controlled-Release Active Packaging Based on Electrospinning Nanofibers and In Situ Growth Polymeric Film in Food Preservation

Authors: Jin Yue

Abstract:

Fresh produce, especially fruits, vegetables, meats, and aquatic products, has a limited shelf life and is highly susceptible to deterioration. Essential oils (EOs) extracted from plants have excellent antioxidant and broad-spectrum antibacterial activities, and they can serve as natural food preservatives. But EOs are volatile, water-insoluble, pungent, and easily decompose under light and heat. Many approaches have been developed to improve the solubility and stability of EOs, such as polymeric films, coatings, nanoparticles, nano-emulsions, and nanofibers. The construction of active packaging films that can incorporate EOs with high loading efficiency and release them in a controlled manner has received great attention, yet it is still difficult to achieve accurate release of antibacterial compounds at specific target locations in active packaging. In this research, a relative-humidity-responsive packaging material was designed, employing the electrospinning technique to fabricate a nanofibrous film loaded with 4-terpineol/β-cyclodextrin inclusion complexes (4-TA/β-CD ICs). Functioning as an innovative food packaging material, the film demonstrated commendable attributes, including a pleasing appearance, thermal stability, mechanical properties, and effective barrier properties. The incorporation of inclusion complexes greatly enhanced the antioxidant and antibacterial activity of the film, particularly against Shewanella putrefaciens, with an inhibitory efficiency of up to 65%. Crucially, the film realized controlled release of 4-TA under 98% relative humidity conditions: water molecules induce plasticization of the polymers, swelling of the polymer chains, and destruction of hydrogen bonds within the cyclodextrin inclusion complex. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C. To further improve the loading efficiency and long-acting release of EOs, we synthesized γ-cyclodextrin metal-organic frameworks (γ-CD-MOFs) and then efficiently anchored them on a chitosan-cellulose (CS-CEL) composite film by an in situ growth method for the controlled release of carvacrol (CAR). We found that the growth efficiency of γ-CD-MOFs was highest when the concentration of the CEL dispersion was 5%. Anchoring γ-CD-MOFs on the CS-CEL film significantly improved its surface area from 1.0294 m²/g to 43.3458 m²/g. Molecular docking and ¹H NMR spectra indicated that γ-CD-MOF has a better complexing and stabilizing ability for CAR molecules than γ-CD. In addition, the release of CAR reached 99.71 ± 0.22% on the 10th day, while under 22% RH the release of CAR plateaued at 14.71 ± 4.46%. The inhibition rate of this film against E. coli, S. aureus, and B. cinerea was more than 99%, and it extended the shelf life of strawberries to 7 days. By incorporating the merits of natural biopolymers and MOFs, this active packaging offers great potential as a substitute for traditional packaging materials.

Keywords: active packaging, antibacterial activity, controlled release, essential oils, food quality control

Procedia PDF Downloads 41
150 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of a Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around the aggregates. Incorporating these complex structures and material properties in numerical simulation would lead to better understanding and design of concrete. In this work, mesoscale models of concrete have been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. In total, 49 slices were obtained from a 150 mm cube, corresponding to a slice interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to determine its strength. The image processing and extraction of mortar and aggregates from the CT slices are performed by programming in Python. A digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by thresholding the grey values between 0 and 255. A pixel matrix is created for modelling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on load carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to determine the shape and content of aggregates in concrete. This may further be compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
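To make the slice-classification step concrete, the following Python sketch maps one CT slice onto the 0-9 phase scale described above; the grey-level cut-offs are hypothetical placeholders, since the calibrated thresholds are not reported in the abstract.

```python
import numpy as np
from PIL import Image

# Hypothetical grey-level cut-offs: the study calibrated its own
# thresholds against the scanner histogram, which are not reported.
VOID_MAX, ITZ_MAX, MORTAR_MAX = 40, 90, 180

def classify_slice(path):
    """Map one CT slice onto the 0-9 phase scale used for meshing:
    0 = void, 1-3 = aggregate/mortar boundary (ITZ), 4-6 = mortar,
    7-9 = aggregate, graded by grey value as a proxy for strength."""
    grey = np.asarray(Image.open(path).convert("L")).astype(np.int32)
    phase = np.zeros_like(grey)                       # 0 stays for voids
    band = (grey > VOID_MAX) & (grey <= ITZ_MAX)      # boundary -> 1-3
    phase[band] = 1 + 2 * (grey[band] - VOID_MAX) // (ITZ_MAX - VOID_MAX)
    band = (grey > ITZ_MAX) & (grey <= MORTAR_MAX)    # mortar -> 4-6
    phase[band] = 4 + 2 * (grey[band] - ITZ_MAX) // (MORTAR_MAX - ITZ_MAX)
    band = grey > MORTAR_MAX                          # aggregate -> 7-9
    phase[band] = 7 + 2 * (grey[band] - MORTAR_MAX) // (255 - MORTAR_MAX)
    return phase
```

The resulting integer matrix can then be handed to the mesh-generation step, one element per pixel or per pixel block.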

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 223
149 Prevalence and Molecular Characterization of Extended-Spectrum β-Lactamase and Carbapenemase-Producing Enterobacterales from Tunisian Seafood

Authors: Mehdi Soula, Yosra Mani, Estelle Saras, Antoine Drapeau, Raoudha Grami, Mahjoub Aouni, Jean-Yves Madec, Marisa Haenni, Wejdene Mansour

Abstract:

Multi-resistance to antibiotics in Gram-negative bacilli, and particularly in Enterobacteriaceae, has become frequent in hospitals in Tunisia. However, data on antibiotic-resistant bacteria in aquatic products are scarce. The aims of this study were to estimate the proportion of ESBL- and carbapenemase-producing Enterobacterales in seafood (clams and fish) in Tunisia and to molecularly characterize the collected isolates. Two types of seafood were sampled in unrelated markets in four different regions of Tunisia (641 pieces of farmed fish and 1075 Mediterranean clams divided into 215 pools of five pieces each). Once purchased, all samples were incubated in tubes containing peptone salt broth for 24 to 48 h at 37°C. After incubation, overnight cultures were isolated on selective MacConkey agar plates supplemented with either imipenem or cefotaxime, identified using API20E test strips (bioMérieux, Marcy-l’Étoile, France) and confirmed by MALDI-TOF MS. Antimicrobial susceptibility was determined by the disk diffusion method on Mueller-Hinton agar plates, and results were interpreted according to CA-SFM 2021. ESBL-producing Enterobacterales were detected using the Double Disc Synergy Test (DDST). Carbapenem resistance was screened using an ertapenem disk and confirmed using the ROSCO KPC/MBL and OXA-48 Confirm Kits (ROSCO Diagnostica, Taastrup, Denmark). DNA was extracted using a NucleoSpin Microbial DNA extraction kit (Macherey-Nagel, Hoerdt, France), according to the manufacturer’s instructions. Resistance genes were determined using the CGE online tools. The replicon content and plasmid formulae were identified from the WGS data using PlasmidFinder 2.0.1 and pMLST 2.0. From farmed fish, nine ESBL-producing strains (9/641, 1.4%) were isolated and identified as E. coli (n=6) and K. pneumoniae (n=3). Among the 215 pools of five clams analyzed, 18 ESBL-producing isolates were identified, including 14 E. coli and 4 K. pneumoniae, corresponding to a low isolation rate of 1.6% (18/1075) in clam pools. In fish, the ESBL phenotype was due to the presence of the blaCTX-M-15 gene in all nine isolates, but no carbapenemase gene was identified. In clams, the predominant ESBL determinant was blaCTX-M-1 (n=6/18), and carbapenemase genes (blaNDM-1, blaOXA-48) were detected in only three K. pneumoniae isolates. Replicon typing of the strains carrying the ESBL and carbapenemase genes revealed that IncF was the major plasmid type carrying ESBL genes (42.3%, n=11/26). In all, our results suggest that seafood can be a reservoir of multidrug-resistant bacteria, most probably of human origin but also favored by antibiotic selection pressure. Our findings raise concerns that seafood bought for consumption may serve as a potential reservoir of AMR genes and pose a serious threat to public health.

Keywords: ESBL, carbapenemase, enterobacterales, tunisian seafood

Procedia PDF Downloads 88
148 Comparison of Two Software Packages, GSTARS4 and HEC-6, for Predicting the Sedimentation Amount in Dam Reservoirs and Estimating Their Useful Lifetime in the South of Iran

Authors: Fatemeh Faramarzi, Hosein Mahjoob

Abstract:

Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediments carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir, which threatens sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their useful lifetime, but recently mathematical and computational models have become widely used as a suitable tool in reservoir sedimentation studies. These models usually solve the governing equations using the finite element method. This study compares the results from two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez Dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land and plays a major role in flood control in the region. The input data, including geometry, hydraulic and sedimentary data, cover 1955 to 2003 on a daily basis. To predict future river discharge, the time series data were assumed to repeat after 47 years. The result obtained was very satisfactory in the delta region, where the output from GSTARS4 was almost identical to the hydrographic profile surveyed in 2003. In the Dez reservoir, however, owing to its length (65 km) and large storage volume, vertical currents are dominant, making the calculations by the above-mentioned method inaccurate there. To solve this problem, we used the empirical reduction method to calculate the sedimentation in the downstream area, which yielded very good results. Thus, we demonstrated that by combining these two methods a very suitable model of sedimentation in the Dez Dam for the study period can be obtained, and the study showed that the outputs of both methods agree.
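As an illustration of the sediment-continuity core that such packages solve, the Python sketch below advances a 1-D bed profile with an explicit Exner update. It is a toy of the bed-evolution equation only, not the GSTARS4 or HEC-6 implementation, and every number in the example is an assumption.

```python
import numpy as np

def exner_bed_update(eta, qs, dx, dt, porosity=0.4):
    """One explicit step of the 1-D Exner (sediment continuity) equation,
    d(eta)/dt = -1/(1 - porosity) * d(qs)/dx, the bed-evolution core that
    reservoir models couple with flow and sediment-transport closures.
    eta: bed elevation [m]; qs: volumetric sediment flux per unit width."""
    return eta - dt * np.gradient(qs, dx) / (1.0 - porosity)

# Illustrative use: a flux decaying along the reach deposits sediment,
# strongest near the reservoir entrance (the delta region).
x = np.linspace(0.0, 65_000.0, 651)        # 65 km reach, 100 m spacing
eta = np.zeros_like(x)                     # initially flat bed
qs = 1e-4 * np.exp(-x / 20_000.0)          # assumed flux profile [m^2/s]
for _ in range(365):                       # one year of daily steps
    eta = exner_bed_update(eta, qs, dx=100.0, dt=86_400.0)
print(eta.max())                           # deposited thickness [m]
```

The full packages additionally update the hydraulics and the transport capacity at every step; this sketch holds the flux fixed for clarity.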

Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6

Procedia PDF Downloads 297
147 Three-Stage Least Squares Models of Station-Level Subway Ridership: Incorporating an Analysis of Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people’s quality of life. To capture these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses; as a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the central objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding the statistical approach to estimating subway ridership at a station level, many previous studies relied on Ordinary Least Squares (OLS) regression, but few considered the endogeneity issues that may arise in subway ridership prediction models. This study focused both on discovering the impacts of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership, ultimately contributing to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul, South Korea, including 243 subway stations and 10,120 bus stops, with a temporal scope of twenty-four hours in one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory, and the results were compared to those for the subway-only network. Then, a non-recursive Three-Stage Least Squares (3SLS) approach was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects: among the centrality measures, the elasticity of subway ridership was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity with respect to bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership were statistically significant in the corresponding OLS regression models. The three-stage least squares model therefore appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
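As a schematic of the estimation step, the Python sketch below sets up a two-equation system in which bus and subway ridership enter each other's equations as endogenous regressors. The linearmodels package and all variable names are our illustrative assumptions; the abstract does not name the software or the exact specification.

```python
import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS

# Synthetic station-level data; every column name here is illustrative,
# not a variable actually used in the study.
rng = np.random.default_rng(0)
n = 243
df = pd.DataFrame({
    "closeness": rng.normal(size=n),
    "betweenness": rng.normal(size=n),
    "retail_floor_area": rng.normal(size=n),
    "bus_stop_density": rng.normal(size=n),
    "bus_headway": rng.normal(size=n),
    "road_density": rng.normal(size=n),
})
# Toy structural equations with a feedback loop -> endogeneity
df["bus_riders"] = 2 + df.bus_stop_density - df.bus_headway + rng.normal(size=n)
df["subway_riders"] = (1 + 0.5 * df.bus_riders + df.closeness
                       + 2 * df.betweenness + rng.normal(size=n))
df["bus_riders"] = df["bus_riders"] + 0.3 * df["subway_riders"]

# Each ridership variable enters the other equation as an endogenous
# regressor (in brackets), instrumented by excluded exogenous variables.
equations = {
    "subway": "subway_riders ~ 1 + closeness + betweenness + retail_floor_area"
              " + [bus_riders ~ bus_stop_density + bus_headway]",
    "bus": "bus_riders ~ 1 + road_density + bus_stop_density"
           " + [subway_riders ~ closeness + betweenness]",
}
res = IV3SLS.from_formula(equations, df).fit()
print(res)
```

The bracketed terms are what distinguish 3SLS from equation-by-equation OLS: each endogenous regressor is first projected onto its instruments before the system is estimated jointly.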

Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership

Procedia PDF Downloads 154
146 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov Telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov Telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface, which focuses the radiation onto a camera composed of an array of high-speed photosensors that are highly sensitive to radio-spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost and power consumption. Each pixel incorporates a photosensor able to discriminate single photons and the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking, which demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, originally designed for equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the greatest disturbances, was also performed. Obtaining the CE mark requires knowing which harmonized standards apply and how the specific requirements are elaborated. For this type of large installation, the tests to be carried out need to be adapted and developed. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 88
145 Numerical Investigation of Unstable Pressure Fluctuation Behavior in a Side Channel Pump

Authors: Desmond Appiah, Fan Zhang, Shouqi Yuan, Wei Xueyuan, Stephen N. Asomani

Abstract:

The side channel pump has distinctive hydraulic performance characteristics compared with other vane pumps because it generates high pressure heads in only one impeller revolution. Hence, its utilization is soaring in petrochemical and food processing applications and in automotive and aerospace fuel pumping, where high heads are required at low flows. The side channel pump is characterized by unstable flow: after fluid flows into an impeller passage, it moves into the side channel, comes back to the impeller again, and then moves on to the next circulation, so the flow leaves the side channel pump following a helical path. The pressure fluctuation exhibited by this flow contributes greatly to the unwanted noise and vibration associated with it. In this paper, a side channel pump prototype was examined thoroughly through numerical calculations based on the SST k-ω turbulence model to ascertain its pressure fluctuation behavior. The pressure fluctuation intensity of the 3D unstable flow dynamics was carefully investigated under three working conditions: 0.8QBEP, 1.0QBEP and 1.2QBEP. The results showed that the pressure fluctuation distribution around the pressure side of the blade is greater than on the suction side at the impeller and side channel interface (z=0) for all three operating conditions. The part-load condition 0.8QBEP recorded the highest pressure fluctuation distribution because of the high circulation velocity, which causes an intense exchange flow between the impeller and the side channel. Time and frequency domain spectra of the pressure fluctuation patterns in the impeller and the side channel were also analyzed at the best efficiency point, QBEP, using the solution from the numerical calculations. It was observed from the time-domain analysis that the pressure fluctuation in the impeller flow passage increased steadily until the flow reached the interrupter, which separates the low pressure at the inflow from the high pressure at the outflow. The pressure fluctuation amplitudes in the frequency domain spectrum at the different monitoring points depicted a gently decreasing trend, which was common to all operating conditions. The frequency domain also revealed that the main excitation frequencies occurred at 600 Hz, 1200 Hz, and 1800 Hz and continued at integer multiples of the rotating shaft frequency. Also, the mass flow exchange plots indicated that the side channel pump is characterized by many vortex flows: operating conditions 0.8QBEP and 1.0QBEP depicted fewer, similar vortex flows, while 1.2QBEP recorded many vortex flows around the inflow, middle and outflow regions. The results of the numerical calculations were finally verified experimentally. The performance characteristic curves from the simulated results showed that the 0.8QBEP working condition recorded a head increase of 43.03% and an efficiency decrease of 6.73% compared to 1.0QBEP. It can be concluded that, for industrial applications where high heads are mostly required, the side channel pump can be designed to operate at part-load conditions. This paper can serve as a source of information for optimizing reliable performance and widening the applications of side channel pumps.
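The frequency-domain step is straightforward to illustrate: the Python sketch below turns a monitoring-point pressure trace into an amplitude spectrum of the kind used to pick out the 600/1200/1800 Hz peaks. The synthetic signal and sampling rate are assumptions, not the paper's data.

```python
import numpy as np

def pressure_spectrum(p, fs):
    """Single-sided amplitude spectrum of a monitoring-point pressure
    trace, used to locate the dominant excitation frequencies."""
    p = np.asarray(p, dtype=float)
    p -= p.mean()                          # keep the fluctuating part only
    amp = 2.0 / p.size * np.abs(np.fft.rfft(p))
    freq = np.fft.rfftfreq(p.size, d=1.0 / fs)
    return freq, amp

# Synthetic trace mimicking the reported 600/1200/1800 Hz peaks; the
# amplitudes, noise level and sampling rate are assumptions.
fs = 48_000.0
t = np.arange(0, 0.5, 1.0 / fs)
rng = np.random.default_rng(1)
p = (1.0 * np.sin(2 * np.pi * 600 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.25 * np.sin(2 * np.pi * 1800 * t) + 0.05 * rng.standard_normal(t.size))
freq, amp = pressure_spectrum(p, fs)
print(freq[amp.argmax()])                  # ~600.0 Hz
```

Normalizing the peak frequencies by the shaft rotation rate is what identifies them as integer multiples of the rotating frequency, as reported above.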

Keywords: exchanged flow, pressure fluctuation, numerical simulation, side channel pump

Procedia PDF Downloads 112
144 Thermo-Mechanical Processing Scheme to Obtain Micro-Duplex Structure Favoring Superplasticity in an As-Cast and Homogenized Medium Alloyed Nickel Base Superalloy

Authors: K. Sahithya, I. Balasundar, Pritapant, T. Raghua

Abstract:

Ni-based superalloy with a nominal composition of Ni-14% Cr-11% Co-5.8% Mo-2.4% Ti-2.4% Nb-2.8% Al-0.26% Fe-0.032% Si-0.069% C (all in wt%) is used for turbine discs in a variety of aero engines. As with any other superalloy, the primary processing of the as-cast material poses a major challenge due to its complex alloy chemistry. The challenge was circumvented by characterizing the different phases present in the material, optimizing the homogenization treatment, and identifying a suitable thermomechanical processing window using dynamic materials modeling. The as-cast material was subjected to homogenization at 1200°C for a soaking period of 8 hours and quenched using different media. Water quenching (WQ) after homogenization resulted in very fine spherical γꞌ precipitates of 30-50 nm, whereas furnace cooling (FC) after homogenization resulted in a bimodal distribution of precipitates (primary gamma prime of 300 nm and secondary gamma prime of 5-10 nm). MC-type primary carbides, which are stable up to the melting point of the material, were found in both WQ and FC samples. The deformation behaviour of both materials below (1000-1100°C) and above (1100-1175°C) the gamma prime solvus was evaluated by subjecting them to a series of compression tests at different constant true strain rates (0.0001/s to 1/s). A detailed examination of the precipitate-dislocation interaction mechanisms carried out using TEM revealed precipitate shearing and Orowan looping as the mechanisms governing deformation in WQ and FC, respectively. The incoherent/semi-coherent gamma prime precipitates in the FC material facilitate better workability, whereas the coherent precipitates in the WQ material contribute to its higher resistance to deformation. Both materials exhibited discontinuous dynamic recrystallization (DDRX) above the gamma prime solvus temperature. The recrystallization kinetics was slower in the WQ material, where very fine grain boundary carbides (≤ 300 nm) retarded it, while coarse carbides (1-5 µm) facilitated particle-stimulated nucleation in the FC material. The FC material was cogged (primary hot working) at 1120°C and 0.03/s, resulting in significant grain refinement, i.e., from 3000 μm to 100 μm. The primary processed material was subsequently subjected to intensive thermomechanical deformation, reducing the temperature by 50°C in each processing step, with intermittent heterogenization treatments at selected temperatures aimed at simultaneous coarsening of the gamma prime precipitates and refinement of the gamma matrix grains. This heterogeneous annealing treatment resulted in gamma grains of 10 μm and gamma prime precipitates of 1-2 μm. Further thermomechanical processing of the material was carried out at 1025°C to increase the homogeneity of the obtained micro-duplex structure.

Keywords: superalloys, dynamic material modeling, nickel alloys, dynamic recrystallization, superplasticity

Procedia PDF Downloads 105
143 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for the electrical power converters in a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The output power of this particular OWC's unidirectional air turbine-generator arrives as large discrete power pulses; therefore, the system requires power conditioning prior to grid integration. This is achieved by using a back-to-back power converter with an energy storage system: a Li-ion battery energy storage is connected to the dc-link of the back-to-back converter through a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using the finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions; the inverter controller maintains the output power to the grid in adherence to grid codes; and the bidirectional dc-dc converter controller holds the dc-link voltage at its reference value. The software modeling of the OWC system and the FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC), while the hardware development and experimental validation are being carried out at the AMC electronics laboratory. The designed FCS-MPC controllers for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60); the power module consists of a 3-phase inverter bridge with 600 V insulated-gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results, and the FCS-MPC is validated by comparing these experimental results to MATLAB/Simulink simulation results in similar scenarios. The results show that, under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with Li-ion battery energy storage.
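The enumerate-predict-select logic of FCS-MPC can be sketched in a few lines. The Python example below applies it to a generic two-level converter feeding an RL load; it is a minimal sketch under assumed parameters, not the authors' OWC controllers, which apply the same logic to speed, grid-power and dc-link-voltage regulation.

```python
import numpy as np

# A minimal one-step FCS-MPC sketch for a two-level three-phase converter
# feeding an RL load. Parameters and the simple current-tracking cost are
# illustrative assumptions.
VDC, R, L, TS = 400.0, 0.5, 5e-3, 50e-6

# The eight switching states of a two-level bridge
SWITCH_STATES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def phase_voltages(state):
    """Line-to-neutral voltages for an isolated-neutral load."""
    s = np.array(state, dtype=float)
    return VDC * (s - s.mean())

def best_state(i_meas, i_ref):
    """Enumerate the finite control set, predict the next-sample current
    with a forward-Euler RL model, and return the state minimizing the
    squared current-tracking error (the FCS-MPC selection step)."""
    cost, best = np.inf, None
    for state in SWITCH_STATES:
        v = phase_voltages(state)
        i_pred = i_meas + TS / L * (v - R * i_meas)   # Euler prediction
        j = float(np.sum((i_ref - i_pred) ** 2))      # cost function
        if j < cost:
            cost, best = j, state
    return best

print(best_state(np.zeros(3), np.array([10.0, -5.0, -5.0])))
```

Because only a finite set of switching states is evaluated each sampling period, the optimization is an exhaustive search with no modulator, which is what makes the scheme attractive on low-cost microcontrollers such as those used here.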

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

Procedia PDF Downloads 96
142 Automated Evaluation Approach for Time-Dependent Question Answering Pairs in a Web Crawler-Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge in order to find an exact and correct answer to the user's question in the form of a number, a noun, a short phrase, or a brief piece of text. Analysis of the question, searching for relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web, where the value of K can be calibrated as a trade-off between time and accuracy. This is followed by passage ranking, using a model trained on the MS MARCO dataset of 500K queries, to extract the most relevant text passages and thereby shorten the lengthy documents. Further, a QA model extracts the answers from the shortened documents based on the query and returns the top 3 answers. In evaluating such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions; moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect by the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Any such dataset proves inefficient with respect to questions that have time-varying answers. For illustration, if the query is 'Where will the next Olympics be held?', the gold answer given in the GNQ dataset is 'Tokyo'. Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, that answer was absolutely correct at the time; but if the same question is asked in 2022, the answer is 'Paris, 2024'. Consequently, any evaluation based on the GNQ dataset will mark such answers incorrect. These erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set comprising 100 QA pairs, automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
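The proposed time-aware metric can be sketched as follows: treat the gold answer as a timeline and score the top-n predictions against the answer valid at evaluation time. The Python toy below illustrates the idea; the timeline representation and matching rule are our assumptions, not the paper's exact metric.

```python
from datetime import date

# Toy version of a time-aware check: the gold answer is a timeline of
# (valid_from, answer) pairs, and the top-n predictions count as correct
# if any of them contains the answer valid at evaluation time.
# Timeline entries and normalization are illustrative assumptions.
OLYMPICS_TIMELINE = [
    (date(2013, 9, 7), "tokyo"),   # Tokyo awarded the next Summer Games
    (date(2021, 8, 8), "paris"),   # Paris next once Tokyo 2020 closed
]

def answer_valid_at(timeline, when):
    valid = [answer for starts, answer in timeline if starts <= when]
    return valid[-1] if valid else None

def time_aware_match(top_n_predictions, timeline, when=None):
    when = when or date.today()
    gold = answer_valid_at(timeline, when)
    if gold is None:
        return False
    return any(gold in pred.lower() for pred in top_n_predictions)

print(time_aware_match(["Paris, 2024", "Los Angeles"],
                       OLYMPICS_TIMELINE, when=date(2022, 6, 1)))  # True
```

Evaluating against the timestamp rather than a frozen gold string is precisely what prevents the 'Tokyo' versus 'Paris, 2024' mismatch described above.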

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 82
141 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation

Authors: Jessica Cecchinelli, Amer Ali

Abstract:

The continuing growth in road traffic, and its resultant impact on pollution levels and safety, especially in urban areas, has led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, and the corresponding theoretical calculations and analysis of noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major 'Transport for London (TfL)' routes. The measurements, which included the key noise levels and scales on residential streets and main roads, were conducted during normal and rush hours on weekdays and weekends. The theoretical calculations were done according to the UK procedure 'Calculation of Road Traffic Noise 1988' (CRTN), with conversion to the European Lday, Levening, Lnight, and Lden and other important levels. The current study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other boroughs of central London. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study, and relevant data and descriptions of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the noise measurements in order to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the reduction in speed reduced noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone closely match. Among the other findings, the surveyed residents and workers supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not just reduce the number of serious accidents but also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with critical comparative analysis and relevant conclusions, are reported in the full version of the paper.
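For readers unfamiliar with CRTN, the Python sketch below reproduces the shape of its basic 18-hour L10 estimate, a flow term plus a combined speed and heavy-vehicle correction, as given in the 1988 procedure; the traffic inputs in the example are assumptions, and the many further corrections in the full method are omitted.

```python
import math

def crtn_basic_l10_18h(q18, v_kmh, p_heavy):
    """Basic 18-hour L10 at 10 m following CRTN 1988: a flow term plus a
    combined mean-speed / heavy-vehicle correction. q18 is the 18-hour
    vehicle flow, v_kmh the mean speed, p_heavy the % heavy vehicles.
    Sketch only: the full procedure adds gradient, road-surface,
    distance and screening corrections, which are omitted here."""
    basic = 29.1 + 10.0 * math.log10(q18)
    correction = (33.0 * math.log10(v_kmh + 40.0 + 500.0 / v_kmh)
                  + 10.0 * math.log10(1.0 + 5.0 * p_heavy / v_kmh) - 68.8)
    return basic + correction

# Illustrative 30 mph vs 20 mph comparison (48.3 vs 32.2 km/h) at an
# assumed 18-hour flow of 12,000 vehicles with 5% heavy vehicles.
for v in (48.3, 32.2):
    print(v, round(crtn_basic_l10_18h(12_000, v, 5.0), 1))
```

Under these assumed inputs the speed term alone predicts a reduction of roughly 1 dB(A), which is the order of change the field measurements are designed to detect.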

Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in london, survey of people satisfaction

Procedia PDF Downloads 409
140 Detection of Triclosan in Water Based on Nanostructured Thin Films

Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo

Abstract:

Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products such as soaps, detergents, hand cleansers, cosmetics, and toothpastes. However, it is considered to disrupt the endocrine system, for instance thyroid hormone homeostasis and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that the environmental and food safety problems it poses will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure, and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives, raising the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in a complex aqueous matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules with a high affinity for triclosan and a sensitivity that ensures the detection of concentrations of at least nanomolar order. Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering onto glass supports already covered by gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was possible to plot the logarithm of resistance versus the logarithm of concentration and to fit the data points with a decreasing straight line with a slope of 0.022 ± 0.006 in magnitude, which corresponds to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM): the minimum change that can be measured with resolution is 0.006, so ΔlogC = 0.006/0.022 ≈ 0.273 and therefore C − Cs ≈ 0.9 pM, i.e., a sensor resolution of about 0.9 pM at the smallest concentration used. This attained detection limit is lower than the values reported in the literature.
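The resolution estimate at the end of the abstract is a short calculation, reproduced in the Python sketch below from the reported slope (0.022) and minimum resolvable change (0.006); the calibration data in the sketch are synthetic stand-ins.

```python
import numpy as np

# Reproducing the resolution estimate from the abstract's numbers: fit
# log10(R) against log10(C), then convert the smallest resolvable change
# in log10(R) into a concentration step near Cs = 1 pM. The (C, R) data
# below are synthetic stand-ins for the measured calibration points.
conc = np.array([1e-12, 1e-11, 1e-10, 1e-9, 1e-8, 1e-7, 1e-6])  # mol/L
resist = 1.0e5 * conc ** -0.022                                 # toy data

slope, _ = np.polyfit(np.log10(conc), np.log10(resist), 1)

delta_log_r = 0.006                        # minimum resolvable change
delta_log_c = delta_log_r / abs(slope)     # = 0.006 / 0.022 ~ 0.273
cs = 1e-12                                 # smallest concentration, 1 pM
resolution = cs * (10 ** delta_log_c - 1)  # ~0.9e-12 M, i.e. ~0.9 pM
print(round(abs(slope), 3), resolution)
```

The step from ΔlogC to an absolute concentration step is simply C = Cs·10^ΔlogC, so C − Cs = Cs(10^0.273 − 1) ≈ 0.9 pM, matching the value quoted above.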

Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue

Procedia PDF Downloads 229
139 Rendering Religious References in English: Naguib Mahfouz in the Arabic as a Foreign Language Classroom

Authors: Shereen Yehia El Ezabi

Abstract:

The transition from the advanced to the superior level of Arabic proficiency is widely known to pose considerable challenges for English-speaking students of Arabic as a Foreign Language (AFL). Apart from the increasing complexity of the grammar at this juncture, together with the sprawling vocabulary, to name but two of those challenges, there is also the somewhat less studied hurdle on the way to superior-level proficiency, namely, the seeming opacity of many aspects of Arab/ic culture to such learners. This presentation tackles one specific dimension of such issues: religious references in literary texts. It illustrates how carefully constructed translation activities may be used to expand and deepen students' understanding and use of them. This is shown to be vital for making the leap to the desired competency, given that such elements, as reflected in customs, traditions, institutions, worldviews, and formulaic expressions, lie at the very core of Arabic culture and, as such, pervade all modes and levels of Arabic discourse. A short story from the collection “Stories from Our Alley” by the preeminent novelist Naguib Mahfouz is selected for use in this context, being particularly replete with such religious references, of which religious expressions form the focus of the presentation. As a miniature literary work, it provides an organic whole within which to explore with the class the most precise denotation, as well as the subtlest connotation, of each expression in an effort to reach the ‘best’ English rendering. The term ‘best’ refers to approximating the meaning in its full complexity from the source text, in this case Arabic, to the target text, English, according to the concept of equivalence in translation theory. The presentation will show how such a process generates the sort of thorough discussion and close text analysis which allows students to gain valuable insight into this central idiom of Arabic. A variety of translation methods will be highlighted, gleaned from the presenter's extensive work with advanced/superior students in the Center for Arabic Study Abroad (CASA) program at the American University in Cairo. These begin with the literal rendering of expressions, with the purpose of reinforcing vocabulary learning and practicing the rules of derivational morphology as they form each word, since the larger context remains that of an AFL class, as opposed to a translation skills program. However, departures from the literal approach are subsequently explored by degrees, moving along the spectrum of freer functional and pragmatic translations in order to transmit the ‘real’ meaning in readable English to the target audience, no matter how culture- or religion-specific the expression, while remaining faithful to the original. Samples of students' work pre- and post-discussion will be shared, demonstrating how class consensus is formed as to the final English rendering, proposed as the closest match to the Arabic and shown to be the result of the above activities. Finally, a few examples of translation work which students have gone on to publish will be shared to corroborate the effectiveness of this teaching practice.

Keywords: superior level proficiency in Arabic as a foreign language, teaching Arabic as a foreign language, teaching idiomatic expressions, translation in foreign language teaching

Procedia PDF Downloads 174
138 Documentary Filmmaking as Activism: Case Studies in Advocacy and Social Justice

Authors: Babatunde Kolawole

Abstract:

This paper embarks on an exploration of the compelling interplay between documentary filmmaking and activism, delving into their symbiotic relationship and profound impact on advocacy and social justice causes. Through an in-depth analysis of diverse case studies, it seeks to illuminate the instances where documentary films have emerged as potent tools for effecting social change and advancing the principles of justice. This research underscores the vital role played by documentary filmmakers in harnessing the medium's unique capacity to engage, educate, and mobilize audiences while advocating for societal transformation. The primary focus of this study is on a selection of compelling case studies spanning various topics and causes, each exemplifying the marriage between documentary filmmaking and activism. These case studies encompass a broad spectrum of subjects, from environmental conservation and climate change to civil rights movements and human rights struggles. By examining these real-world instances, this paper endeavors to provide a comprehensive understanding of the strategies, challenges, and ethical considerations that underpin the practice of documentary filmmaking as a form of activism. Throughout the paper, it becomes evident that the potency of documentary filmmaking lies in its ability to blend artistry with social impact. The selected case studies vividly demonstrate how documentary filmmakers, armed with cameras and a passion for change, have emerged as critical agents of societal transformation. Whether it be exposing environmental atrocities, shedding light on systemic inequalities, or giving voice to marginalized communities, these documentaries have played a pivotal role in pushing the boundaries of advocacy and social justice. One of the key themes explored in this paper is the evolving nature of documentary filmmaking as a tool for activism. It delves into the shift from traditional observational documentaries to more participatory and immersive approaches, highlighting the dynamic ways in which filmmakers engage with their subjects and audiences. This evolution is exemplified in case studies where filmmakers have collaborated with the communities they document, fostering a sense of agency and empowerment among those whose stories are being told. Furthermore, this research underscores the ethical considerations inherent in the intersection of documentary filmmaking and activism. It scrutinizes questions surrounding representation, objectivity, and the responsibility of filmmakers in portraying complex social issues. By dissecting ethical dilemmas faced by documentary filmmakers in these case studies, this paper encourages a critical examination of the ethical boundaries and obligations in the realm of advocacy-driven filmmaking. In conclusion, this paper aims to shed light on the remarkable potential of documentary filmmaking as a catalyst for activism and social justice. Through the lens of compelling case studies, it illustrates the transformative power of the medium in effecting change, amplifying underrepresented voices, and mobilizing global audiences. It is hoped that this research will not only inform the discourse on documentary activism but also inspire filmmakers, scholars, and advocates to continue leveraging the cinematic art form as a formidable force for a more just and equitable world.

Keywords: film, filmmaker, documentary, human right

Procedia PDF Downloads 33