Search results for: fact checking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2918

38 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) permeates various aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question arises naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research in A.I. and robot ethics focuses mainly on robot rights, and its ultimate objective is to answer four questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood, and what are the requirements for being a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (ontological requirements)? And finally, (iv) ought it to be one (ethical implications)? To answer these questions, the project was carried out as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation. Firstly, it was found that all rights are positive and based on consensus; they change over time with circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary for moral agency: they are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy.
Autonomy, free will, intentionality, consciousness, and responsibility were identified as the requirements for moral agency. The work went on to build a symmetrical system between personhood and A.I. to let the ontological differences between the two emerge. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to artificial agents: first, whether all the ontological requirements are present, and second, whether, present or not, an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. systems as artificial moral agents, hence responsible for their own actions. When artificial agents are considered responsible, existing norms of our judicial system can be applied to them, such as removing them from society and re-educating them in order to re-introduce them to society, in line with how the best correctional facilities ought to work. Noticeably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This implies removing it from society by virtue of its unusability, re-programming it, and, only when it is properly functioning, re-introducing it successfully.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 158
37 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favorable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Psychology research has demonstrated that comprehension becomes possible when new information becomes part of the student’s subjective experience and when linkages can be established between the attributes of notions and the various ways of presenting them. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. The article describes the implementation of a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students’ comprehension. This approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student’s subjective experience (emotional-value, contextual, procedural, and communicative) during the educational process; (3) establishing links between different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual, and conceptual (scientific) mathematical spaces by applying real-life situational modeling. The article describes approaches to the practical use of these foundational concepts. The research’s primary goal was to identify how the proposed methods and technology influence understanding of the material used in teaching mathematics. The research included an experiment in which 256 secondary school students took part: 142 in the experimental group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, 'Derivative' and 'Trigonometric Functions', was evaluated. Control group participants were taught using traditional methods.
Students in the experimental group were taught using the holistic method: under the teacher’s guidance, they carried out problems designed to establish linkages between a notion’s characteristics and to convert information from one mode of presentation to another, as well as problems that required the ability to operate with all modes of presentation. The technology that forms inter-subject notions based on linkages between perceptual, real, and conceptual mathematical spaces proved to be of special interest to the students. Results of the experiment were analyzed by presenting students in each group with a final test on each of the studied topics. The test included problems that required building real situational models. Statistical analysis was used to aggregate test results. Pearson’s chi-squared criterion was used to assess the statistical significance of the results (pass/fail on the modeling test). A significant difference in results was revealed (p < 0.001), which allowed the authors to conclude that students in the experimental group showed better comprehension of mathematical information than those in the control group. It was also revealed (using Student’s t-test) that students in the experimental group reliably solved more problems (p = 0.0001) than those in the control group. The results obtained allow us to conclude that increased comprehension and assimilation of the study material took place as a result of the applied methods and techniques.
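The pass/fail comparison described above reduces to a Pearson chi-squared test on a 2x2 contingency table, which can be sketched in a few lines. The counts below are hypothetical, chosen only to match the reported group sizes (142 and 114); they are not the study's data.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table.
    Rows: experimental / control group; columns: passed / failed the modeling test."""
    n = a + b + c + d
    # shortcut form of sum((observed - expected)^2 / expected) for a 2x2 table
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# hypothetical counts consistent with the reported group sizes (142 vs. 114)
passed_exp, failed_exp = 118, 24
passed_ctl, failed_ctl = 61, 53

chi2 = chi2_2x2(passed_exp, failed_exp, passed_ctl, failed_ctl)
# the critical value of chi-squared with 1 degree of freedom at p = 0.001 is about 10.83
print(chi2 > 10.83)
```

With these illustrative counts the statistic is well above the 1-d.o.f. critical value at p = 0.001, mirroring the kind of result the authors report.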

Keywords: comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions

Procedia PDF Downloads 150
36 Residential Building Facade Retrofit

Authors: Galit Shiff, Yael Gilad

Abstract:

The need to retrofit old buildings lies in the fact that buildings are responsible for a major share of energy use and CO₂ emissions, and existing old structures dominate this effect far more than new, energy-efficient buildings do. Nevertheless, not every case of urban renewal that aims to replace old buildings with new neighbourhoods necessarily has a financial or sustainability justification. Façade design plays a vital role in a building's energy performance and in each unit's comfort conditions. A residential façade retrofit methodology and applied feasibility study has been carried out over the past four years, with two projects already fully renovated. This study is intended to serve as a case study for limited-budget façade retrofits in Mediterranean-climate urban areas. The two case-study buildings are in Israel but are set in different local climatic conditions: one in Sderot, in the south of the country, and one in Migdal HaEmek, in the north. The building typology is similar. The budget of the projects is around $14,000 per unit and covers interventions on the buildings' envelopes while tenants remain in residence. Extensive research and analysis of the existing conditions were carried out. The buildings' components, materials, and envelope sections were mapped, examined, and compared to relevant updated standards. Solar radiation simulations of the buildings in their surroundings were run for winter and summer days. The energy rating of each unit, as well as of the building as a whole, was calculated according to the Israeli Energy Code. The buildings’ façades were documented with a thermal camera at different hours of the day. This information was superimposed on data about electricity use and thermal comfort collected from the residential units.
Later in the process, similar tools were used to compare the effectiveness of different design options and to evaluate the chosen solutions. Both projects showed that the most problematic units were those below the roof and those on top of the elevated entrance floor (pilotis); old buildings tend to have poor insulation on these two horizontal surfaces, which require treatment. Different radiation levels and wall sections in the two projects influenced the design strategies. In the southern project, there was an extreme difference in solar radiation levels between the main façade and the back elevation; eventually, it was decided to invest in insulating the main south-west façade and the side façades, leaving the back north-east façade almost untouched. Lower radiation levels in the northern project led to a different tactic: a combination of basic insulation on all façades together with intensive treatment of areas with problematic thermal behavior. While poor execution of construction details and bad installation of windows in the northern project required replacing them all, in the southern project it was found to be more essential to shade the windows than to replace them. Although the buildings and the construction typology chosen for this study are similar, the research shows large differences due to location in different climatic zones and variations in local conditions. Therefore, in order to reach a systematic and cost-effective method of work, a more extensive catalogue database is needed. Such a catalogue would enable public housing companies in the Mediterranean climate to promote massive projects renovating existing old buildings with minimal analysis and planning processes.

Keywords: facade, low budget, residential, retrofit

Procedia PDF Downloads 174
35 Sheep Pox Virus Recombinant Proteins To Develop Subunit Vaccines

Authors: Olga V. Chervyakova, Elmira T. Tailakova, Vitaliy M. Strochkov, Kulyaisan T. Sultankulova, Nurlan T. Sandybayev, Lev G. Nemchinov, Rosemarie W. Hammond

Abstract:

Sheep pox is a highly contagious infection that the OIE regards as one of the most dangerous animal diseases. It causes enormous economic losses through the death and slaughter of infected animals, lower productivity, and the cost of veterinary, sanitary, and quarantine measures. To control the spread of sheep pox infection, attenuated vaccines are widely used in the Republic of Kazakhstan and other former Soviet Union countries. In spite of the high efficiency of live vaccines, possible residual virulence and potential genetic instability restrict their use in disease-free areas, which makes it necessary to exploit new vaccine-development approaches involving recombinant DNA technology. Vaccines based on recombinant proteins are the newest generation of prophylactic preparations. The main advantage of these vaccines is their low reactogenicity, which makes them widely used in medical and veterinary practice for vaccination of humans and farm animals. The objective of the study is to produce recombinant immunogenic proteins for the development of high-performance means of sheep pox prophylaxis. The SPV proteins were chosen for their homology with known immunogenic vaccinia virus proteins. Nucleotide and amino acid sequences of the target SPV protein genes were assayed. It was shown that four proteins, SPPV060 (ortholog L1), SPPV074 (ortholog H3), SPPV122 (ortholog A33), and SPPV141 (ortholog B5), possess transmembrane domains at the N- or C-terminus, while these domains are absent from the amino acid sequences of the SPPV095 (ortholog A4) and SPPV117 (ortholog A27) proteins. On the basis of these findings, primers were constructed. Target genes were amplified and subsequently cloned into the expression vector pET26b(+) or pET28b(+). Six constructs (pSPPV060ΔTM, pSPPV074ΔTM, pSPPV095, pSPPV117, pSPPV122ΔTM, and pSPPV141ΔTM) were obtained for expression of the SPV genes under the control of the T7 promoter in Escherichia coli.
To purify and detect the recombinant proteins, the amino acid sequences were modified by adding six histidine residues at the C-terminus. Induction of gene expression by IPTG resulted in production of proteins with molecular weights corresponding to the values estimated for SPPV060, SPPV074, SPPV095, SPPV117, SPPV122, and SPPV141, i.e., 22, 30, 20, 19, 17, and 22 kDa, respectively. An optimal expression protocol ensuring a high yield of the recombinant protein was identified for each gene. Assay of cellular lysates by western blotting confirmed expression of the target proteins. The recombinant proteins bind specifically to antibodies against polyhistidine. Moreover, all of the produced proteins are specifically recognized by serum from experimentally SPV-infected sheep. The recombinant proteins SPPV060, SPPV074, SPPV117, SPPV122, and SPPV141 were also shown to induce the formation of antibodies with virus-neutralizing activity. The results of this research will help to develop a new-generation, high-performance means of specific sheep pox prophylaxis, which is a key element of animal health protection. The research was conducted under International project ISTC # K-1704, “Development of methods to construct recombinant prophylactic means for sheep pox with use of transgenic plants”, and under Grant Project RK MES G.2015/0115RK01983, "Recombinant vaccine for sheep pox prophylaxis".

Keywords: prophylactic preparation, recombinant protein, sheep pox virus, subunit vaccine

Procedia PDF Downloads 220
34 RE:SOUNDING a 2000-Year-Old Vietnamese Dong Son Bronze Drum; Artist-Led Collaborations outside the Museum to Challenge the Impasse of Repatriating and Rematriating Cultural Instruments

Authors: H. A. J. Nguyen, V. A. Pham

Abstract:

RE:SOUNDING is an ongoing research project and artwork seeking to return the sound and knowledge of Dong Son bronze drums to contemporary musicians. Colonial collections of ethnographic instruments are problematic in how they commit acts of conceptual, cultural, and acoustic silencing. The collection (or, more honestly, the plagiarism and pillaging) of these instruments has systemically separated them from living, breathing cultures. This includes diasporic communities who have come to resettle in close proximity, but still have little access, to the museums and galleries that display their cultural objects. Despite recent attempts to 'open up' and 'recognise' the tensions and violence of these ethnographic collections, many museums continue to structurally organize and reproduce knowledge with the same procedural distance and limitations of imperial condescension. Impatient with the slowness of these museums, our diaspora-led collaborations participated in the opaque economy of the auction market to gain access and begin the process of digitally recording and archiving the actual sounds of an ancient Dong Son drum. This self-directed, self-initiated artwork not only acoustically reinvigorated an ancient instrument but redistributed its sonic materials to contemporary musicians, composers, and their diasporic communities throughout Vietnam, South East Asia, and Australia. Our methodologies not only highlight the persistent inflexibility of museum infrastructures but demand that museums refrain from their paternalistic practice of risk-averse ownership and seriously engage with new technologies and political formations that require all public institutions to be held accountable for the ethical and intellectual viability of their colonial collections.
The integrated and practical resolve of diasporic artists and their communities is more than capable of working with new technologies to reclaim and reinvigorate what is culturally and spiritually theirs. The motivation to rematriate, as opposed to merely repatriate, the acoustic legacies of these instruments to contemporary musicians and artists is a new model for decolonial and restorative practices. Exposing the inadequacies of western scholarship that continues to treat these instruments as discrete, disembodied, and detached artifacts, these collaborative strategies have thus far produced a wealth of new knowledge; new to the west, perhaps, but not that new to these, our own communities. This includes the little-acknowledged fact that Dong Son drums were political instruments of war and technology, rather than the simplistic description given in museums and western academia of agrarian instruments of fertility and harvest. Through the collective and continued sharing of the knowledge and sound materials produced by this research, these drums are gaining a contemporary relevance beyond the cultural silencing of the museum display cabinet. Acknowledgement: We acknowledge the Wurundjeri and Boon Wurrung of the Kulin Nation and the Gadigal of the Eora Nation, where we began this project. We pay our respects to the Peoples, Lands, Traditional Custodians, Practices, and Creator Ancestors of these Great Nations, as well as to those First Nations peoples throughout Australia, Vietnam, and Indonesia, where this research continues, and whose stolen lands and waterways were never ceded.

Keywords: acoustic archaeology, decolonisation, museum collections, rematriation, repatriation, Dong Son, experimental music, digital recording

Procedia PDF Downloads 116
33 Holistic Urban Development: Incorporating Both Global and Local Optimization

Authors: Christoph Opperer

Abstract:

The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper presents a study exploring the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements at both the architectural and the urban scale. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case-study approach centred on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment, an appropriate starting point for the research and design process; however, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach yields both individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality.
The results of the optimization process show a seemingly random distribution of housing units that is, in fact, a densified hybrid between traditional garden settlements and allotment settlements. This urban typology was selected for its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of an ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.
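As a toy illustration of the kind of multi-criterion placement such a computational model performs, the sketch below greedily distributes housing units over a site grid, rewarding solar exposure and penalising both proximity to a noise source and loss of privacy. The grid size, weights, and noise-source location are invented for the example; the actual model optimises far richer criteria (views, internal circulation, existing vegetation) simultaneously.

```python
import itertools

GRID = 8          # 8x8 site grid (hypothetical)
UNITS = 10        # number of housing units to place
NOISE = (0, 0)    # hypothetical noise source at the north-west corner

def score(cells):
    """Aggregate quality of a set of unit placements (toy weights)."""
    s = 0.0
    for (r, c) in cells:
        s += r                                                   # more southerly rows get more sun
        s -= 3.0 / (1 + abs(r - NOISE[0]) + abs(c - NOISE[1]))   # noise-proximity penalty
    for (a, b) in itertools.combinations(cells, 2):
        if abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1:
            s -= 2.0                                             # privacy: penalise touching units
    return s

def greedy_layout():
    """Place units one by one, each time taking the cell that most improves the score."""
    free = [(r, c) for r in range(GRID) for c in range(GRID)]
    chosen = []
    for _ in range(UNITS):
        best = max(free, key=lambda cell: score(chosen + [cell]))
        chosen.append(best)
        free.remove(best)
    return chosen

layout = greedy_layout()
print(len(layout), score(layout) > 0)
```

A greedy pass is only a stand-in here; the study's simultaneous two-scale optimization would instead search over whole configurations rather than placing units one at a time.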

Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization

Procedia PDF Downloads 37
32 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator

Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic

Abstract:

The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to the conventional wound-rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates with a typically 30%-rated back-to-back power electronics converter over 2:1 speed ranges, but with important reliability and cost advantages over the DFIG: maintenance-free operation afforded by its brushless structure; half the synchronous speed for the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one); and superior grid-integration properties, including simpler protection for low-voltage ride-through compliance of the fractional converter due to comparatively higher leakage inductances and lower fault currents. Vector-controlled pulse-width-modulated converters generally feature much lower total harmonic distortion than hysteresis counterparts with variable switching rates and as such have been the predominant choice for BDFRG (and DFIG) wind turbines. Eliminating the shaft position sensor often required for control implementation would be desirable to address the associated reliability issues. This has largely motivated the recent growing research into sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to identify accurately by off-line testing. The model reference adaptive system (MRAS) based sensorless vector control scheme presented here overcomes this shortcoming.
The true machine-parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio upon which the underlying MRAS observer of rotor angular velocity and position relies. Such an observer configuration is more practical to implement and clearly preferable to the existing machine-parameter-dependent solutions, especially since, with very few modifications, it can be adapted to commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoderless controller performance with maximum power point tracking in the base speed region is demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
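The MRAS principle, a reference quantity compared against an adjustable model whose parameter a PI adaptation law corrects until the error vanishes, can be illustrated with a deliberately simplified speed estimator. Everything in this toy (the gains, the time step, the "measured" angle) is invented for the illustration and is not the authors' algorithm, which works with machine fluxes and currents and estimates the inductance ratio as well.

```python
import math

# Toy MRAS-style speed estimator: the measured rotor angle acts as the
# reference model, an integrator driven by the speed estimate is the
# adjustable model, and a PI adaptation law drives the error to zero.
dt = 1e-3                  # sampling period, s (hypothetical)
omega_true = 50.0          # "unknown" rotor speed, rad/s
omega_hat = 0.0            # speed estimate produced by the observer
theta_true = theta_hat = 0.0
kp, ki = 200.0, 5000.0     # adaptation gains tuned for this toy loop
integral = 0.0

for _ in range(5000):      # 5 s of simulated operation
    theta_true += omega_true * dt          # reference model (the "measurement")
    theta_hat += omega_hat * dt            # adjustable model
    e = math.sin(theta_true - theta_hat)   # angle error, wrap-safe
    integral += e * dt
    omega_hat = kp * e + ki * integral     # PI adaptation law

print(abs(omega_hat - omega_true) < 0.5)
```

The integral term is what lets the estimate converge exactly despite a constant speed offset, the same reason MRAS observers can track without prior knowledge of the tracked quantity.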

Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion

Procedia PDF Downloads 37
31 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing mitigation efforts and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Human trafficking is difficult to address because numerous drivers, causes, and motivators allow it to persist, among them non-military and non-traditional security challenges such as climate change, displacement driven by global warming, and natural disasters, which make displaced persons and refugees particularly vulnerable. The issue is so large that conservative estimates put its value at around $150 billion or more per year (Niethammer, 2020), spanning sexual slavery and exploitation, forced labor in construction, mining, and conflict roles, and the forced marriage of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, such figures are likely gross underestimates, as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw & Austin, 2021), we now focus on the human trafficking paradigm. Indeed, the JAM methodology has identified key indicators of human trafficking not previously explored in depth.
As a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of open-source artificial intelligence (OS-AI) across multiple collection points. The extracted behavioral data were then applied to identify non-traditional indicators of human trafficking. Furthermore, because the JAM OS-AI analyzes data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements for success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking networks; our preliminary data suggest these have escaped notice partly because, in extant research, 'dark agents' have been difficult to detect and much harder to connect directly to the actors and organizations in such networks. Our research demonstrates that new investigative techniques such as the OS-AI-aided JAM introduce a powerful toolset to increase understanding of human trafficking and transnational crime and to illuminate networks that, to date, have avoided global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, dark money

Procedia PDF Downloads 50
30 Ecotoxicological Test-Battery for Efficiency Assessment of TiO2 Assisted Photodegradation of Emerging Micropollutants

Authors: Ildiko Fekete-Kertesz, Jade Chaker, Sylvain Berthelot, Viktoria Feigl, Monika Molnar, Lidia Favier

Abstract:

There has been growing concern about emerging micropollutants in recent years because of the possible environmental and health risks posed by these substances, which are released into the environment as a consequence of anthropogenic activities. Among them, pharmaceuticals are currently not covered by water quality regulations, although their detection in the environment has become more frequent in recent years. Since these compounds can be detected in natural water matrices, it can be concluded that the currently applied water treatment processes are not efficient enough for their effective elimination. To date, advanced oxidation processes (AOPs) are considered highly competitive water treatment technologies for the removal of organic micropollutants not treatable by conventional techniques due to their high chemical stability and/or low biodegradability. AOPs such as (photo)chemical oxidation and heterogeneous photocatalysis have proven their potential for degrading harmful organic compounds in aqueous matrices. However, some of these technologies generate reaction by-products that can be even more toxic to aquatic organisms than the parent compounds; thus, removal of the target compound does not necessarily mean removal of toxicity. Therefore, to evaluate process efficiency, determining the toxicity and ecotoxicity of the reaction intermediates is crucial for estimating the environmental risk of such techniques. In this context, the present study investigates the effectiveness of TiO2-assisted photodegradation for the removal of emerging water contaminants. Two drugs, losartan (used in high blood pressure medication) and levetiracetam (used to treat epilepsy), were considered in this work. The photocatalytic reactions were carried out with a commercial catalyst commonly employed in photocatalysis.
Moreover, the toxicity of the by-products generated during the process was assessed with various ecotoxicological methods applying aquatic test organisms from different trophic levels. A series of experiments was performed to evaluate the toxicity of untreated and treated solutions using the Aliivibrio fischeri bioluminescence inhibition test, the Tetrahymena pyriformis proliferation inhibition test, the Daphnia magna lethality and immobilization tests, and the Lemna minor growth inhibition test. The applied ecotoxicological methodology sensitively indicated the toxic effects of the treated and untreated water samples; hence, the applied test battery is suitable for the ecotoxicological characterization of TiO2-based photocatalytic water treatment technologies and for indicating the formation of toxic by-products from the parent compounds. The obtained results clearly showed that TiO2-assisted photodegradation was more efficient in eliminating losartan than levetiracetam. It was also observed that the treated levetiracetam solutions had a more severe effect on the applied test organisms. A possible explanation is the production of levetiracetam by-products that are more toxic than the parent compound. The increased toxicity and the risk of formation of toxic metabolites represent one possible limitation to the implementation of photocatalytic treatment using TiO2 for the removal of losartan and levetiracetam. Our results proved that the battery of ecotoxicity tests used in this work can be a promising investigation tool for the environmental risk assessment of photocatalytic processes.
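Inhibition endpoints of the kind used in these bioassays (bioluminescence, proliferation, or growth inhibition) are commonly expressed relative to an untreated control. The following is a minimal, hypothetical sketch of that calculation; the luminescence readings are invented for illustration and are not data from this study.

```python
def inhibition_percent(control, treated):
    """Percent inhibition of a measured signal (e.g., Aliivibrio fischeri
    bioluminescence) relative to an untreated control."""
    return (control - treated) / control * 100.0

# Hypothetical luminescence readings (relative light units).
control_rlu = 1200.0
treated_rlu = 780.0
print(round(inhibition_percent(control_rlu, treated_rlu), 1))  # 35.0
```

The same relative-to-control form applies to each test organism in the battery; only the measured quantity changes.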

Keywords: aquatic micropollutants, ecotoxicology, nano titanium dioxide, photocatalysis, water treatment

Procedia PDF Downloads 166
29 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning the global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in such a way that each MPI rank acquires its information from the file in parallel.
In the case of GPFS, on each computational node a single MPI rank reads data from a file generated specifically for that node and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, while parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculating the solution at every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
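The pre-file idea, packing the integer startup information into a portable stream binary file that ranks can read quickly, can be sketched in a few lines. The following is an illustrative Python analogue using the standard struct module, not the actual DSEM tooling; the file name and the element-id payload are hypothetical.

```python
import os
import struct
import tempfile

def write_prefile(path, element_ids):
    """Write integer element info as a little-endian stream binary
    'pre-file': a 4-byte count followed by the raw 4-byte ids."""
    with open(path, "wb") as f:
        f.write(struct.pack("<i", len(element_ids)))
        f.write(struct.pack(f"<{len(element_ids)}i", *element_ids))

def read_prefile(path):
    """Read the pre-file back; with a fixed record size, each MPI rank
    could instead seek directly to its own slice of the data."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<i", f.read(4))
        return list(struct.unpack(f"<{n}i", f.read(4 * n)))

path = os.path.join(tempfile.mkdtemp(), "partition.pre")
write_prefile(path, [10, 11, 42, 99])
print(read_prefile(path))  # [10, 11, 42, 99]
```

Fixing the endianness and record layout is what makes such files portable between the pre-processing machine and the cluster, and fixed-size integer records are what allow a rank (or one rank per node, in the GPFS scheme) to read only its own region.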

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 108
28 Basic Characteristics of Synchronized Stir Welding and Its Prospects

Authors: Ipei Sato, Naonori Shibata, Shoji Matsumoto, Naruhito Matsumoto

Abstract:

Friction stir welding (FSW) has been widely used in the automotive, aerospace, and high-tech industries due to the superior mechanical properties it yields after joining. In order to achieve a good-quality joint by FSW, it is necessary to secure an advance angle (usually 3 to 5 degrees) using a dedicated FSW machine and to join on a highly rigid machine. Although a new combined machine that adds an FSW function to the cutting function of a conventional machining center has recently appeared on the market, its joining process window is small, so joining defects occur easily and reproducibility is poor. This has limited the use of FSW in the automotive industry, where control accuracy is required. FSW-only machines and hybrid equipment that combines FSW with cutting machines require high capital investment, which is one of the reasons why FSW itself has not penetrated the market. Synchronized stir welding (SSW), a next-generation joining technology developed by our company, requires no tilt angle and is a very cost-effective welding method. It does not require a complicated spindle mechanism, and it minimizes the load and vibration on the spindle, the temperature during joining, and the shoulder diameter, thereby enabling a wide range of joining conditions and high-strength, high-speed joining with no joining defects. In synchronized stir welding, the tip of the joining tool is "driven by microwaves" in both the rotational and vertical directions. The tool is synchronized and stirred in the direction and at the speed required by the material to be welded, enabling welding that exceeds conventional concepts.
Conventional FSW is passively stirred by an external driving force, resulting in low joining speeds and high heat input due to the need for a large shoulder diameter. In contrast, SSW is characterized by the fact that materials are actively stirred in synchronization with the direction and speed in which they are to be stirred, resulting in a high joining speed and a small shoulder diameter, which allows joining to be completed with low heat input. The advantages of synchronized stir welding in terms of basic mechanical properties are described here by comparing the strength of the joint cross section for FSW and SSW. Tensile strength: base metal 242 MPa; 217 MPa after FSW (89%); 225 MPa after SSW (93%). Vickers hardness at the weld center: base metal 75.0 HV; FSW 57.5 HV (76%); SSW 66.0 HV (88%). SSW thus showed excellent results compared with FSW. In the tensile test, the material used was a 5 mm thick aluminum (A5052-H112) plate, and the specimen was dumbbell-shaped, 2 mm thick, 4 mm wide, and 60 mm long. Measurements were made at a loading speed of 20%/min (in accordance with JIS Z 2241:2022) on an INSTRON 5982 tensile testing machine (INSTRON Japan). Vickers hardness was measured on a 5 mm thick specimen of A5052 tempered to H112, 15 mm wide, at 0.3 pitch (in accordance with JIS Z 2244:2020), using a FUTURE-TECH FM-300 Vickers tester.
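The percentages above are simply each joint property expressed as a fraction of the base metal value. A small sketch of that arithmetic (the inputs are the abstract's own figures; note the quoted percentages appear to truncate rather than round the decimals):

```python
def joint_efficiency(joint_value, base_value):
    """Joint property as a percentage of the base metal value."""
    return 100.0 * joint_value / base_value

# Figures from the abstract (tensile strength in MPa, hardness in HV).
print(joint_efficiency(217.0, 242.0))  # FSW tensile, ~89.7% (quoted as 89%)
print(joint_efficiency(225.0, 242.0))  # SSW tensile, ~93%
print(joint_efficiency(66.0, 75.0))    # SSW hardness at weld center, 88%
```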

Keywords: FSW, SSW, synchronized stir welding, requires no tilt angles, running peak temperature less than 100 degrees C

Procedia PDF Downloads 24
27 Preliminary Results on a Study of Antimicrobial Susceptibility Testing of Bacillus anthracis Strains Isolated during Anthrax Outbreaks in Italy from 2001 to 2017

Authors: Viviana Manzulli, Luigina Serrecchia, Adelia Donatiello, Valeria Rondinone, Sabine Zange, Alina Tscherne, Antonio Parisi, Antonio Fasanella

Abstract:

Anthrax is a zoonotic disease that affects a wide range of animal species (primarily ruminant herbivores) and can be transmitted to humans through the consumption or handling of contaminated animal products. The etiological agent, B. anthracis, is able to survive unfavorable environmental conditions by forming endospores, which remain viable in the soil for many decades. Furthermore, B. anthracis is considered one of the agents most feared for potential misuse as a biological weapon, and the importance of the disease and its treatment in humans was underscored by the bioterrorism events in the United States in 2001. Due to the often fatal outcome of human cases, antimicrobial susceptibility testing plays an especially important role in the management of anthrax infections. In Italy, animal anthrax is endemic (predominantly in the southern regions and on the islands) and is characterized by sporadic outbreaks occurring mainly during summer. Between 2012 and 2017, single human cases of cutaneous anthrax occurred. In this study, 90 diverse strains of B. anthracis, isolated in Italy from 2001 to 2017, were screened for their susceptibility to sixteen clinically relevant antimicrobial agents using the broth microdilution method. The B. anthracis strains selected for this study belong to the strain collection stored at the Anthrax Reference Institute of Italy, located within the Istituto Zooprofilattico Sperimentale of Puglia and Basilicata. The strains were isolated at different time points and places from various matrices (human, animal, and environmental) and are representative of over fifty distinct MLVA-31 genotypes. The following antibiotics were used for testing: gentamicin, ceftriaxone, streptomycin, penicillin G, clindamycin, chloramphenicol, vancomycin, linezolid, cefotaxime, tetracycline, erythromycin, rifampin, amoxicillin, ciprofloxacin, doxycycline, and trimethoprim.
A standard concentration of each antibiotic was prepared in a specific diluent and then twofold serially diluted. Each well thus contained a bacterial suspension of 1–5×10⁴ CFU/mL in Mueller-Hinton Broth (MHB), the antibiotic to be tested at a known concentration, and resazurin, an indicator of cell growth. After overnight incubation at 37°C, the wells were screened for color changes caused by the resazurin: a change from purple to pink/colorless indicated cell growth. The lowest concentration of antibiotic that prevented growth represented the minimal inhibitory concentration (MIC). This study suggests that B. anthracis remains susceptible in vitro to many antibiotics in addition to doxycycline (MICs ≤ 0.03 µg/ml), ciprofloxacin (MICs ≤ 0.03 µg/ml), and penicillin G (MICs ≤ 0.06 µg/ml), which are recommended by the CDC for the treatment of human cases and for prophylactic use after exposure to the spores. In fact, the good activity of gentamicin (MICs ≤ 0.25 µg/ml), streptomycin (MICs ≤ 1 µg/ml), clindamycin (MICs ≤ 0.125 µg/ml), chloramphenicol (MICs ≤ 4 µg/ml), vancomycin (MICs ≤ 2 µg/ml), linezolid (MICs ≤ 2 µg/ml), tetracycline (MICs ≤ 0.125 µg/ml), erythromycin (MICs ≤ 0.25 µg/ml), rifampin (MICs ≤ 0.25 µg/ml), and amoxicillin (MICs ≤ 0.06 µg/ml) against all tested B. anthracis strains demonstrates that they are appropriate alternative choices for prophylaxis and/or treatment. All tested B. anthracis strains showed intermediate susceptibility to the cephalosporins (MICs ≥ 16 µg/ml) and resistance to trimethoprim (MICs ≥ 128 µg/ml).
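The MIC readout described above reduces to three steps: build the twofold dilution series, flag each well for growth (here, the resazurin color change), and take the lowest concentration that still inhibits growth. A minimal sketch with hypothetical concentrations and growth flags, not data from this study:

```python
def dilution_series(start, n):
    """Twofold serial dilution: start, start/2, start/4, ..."""
    return [start / 2**i for i in range(n)]

def mic(concentrations, growth_flags):
    """MIC = lowest concentration with no visible growth.
    growth_flags[i] is True if growth occurred at concentrations[i]."""
    inhibiting = [c for c, g in zip(concentrations, growth_flags) if not g]
    return min(inhibiting) if inhibiting else None

concs = dilution_series(8.0, 6)  # 8, 4, 2, 1, 0.5, 0.25 µg/ml
growth = [False, False, False, False, True, True]  # hypothetical resazurin readout
print(mic(concs, growth))  # 1.0
```

Returning None when every well shows growth corresponds to the resistant case, where the MIC is reported only as greater than the highest tested concentration.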

Keywords: Bacillus anthracis, antibiotic susceptibility, treatment, minimum inhibitory concentration

Procedia PDF Downloads 188
26 Towards Achieving Total Decent Work: Occupational Safety and Health Issues, Problems and Concerns of Filipino Domestic Workers

Authors: Ronahlee Asuncion

Abstract:

The nature of their work and their employment relationship make domestic workers easy prey to abuse, maltreatment, and exploitation. Considering their plight, this research was conceptualized to examine: a) the level of awareness of Filipino domestic workers on occupational safety and health (OSH); b) their issues/problems/concerns on OSH; c) their intervention strategies at work to address OSH-related issues/problems/concerns; d) the issues/problems/concerns of government, employers, and non-government organizations with regard to implementing OSH for Filipino domestic workers; e) the role of government, employers, and non-government organizations in helping Filipino domestic workers address OSH-related issues/problems/concerns; and f) the policy amendments/initiatives/programs necessary to address the OSH-related issues/problems/concerns of Filipino domestic workers. The study conducted a survey using non-probability sampling, two focus group discussions, two group interviews, and fourteen face-to-face interviews. These were further supplemented with email correspondence with a key informant based in another country. Books, journals, magazines, and relevant websites further substantiated and enriched the data of the research. The findings of the study point to the fact that domestic workers have a low level of awareness of OSH because of, among other factors, a poor information drive, fragmented implementation of the Domestic Workers Act, an inactive campaign at the barangay level, weakened advocacy for domestic workers, the absence of a law on OSH for domestic workers, and a generally low safety culture in the country.
Filipino domestic workers suffer from insufficient rest, long hours of work, heavy workloads, occupational stress, poor accommodation, insufficient hours of sleep, deprivation of days off, accidents and injuries such as cuts, burns, slipping, stumbling, electrical grounding, and fire, verbal, physical, and sexual abuse, lack of medical assistance, non-provision of personal protective equipment (PPE), absence of knowledge of the proper way of lifting and of working at heights, and insufficient food provision. They also suffer from psychological problems because of separation from one’s family, limited mobility in the household where they work, injuries and accidents from using advanced home appliances and taking care of pets, low self-esteem, ergonomic problems, the need to adjust to all household members, who have various needs and demands, inability to voice their complaints, the drudgery of work, and emotional stress. With regard to illness or health problems, they commonly experience leg pains, back pains, and headaches. In the absence of intervention programs like those offered in the formal employment setup, domestic workers resort to praying; turn to family, relatives, and friends for social and emotional support; connect with them through social media like Facebook, which also serves as a means of entertainment; talk to their employer; and just try to be optimistic about their situation. Promoting OSH for domestic workers is very challenging and complicated because of interrelated factors such as cultural, knowledge, attitudinal, relational, social, resource, economic, political, institutional, and legal problems. This complexity necessitates a holistic and integrated approach, as this is not a problem requiring simple solutions. With this recognition comes the full understanding that success involves the action and cooperation of all duty bearers in attaining decent work for domestic workers.

Keywords: decent work, Filipino domestic workers, occupational safety and health, working conditions

Procedia PDF Downloads 230
25 Precocious Puberty Due to an Autonomous Ovarian Cyst in a 3-Year-Old Girl: Case Report

Authors: Aleksandra Chałupnik, Zuzanna Chilimoniuk, Joanna Borowik, Aleksandra Borkowska, Anna Torres

Abstract:

Background: Precocious puberty is the occurrence of secondary sexual characteristics in girls before the age of 8. Given the diverse etiology of premature puberty, it is crucial to determine whether it is true precocious puberty, which depends on activation of the hypothalamic-pituitary-gonadal axis, or pseudo-precocious puberty, which is independent of the activation of this axis. Whatever the cause, premature action of the sex hormones leads to the common symptoms of the various forms of puberty. These include the development of sexual characteristics, acne, acceleration of the growth rate, and acceleration of skeletal maturation. Due to the possible genetic basis of the disorders, an interdisciplinary search for the cause is needed. Case report: The case report concerns a patient of a pediatric gynecology clinic who, at the age of two years, developed advanced thelarche (M3) and started recurrent vaginal bleeding. In August 2019, gonadotropin suppression, both initially and after LHRH stimulation, together with high estradiol levels was reported at the Endocrinology Department. Imaging examinations showed a cyst in the right ovary projection. The bone age was six years. The entire clinical picture indicated pseudo-precocious (peripheral) puberty in the course of an autonomous ovarian cyst. In a follow-up ultrasound performed in September, the image of the cyst was stationary, and normalization of estradiol levels and clinical symptoms was noted. In December 2019, cyst regression and normal gonadotropin and estradiol concentrations were found. In June 2020, white mucus tinged with blood on the underwear, without any other disturbing symptoms, was observed for several days. Two consecutive ultrasound examinations carried out in the same month confirmed the lesion in the right ovary, the diameter of which was 25 mm, with a very high level of estradiol. Germ cell tumor markers were normal. On the Tanner scale, the patient scored M2P1. The labia and hymen had pubertal features. A normal vaginal entrance was visible.
Another episode of active vaginal bleeding occurred in the first week of July 2020. The laparoscopic treatment that had been considered was abandoned due to the lack of oncological indications. Treatment with tamoxifen was recommended in July 2020. In the initial period of treatment, no maturation progression (and even a reduction of symptoms), no acceleration of growth, and a marked reduction in the size of the cyst were noted. There was no bleeding. After the size of the cyst and the hormonal activity increased again, the treatment was changed to anastrozole, which led to a reduction in the size of the cyst. Conclusions: The entire clinical picture indicates pseudo-precocious (peripheral) puberty. Premature puberty in girls that manifests as enlarged mammary glands with high levels of estrogens secreted by autonomous ovarian cysts and prepubertal levels of gonadotropins may indicate McCune-Albright syndrome. Vaginal bleeding may also occur in this syndrome. Cancellation of surgical treatment of the cyst made it impossible to perform a molecular test that would have allowed confirmation of the diagnosis. Taking into account the fact that cysts are often one of the first symptoms of McCune-Albright syndrome, it is important to remember multidisciplinary care for the patient and a careful search for skin and bone changes or other hormonal disorders.

Keywords: McCune-Albright syndrome, ovarian cyst, pediatric gynaecology, precocious puberty

Procedia PDF Downloads 161
24 Next-Generation Lunar and Martian Laser Retro-Reflectors

Authors: Simone Dell'Agnello

Abstract:

There are laser retroreflectors on the Moon but none on Mars. Here we describe the design, construction, qualification, and imminent deployment of next-generation, optimized laser retroreflectors on the Moon and on Mars (where they will be the first ones). These instruments are positioned by time-of-flight measurements of short laser pulses, the so-called 'laser ranging' technique. Data analysis is carried out with PEP, the Planetary Ephemeris Program of the CfA (Center for Astrophysics). Since 1969, Lunar Laser Ranging (LLR) to the Apollo/Lunokhod laser retroreflector (CCR) arrays has supplied accurate tests of General Relativity (GR) and new gravitational physics: possible changes of the gravitational constant (Gdot/G), the weak and strong equivalence principles, gravitational self-energy (the Parametrized Post-Newtonian parameter beta), geodetic precession, and the inverse-square force law; it can also constrain gravitomagnetism. Some of these measurements have also allowed tests of extensions of GR, including spacetime torsion and non-minimally coupled gravity. LLR has also provided significant information on the composition of the deep interior of the Moon; in fact, LLR first provided evidence of the existence of a fluid component of the deep lunar interior. In 1969, the CCR arrays contributed a negligible fraction of the LLR error budget. Since laser station range accuracy has improved by more than a factor of 100, the current arrays now dominate the error budget because, owing to lunar librations, of their multi-CCR geometry. We developed a next-generation, single, large CCR, MoonLIGHT (Moon Laser Instrumentation for General relativity High-accuracy Test), unaffected by librations, that supports an improvement of the space segment of the LLR accuracy by up to a factor of 100. INFN also developed INRRI (INstrument for landing-Roving laser Retro-reflector Investigations), a microreflector to be laser-ranged by orbiters.
Their performance is characterized at the SCF_Lab (Satellite/lunar laser ranging Characterization Facilities Lab, INFN-LNF, Frascati, Italy) for deployment on the lunar surface or in cislunar space. They will be used to accurately position the landers, rovers, hoppers, and orbiters of Google Lunar X Prize and space agency missions, thanks to LLR observations from stations of the International Laser Ranging Service in the USA, France, and Italy. INRRI was launched in 2016 with the ESA mission ExoMars (Exobiology on Mars) EDM (Entry, descent and landing Demonstration Module), deployed on the Schiaparelli lander, and is proposed for the ExoMars 2020 Rover. Based on an agreement between NASA and ASI (Agenzia Spaziale Italiana), another microreflector, LaRRI (Laser Retro-Reflector for InSight), was delivered to JPL (Jet Propulsion Laboratory) and integrated on NASA’s InSight Mars lander in August 2017 (launch scheduled for May 2018). Another microreflector, LaRA (Laser Retro-reflector Array), will be delivered to JPL for deployment on the NASA Mars 2020 Rover. The first lunar landing opportunities will run from early 2018 (with TeamIndus) to late 2018 with commercial missions, followed by opportunities with space agency missions, including the proposed deployment of MoonLIGHT and INRRI on NASA’s Resource Prospector and its evolutions. In conclusion, we will significantly extend the CCR Lunar Geophysical Network and populate the Mars Geophysical Network. These networks will enable very significantly improved tests of GR.
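The 'laser ranging' technique mentioned above reduces, at first order, to a simple relation: range = c × (round-trip time) / 2; the science comes from measuring that time to extreme precision and modeling the corrections. A minimal sketch with an illustrative round-trip time (the 2.56 s figure is a rough textbook value for the Earth-Moon distance, not a measurement from this work):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def one_way_range(round_trip_time_s):
    """First-order laser ranging: distance from the round-trip
    time of flight of a short laser pulse."""
    return C * round_trip_time_s / 2.0

# A lunar round trip takes roughly 2.56 s near the mean Earth-Moon distance.
print(round(one_way_range(2.56) / 1000))  # 383734 (km)
```

In practice, millimeter-level LLR requires modeling atmospheric delay, station and reflector motion, and relativistic light propagation, which is the role of ephemeris software such as PEP.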

Keywords: general relativity, laser retroreflectors, lunar laser ranging, Mars geodesy

Procedia PDF Downloads 243
23 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research has focused mainly on the NSCs of nuclear power plants and industrial plants. Particular attention should also be given to the architectural façade elements of old masonry buildings (e.g., ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure, they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and the induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA of old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking the specific failure mechanisms of masonry into account. The macro-model was calibrated against the results of specific laboratory and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress.
As a result, once cracking begins, shear forces can be transmitted only to a limited extent by friction, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation. The calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activities. Again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies across the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of the PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.
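As used here, the PHFA of a floor is the peak absolute value of that floor's horizontal acceleration time history from the FE time-history analysis. A minimal sketch of extracting it per floor (the record values are invented for illustration):

```python
def phfa(floor_accel):
    """Peak horizontal floor acceleration: the maximum absolute value
    of a floor's horizontal acceleration time history (m/s^2)."""
    return max(abs(a) for a in floor_accel)

# Hypothetical time histories for two floors of one analysis run.
records = {
    "floor_1": [0.1, -0.4, 0.9, -1.3, 0.7, -0.2],
    "floor_2": [0.2, -0.6, 1.1, -1.8, 0.9, -0.3],
}
profile = {name: phfa(rec) for name, rec in records.items()}
print(profile)  # {'floor_1': 1.3, 'floor_2': 1.8}
```

Repeating this over the eighteen ground motions and three hazard levels yields the sample from which the probabilistic distribution of PHFA along the normalized building height is built.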

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 140
22 Production of Bioethanol from Oil Palm Trunk by a Cocktail of Carbohydrase Enzymes Produced by Thermophilic Bacteria Isolated from Hot Springs in West Sumatera, Indonesia

Authors: Yetti Marlida, Syukri Arif, Nadirman Haska

Abstract:

Recently, alcohol fuels have been produced on industrial scales by fermentation of sugars derived from wheat, corn, sugar beets, sugar cane, etc. The enzymatic hydrolysis of cellulosic materials to produce fermentable sugars has enormous potential for meeting global bioenergy demand through the biorefinery concept, since agri-food processes generate millions of tonnes of waste each year (Xeros and Christakopoulos, 2009), such as sugar cane bagasse, wheat straw, rice straw, corn cob, and oil palm trunk. In fact, oil palm trunk is one of the most abundant lignocellulosic waste by-products worldwide, coming especially from Malaysia, Indonesia, and Nigeria, and it provides an alternative substrate for producing useful chemicals such as bioethanol. The economical life of the oil palm usually runs from 3 to 25 years of age, after which it is cut for replantation. The trunk is usually 15-18 meters in length and 46-60 centimeters in diameter. After cutting, the trunk is an agricultural waste that poses a disposal problem, but because it contains about 42% cellulose, 34.4% hemicellulose, 17.1% lignin, and 7.3% other compounds, this agricultural waste can be turned into value-added products (Pumiput, 2006). This research concerned the production of bioethanol from oil palm trunk (OPT) via saccharification by cocktail carbohydrase enzymes. Enzymatic saccharification of acid-treated oil palm trunk was carried out in a reaction mixture containing 40 g of treated OPT in 200 ml of 0.1 M citrate buffer (pH 4.8) with the following enzyme treatments: A: 500 unit/kg amylase; B: treatment A + 500 unit/kg cellulase; C: treatment B + 500 unit/kg xylanase; D: treatment C + 500 unit/kg ligninase; and E: untreated OPT + 500 unit/kg each of amylase, cellulase, xylanase, and ligninase. The reaction mixture was incubated on a water bath rotary shaker adjusted to 60°C and 75 rpm. Samples were withdrawn at intervals of 12, 24, 36, 48, 60, and 72 hr.
For bioethanol production in a 5 L biofermentor, the hydrolysis products were inoculated with a loop of Saccharomyces cerevisiae and then incubated at 34°C under static conditions. Samples were withdrawn after 12, 24, 36, 48, and 72 hr for bioethanol and residual glucose measurements. The results of the enzymatic hydrolysis (Figure 1) showed that treatment B (OPT hydrolyzed with amylase and cellulase) gave the optimum conditions for glucose production, as both enzymes degraded the OPT completely. Similar results were reported by Primarini et al. (2012), who found the optimum conditions for the hydrolysis of OPT at a concentration of 25% (w/v) with 0.3% (w/v) amylase, 0.6% (w/v) glucoamylase, and 4% (w/v) cellulase. Figure 2 shows that the optimum bioethanol was produced at 48 hr of incubation; as time increased further, the bioethanol decreased. According to Roukas (1996), a decrease in the concentration of ethanol occurs with excess glucose as substrate and with product inhibition effects. A substrate concentration that is too high reduces the amount of dissolved oxygen; although needed only in very small amounts, oxygen is still required in fermentation by Saccharomyces cerevisiae to sustain life at high cell concentrations (Nowak, 2000; Tao et al., 2005). It can be concluded that the optimum enzymatic hydrolysis occurred when the OPT was treated with amylase and cellulase, and that the optimum bioethanol was produced at 48 hr of incubation using Saccharomyces cerevisiae, with 18.08% bioethanol produced from glucose conversion. This work was funded by the Directorate General of Higher Education (DGHE), Ministry of Education and Culture, contract no. 245/SP2H/DIT.LimtabMas/II/2013.
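A glucose-to-ethanol conversion figure such as the 18.08% quoted above can be put in context against the stoichiometric maximum for glucose fermentation (C6H12O6 → 2 C2H5OH + 2 CO2, about 0.511 g ethanol per g glucose). The sketch below uses hypothetical masses, not data from this study:

```python
# Stoichiometric maximum: 2 x 46.07 g ethanol per 180.16 g glucose.
STOICH_YIELD = 0.511  # g ethanol per g glucose

def conversion_efficiency(ethanol_g, glucose_g):
    """Actual ethanol yield as a fraction of the stoichiometric maximum."""
    return ethanol_g / (glucose_g * STOICH_YIELD)

# Hypothetical: 5.11 g ethanol recovered from 10 g glucose consumed.
print(round(conversion_efficiency(5.11, 10.0), 3))  # 1.0 would be the theoretical maximum
```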

Keywords: oil palm trunk, enzymatic hydrolysis, saccharification

Procedia PDF Downloads 488
21 Psoriasis Diagnostic Test Development: Exploratory Study

Authors: Salam N. Abdo, Orien L. Tulp, George P. Einstein

Abstract:

The purpose of this exploratory study was to gather insights into psoriasis etiology, treatment, and patient experience for developing a psoriasis and psoriatic arthritis diagnostic test. Data collection methods consisted of a comprehensive meta-analysis of relevant studies and a psoriasis patient survey. Established meta-analysis guidelines were used for the selection and qualitative comparative analysis of psoriasis and psoriatic arthritis research studies. Only studies that clearly discussed psoriasis etiology, treatment, and patient experience were reviewed and analyzed to establish a qualitative database for the study. Using the insights gained from the meta-analysis, an existing psoriasis patient survey was modified and administered to collect additional data as well as to triangulate the results. The hypothesis is that specific types of psoriatic disease have a specific etiology and pathophysiologic pattern. The following etiology categories were identified: bacterial, environmental/microbial, genetic, immune, infectious, trauma/stress, and viral. Additional results, obtained from the meta-analysis and confirmed by the patient survey, were the common age of onset (early to mid-20s) and type of psoriasis (plaque; mild; symmetrical; scalp, chest, and extremities, specifically elbows and knees). Almost 70% of patients reported no prescription drug use due to severe side effects and prohibitive cost. These results will guide the development of a psoriasis and psoriatic arthritis diagnostic test. A significant number of medical publications classify psoriatic arthritis as an inflammatory disease of unknown etiology. Thus, numerous meta-analyses struggle to report any meaningful conclusions, since no definitive results have been reported to date. Therefore, a return to the basics is an essential step toward any future meaningful results.
To date, medical literature supports the fact that psoriatic disease in its current classification could be misidentifying subcategories, which in turn hinders the success of studies conducted so far. Moreover, there has been enormous commercial support for pursuing various immune-modulation therapies, thus following a narrow hypothesis/mechanism of action that has yet to yield resolution of the disease state. Recurrence and complication rates may be considered unacceptable in a significant number of these studies. The aim of the ongoing study is to focus on a narrow subgroup of the patient population, as identified by this exploratory study via meta-analysis and patient survey, and conduct an exhaustive work-up aimed at mechanism of action and causality before proposing a cure or therapeutic modality. Remission in psoriasis has been achieved and documented in the medical literature via approaches such as immune modulation, phototherapy, and various over-the-counter agents, including salts and tar. However, there is no psoriasis and psoriatic arthritis diagnostic test to date to guide the diagnosis and treatment of this debilitating and, thus far, incurable disease. Because psoriasis affects approximately 2% of the population, the results of this study may affect the treatment and improve the quality of life of a significant number of psoriasis patients, potentially millions in the United States alone and many more millions worldwide.

Keywords: biologics, early diagnosis, etiology, immune disease, immune modulation therapy, inflammation skin disorder, phototherapy, plaque psoriasis, psoriasis, psoriasis classification, psoriasis disease marker, psoriasis diagnostic test, psoriasis marker, psoriasis mechanism of action, psoriasis treatment, psoriatic arthritis, psoriatic disease, psoriatic disease marker, psoriatic patient experience, psoriatic patient quality of life, remission, salt therapy, targeted immune therapy

Procedia PDF Downloads 96
20 Fabrication of Zeolite Modified Cu Doped ZnO Films and Their Response towards Nitrogen Monoxide

Authors: Irmak Karaduman, Tugba Corlu, Sezin Galioglu, Burcu Akata, M. Ali Yildirim, Aytunç Ateş, Selim Acar

Abstract:

Breath analysis represents a promising non-invasive, fast, and cost-effective alternative to well-established diagnostic and monitoring techniques such as blood analysis, endoscopy, and ultrasonic and tomographic monitoring. Portable, non-invasive, and low-cost breath analysis devices are becoming increasingly desirable for monitoring different diseases, especially asthma. Because of this, NO gas sensing at low concentrations has attracted growing attention for clinical analysis in asthma. Recently, nanomaterial-based sensors have been considered a promising clinical and laboratory diagnostic tool because of their large surface-to-volume ratio, controllable structure, and easily tailored chemical and physical properties, which bring high sensitivity, fast dynamic processes, and even increased specificity. Among various nanomaterials, semiconducting metal oxides are extensively studied gas-sensing materials and are potential sensing elements for breath analyzers due to their high sensitivity, simple design, low cost, and good stability. The sensitivities of metal oxide semiconductor gas sensors can be enhanced by adding noble metals. The content, distribution, and size of metallic or metal oxide catalysts are key parameters for enhancing gas selectivity as well as sensitivity. By doping MOS structures, it is possible to develop more efficient sensing layers. Zeolites are perhaps the most widely employed group of silicon-based nanoporous solids. Their well-defined pores of sub-nanometric size have earned them the name of molecular sieves, meaning that operation in the size-exclusion regime is possible by selecting, among the over 170 structures available, the zeolite whose pores allow the passage of the desired molecule while keeping larger molecules outside. In fact, it is selective adsorption, rather than molecular sieving, that explains most of the successful gas separations achieved with zeolite membranes.
In view of their molecular sieving and selective adsorption properties, it is not surprising that zeolites have found use in a number of works dealing with gas sensing devices. In this study, a Cu-doped ZnO nanostructure film was produced by the SILAR method and its NO gas sensing properties were investigated. To assess the selectivity of the sample, gases including CO, NH3, H2, and CH4 were measured for comparison with NO. The maximum response was obtained at 85 °C for 20 ppb NO gas. The sensor shows a high response to NO gas, while moderate responses were calculated for CO and NH3, and no response was obtained for H2 and CH4. To enhance selectivity, the Cu-doped ZnO nanostructure film was coated with a zeolite A thin film. It was found that the coated sample retains an acceptable response towards NO while hardly responding to CO, NH3, H2, and CH4 at room temperature. This difference in response can be expressed in terms of differences in molecular structure, dipole moment, strength of the electrostatic interaction, and dielectric constant. The as-synthesized thin film is considered one of the most promising candidate materials for electronic nose applications. This work is supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK) under Project No. 115M658 and by the Gazi University Scientific Research Fund under Project No. 05/2016-21.

Keywords: Cu doped ZnO, electrical characterization, gas sensing, zeolite

Procedia PDF Downloads 259
19 Person-Centered Thinking as a Fundamental Approach to Improve Quality of Life

Authors: Christiane H. Kellner, Sarah Reker

Abstract:

The UN-Convention on the Rights of Persons with Disabilities, which Germany also ratified, postulates the necessity of user-centred design, especially when it comes to evaluating the individual needs and wishes of all citizens. Therefore, a multidimensional approach is required. Based on this insight, the structure of the town-like centre in Schönbrunn - a large residential complex and service provider for persons with disabilities in the outskirts of Munich - will be remodelled to open up the community to all people as well as transform social space. This strategy should lead to more equal opportunities and open the way for a much more diverse community. The research project “Index for participation development and quality of life for persons with disabilities” (TeLe-Index, 2014-2016), which is anchored at the Technische Universität München in Munich and at the Franziskuswerk Schönbrunn supports this transformation process called “Vision 2030”. In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities and the professionalization of employees using person-centred planning). Since we cannot present all the issues of the umbrella-project within the conference framework, we will be focusing on one sub-project more in-depth, namely “The Person-Centred Think Tank” [Arbeitskreis Personenzentriertes Denken; PZD]. In the context of person-centred thinking (PCT), persons with disabilities are encouraged to (re)gain or retain control of their lives through the development of new choice options and the validation of individual lifestyles. PCT should thus foster and support both participation and quality of life. The project aims to establish PCT as a fundamental approach for both employees and persons with disabilities in the institution through in-house training for the staff and, subsequently, training for users. 
Hence, for the academic support and supervision team, the questions arising from this venture can be summed up as follows: (1) Has PCT already gained a foothold at the Franziskuswerk Schönbrunn? And (2) how does it affect the interaction with persons with disabilities, and how does it influence the latter’s everyday life? In line with the holistic approach described above, the target groups for this study are both the staff and the users of the institution. Initially, we planned to implement the group discussion method for both target groups. However, in the course of a pretest with persons with intellectual disabilities, it became clear that this type of interview, with hardly any external structuring, provided only limited feedback. In contrast, when the discussions were moderated, there was more interaction and dialogue between the interlocutors. Therefore, for this target group, we introduced structured group interviews. The insights we have obtained so far enable us to present the intermediate results of our evaluation. We analysed and evaluated the group interviews and discussions with the help of qualitative content analysis according to Mayring in order to obtain information about users’ quality of life. We sorted the statements relating to quality of life obtained during the group interviews into three dimensions: subjective wellbeing, self-determination, and participation. The majority of statements, however, related to subjective wellbeing and self-determination. Thus, the limited feedback on participation in particular clearly demonstrates that the lives of most users do not take place beyond the confines of the institution. A number of statements highlighted the fact that PCT is anchored in everyday interactions within the groups. However, the implementation and fostering of PCT on a broader level could not be detected and thus remain further aims of the project.
The additional interviews we have planned should validate the results obtained until now and open up new perspectives.

Keywords: person-centered thinking, research with persons with disabilities, residential complex and service provider, participation, self-determination

Procedia PDF Downloads 297
18 Multiaxial Stress Based High Cycle Fatigue Model for Adhesive Joint Interfaces

Authors: Martin Alexander Eder, Sergei Semenov

Abstract:

Many glass-epoxy composite structures, such as large utility wind turbine rotor blades (WTBs), comprise adhesive joints with typically thick bond lines used to connect the different components during assembly. Performance optimization of rotor blades to increase power output while maintaining high stiffness-to-mass ratios entails intricate geometries in conjunction with complex anisotropic material behavior. Consequently, adhesive joints in WTBs are subject to multiaxial stress states with significant stress gradients depending on the local joint geometry. Moreover, the dynamic aero-elastic interaction of the WTB with the airflow generates non-proportional, variable-amplitude stress histories in the material. Experience shows that a prominent failure type in WTBs is high cycle fatigue failure of adhesive bond line interfaces, which has over time developed into a design driver as WTB sizes increase rapidly. Structural optimization employed at an early design stage therefore sets high demands on computationally efficient interface fatigue models capable of predicting the critical locations prone to interface failure. The numerical stress-based interface fatigue model presented in this work uses the Drucker-Prager criterion to compute three different damage indices corresponding to the two interface shear tractions and the outward normal traction. The two-parameter Drucker-Prager model was chosen because of its ability to consider shear strength enhancement under compression and shear strength reduction under tension. The governing interface damage index is taken as the maximum of the triple. The damage indices are computed through the well-known linear Palmgren-Miner rule after separate rainflow counting of the equivalent shear stress history and the equivalent pure normal stress history.
The equivalent stress signals are obtained by self-similar scaling of the Drucker-Prager surface, whose shape is defined by the uniaxial tensile strength and the shear strength, such that it intersects with the stress point at every time step. This approach implicitly assumes that the damage caused by the prevailing multiaxial stress state is the same as the damage caused by an amplified equivalent uniaxial stress state in the three interface directions. The model was implemented as a Python plug-in for the commercially available finite element code Abaqus for use with solid elements. The model was used to predict the interface damage of an adhesively bonded, tapered glass-epoxy composite cantilever I-beam tested by LM Wind Power under constant-amplitude compression-compression tip load in the high cycle fatigue regime. Results show that the model was able to predict the location of debonding in the adhesive interface between the webfoot and the cap. Moreover, with a set of two different constant-life diagrams, namely in shear and tension, it was possible to predict both the fatigue lifetime and the failure mode of the sub-component with reasonable accuracy. It can be concluded that the fidelity, robustness, and computational efficiency of the proposed model make it especially suitable for rapid fatigue damage screening of large 3D finite element models subject to complex dynamic load histories.
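The damage-accumulation step described in the abstract can be sketched as follows. This is a minimal illustration of the linear Palmgren-Miner rule applied to rainflow-counted cycles, assuming a hypothetical Basquin-type S-N curve; the parameters s_ref, n_ref, and slope are placeholders, not values from the study:

```python
import numpy as np

def basquin_allowable_cycles(stress_amplitude, s_ref=60.0, n_ref=1e6, slope=10.0):
    # Allowable cycles N for a stress amplitude s on a Basquin-type S-N curve:
    #   N = n_ref * (s_ref / s)**slope
    return n_ref * (s_ref / np.asarray(stress_amplitude, dtype=float)) ** slope

def miner_damage(cycle_amplitudes, cycle_counts, **sn_params):
    # Linear Palmgren-Miner damage index D = sum_i n_i / N_i, evaluated on
    # rainflow-counted cycle amplitudes and their corresponding cycle counts.
    n_i = np.asarray(cycle_counts, dtype=float)
    N_i = basquin_allowable_cycles(cycle_amplitudes, **sn_params)
    return float(np.sum(n_i / N_i))

# One load block at the S-N reference amplitude consumes the full life (D = 1).
print(miner_damage([60.0], [1e6]))  # -> 1.0
```

In the model described above, such a damage sum would be computed separately for the two equivalent shear histories and the equivalent normal-stress history, with the governing index taken as the maximum of the three.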

Keywords: adhesive, fatigue, interface, multiaxial stress

Procedia PDF Downloads 141
17 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are made of metal due to the high cost of CFRP fabrication in this size class. The fact that the CFRP manufacturing processes producing the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally more expensive than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to just prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance.
A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster time to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software that determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices are matched between filaments and tapes to ensure excellent bonding; current options include PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low void content and excellent surface finish on both A and B sides can be achieved. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be cost-competitively held at production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 70
16 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), the ‘sins’ of opacity and unfairness must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exacerbated. A hetero-personalized identity can be imposed on the affected individual(s). Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this purpose, human analysts ‘on the loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of EU Directive 2023/2225 of 18 October, which will take effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner.
From this analysis, one can see that a vital component of this software is the XAI layer. It acts as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, once AI is put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
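The training step of the Decision Support System Layer described above can be sketched as follows; this is a minimal illustration of scoring a small neural network with k-fold cross-validation, using an entirely synthetic stand-in for creditworthiness data (the dataset, network size, and fold count are assumptions for illustration, not the study's actual configuration):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for credit data: 500 applicants, 8 hypothetical predictors.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# A small neural network, scored with stratified 5-fold cross-validation so that
# every applicant is used for validation exactly once.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(scores.round(3), scores.mean().round(3))
```

In the framework's terms, the cross-validated scores would inform the Decision Support System Layer's recommendations, while the explanatory agents (LIME, SHAP) would be attached on top of the fitted model.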

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 10
15 Geospatial and Statistical Evidences of Non-Engineered Landfill Leachate Effects on Groundwater Quality in a Highly Urbanised Area of Nigeria

Authors: David A. Olasehinde, Peter I. Olasehinde, Segun M. A. Adelana, Dapo O. Olasehinde

Abstract:

An investigation was carried out on underground water system dynamics within the Ilorin metropolis to monitor subsurface flow and its corresponding pollution. Africa’s population growth rate is the highest among the regions of the world, especially in urban areas. A corresponding increase in waste generation and a change in waste composition from predominantly organic to non-organic waste have also been observed. Percolation of leachate from non-engineered landfills, the chief means of waste disposal in many of its cities, constitutes a threat to underground water bodies. Ilorin city, a transboundary town in southwestern Nigeria, is a ready microcosm of Africa’s unique challenge. Although groundwater is naturally protected from common contaminants such as bacteria, since the subsurface provides a natural attenuation process, groundwater samples have nevertheless been found to possess relatively high levels of dissolved chemical contaminants such as bicarbonate, sodium, and chloride, which pose a great threat to environmental receptors and human consumption. A Geographic Information System (GIS) was used as a tool to illustrate subsurface dynamics and the corresponding pollutant indicators. Forty-four sampling points were selected around known groundwater pollutant sources: major old dumpsites without landfill liners. The results of the groundwater flow directions and the corresponding contaminant transport were presented using expert geospatial software. The experimental results were subjected to four descriptive statistical analyses, namely: principal component analysis, Pearson correlation analysis, scree plot analysis, and Ward cluster analysis.
A regression model was also developed, aimed at finding functional relationships between water quality and the hypothetical landfill-related factors that may influence it, namely: distance of the water body from dumpsites, static water level of groundwater, subsurface permeability (inferred from hydraulic gradient), and soil infiltration. The regression equations developed were validated using a graphical approach. Underground water appears to flow from the northern portion of the Ilorin metropolis southwards, transporting contaminants. The pollution pattern in the study area generally assumed a bimodal distribution, with the major concentration of chemical pollutants in the underground watershed and the recharge area. The correlation between contaminant concentrations and the spread of pollution indicates that areas of lower subsurface permeability display a higher concentration of dissolved chemical content. The principal component analysis showed that conductivity, suspended solids, calcium hardness, total dissolved solids, total coliforms, and coliforms were the chief contaminant indicators in the underground water system in the study area. Pearson correlation revealed a high correlation of electrical conductivity with many of the parameters analyzed. In the same vein, the regression models suggest that the heavier the molecular weight of a chemical contaminant from a point source, the greater the pollution of the underground water system at a short distance. The study concludes that the associative properties of landfills have a significant effect on groundwater quality in the study area.
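The principal component and scree-plot analyses mentioned above can be sketched as follows. This is a minimal, hypothetical illustration on randomly generated correlated data standing in for the 44 sampling points; the indicator names and structure are assumptions for illustration, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical water-quality matrix: 44 sampling points x 5 correlated indicators
# (e.g. conductivity, suspended solids, hardness, TDS, coliform count).
latent = rng.normal(size=(44, 2))
mixing = rng.normal(size=(2, 5))
data = latent @ mixing + 0.1 * rng.normal(size=(44, 5))

# Standardize, then inspect the explained-variance ratios: these are exactly the
# values plotted in a scree plot, and the top components identify the dominant
# contaminant indicators via their loadings (pca.components_).
scaled = StandardScaler().fit_transform(data)
pca = PCA().fit(scaled)
print(pca.explained_variance_ratio_)
```

A scree plot of these ratios would show an "elbow" after the first two components, reflecting the two latent factors built into the synthetic data.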

Keywords: dumpsite, leachate, groundwater pollution, linear regression, principal component

Procedia PDF Downloads 85
14 Triple Immunotherapy to Overcome Immune Evasion by Tumors in a Melanoma Mouse Model

Authors: Mary-Ann N. Jallad, Dalal F. Jaber, Alexander M. Abdelnoor

Abstract:

Introduction: Current evidence confirms that both the innate and adaptive immune systems are capable of recognizing and abolishing malignant cells. The emergence of cancerous tumors in patients is, therefore, an indication that certain cancer cells can resist elimination by the immune system through a process known as “immune evasion”. In fact, cancer cells often exploit regulatory mechanisms to escape immunity. Such mechanisms normally exist to control immune responses and prevent exaggerated or autoimmune reactions. Recently, immunotherapies have shown promising yet limited results. Therefore, this study investigates several immunotherapeutic combinations and devises a triple immunotherapy that harnesses the innate and acquired immune responses towards the annihilation of malignant cells by overcoming their capacity for immune evasion, consequently hampering malignant progression and eliminating established tumors. The aims of the study are to rule out acute/chronic toxic effects of the proposed treatment combinations, to assess the effect of these combinations on tumor growth and survival rates, and to investigate potential underlying mechanisms by analyzing serum levels of anti-tumor cytokines, angiogenic factors, and a tumor progression indicator, as well as the tumor-infiltrating immune cell populations. Methodology: For toxicity analysis, cancer-free C57BL/6 mice are randomized into 9 groups: group 1 untreated, group 2 treated with sterile saline (solvent of the used treatments), group 3 treated with Monophosphoryl-lipid-A (MPLA), group 4 with anti-CTLA4-antibodies, group 5 with 1-Methyl-Tryptophan (1-MT, an Indolamine-Dioxygenase-1 inhibitor), group 6 with both MPLA and anti-CTLA4-antibodies, group 7 with both MPLA and 1-MT, group 8 with both anti-CTLA4-antibodies and 1-MT, and group 9 with all three: MPLA, anti-CTLA4-antibodies, and 1-MT. Mice are monitored throughout the treatment period and for the three following months.
At that point, histological sections from their main organs are assessed. For tumor progression and survival analysis, a murine melanoma model is generated by injecting analogous mice with B16F10 melanoma cells. These mice are segregated into the nine groups listed above, and their tumor size and survival are monitored. To elucidate the underlying mechanisms, melanoma-bearing mice from each group are sacrificed at several time points. Sera are tested to assess the levels of Interleukin-12 (IL-12), Vascular Endothelial Growth Factor (VEGF), and S100B. Furthermore, tumors are excised for analysis of infiltrating immune cell populations, including T-cells, macrophages, natural killer cells, and immune-regulatory cells. Results: Toxicity analysis shows that all treated groups present no signs of either acute or chronic toxicity. Their appearance and weights were comparable to those of the control groups throughout the treatment period and for the following 3 months. Moreover, histological sections from their hearts, kidneys, lungs, and livers were normal. Work is ongoing to complete the remaining study aims. Conclusion: Toxicity was the major concern for the success of the proposed comprehensive combinational therapy. Data generated so far have ruled out any acute or chronic toxic effects. Consequently, the ongoing work is quite promising and may significantly contribute to the development of more effective immunotherapeutic strategies for the treatment of cancer patients.

Keywords: cancer immunotherapy, check-point blockade, combination therapy, melanoma

Procedia PDF Downloads 95
13 Biomedical Application of Green Biosynthesis Magnetic Iron Oxide (Fe3O4) Nanoparticles Using Seaweed (Sargassum muticum) Aqueous Extract

Authors: Farideh Namvar, Rosfarizan Mohamed

Abstract:

In the field of nanotechnology, the use of various biological entities instead of toxic chemicals for the reduction and stabilization of nanoparticles has received extensive attention. This use of biological entities to create nanoparticles has been designated “green” synthesis, and it is considered far more beneficial because it is economical, eco-friendly, and applicable to large-scale synthesis, operating at low pressure, with less energy input, and at low temperatures. The lack of toxic byproducts and the consequent decrease in product degradation render this technique preferable to physical and classical chemical methods. The variety of biomass with reducing properties makes it an ideal candidate for nanoparticle fabrication. Metal oxide nanoparticles have been said to represent a "fundamental cornerstone of nanoscience and nanotechnology" due to their variety of properties and potential applications. However, this also reflects the fact that metal oxides include many diverse types of nanoparticles with large differences in chemical composition and behaviour. In this study, iron oxide nanoparticles (Fe3O4-NPs) were synthesized using a rapid, single-step, and completely green biosynthetic method by reduction of a ferric chloride solution with brown seaweed (Sargassum muticum) water extract, whose polysaccharides act as the main reducing agent and an efficient stabilizer. Antimicrobial activity against six microorganisms was tested using the well diffusion method. The resulting S-IONPs are crystalline in nature, with a cubic shape. The average particle diameter, as determined by TEM, was found to be 18.01 nm. The S-IONPs efficiently inhibited the growth of Listeria monocytogenes, Escherichia coli, and Candida species. These favorable results suggest that S-IONPs could be a promising candidate for the development of future antimicrobial therapies.
The nature of the biosynthesis and the therapeutic potential of S-IONPs could pave the way for further research on the design of green-synthesized therapeutic agents, particularly in nanomedicine, to deal with the treatment of infections. Further studies are needed to fully characterize the toxicity and the mechanisms involved in the antimicrobial activity of these particles. The antioxidant activity of S-IONPs synthesized by the green method was measured by the ABTS (2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid)) radical scavenging assay (IC50 = 1000 µg). Also, catalase gene expression, relative to the control gene GAPDH, increased with increasing S-IONP concentration. For the anti-angiogenesis study, fertilized Ross eggs were divided into four groups: the control and three experimental groups. Gelatin sponges containing albumin were placed on the chorioallantoic membrane and soaked with different concentrations of S-IONPs. All cases were photographed using a photo stereomicroscope. The number and the lengths of the vessels were measured using Image J software. The crown-rump length (CR) and weight of the embryo were also recorded. According to the data analysis, the number and length of the blood vessels, as well as the CR and weight of the embryos, were reduced significantly compared to the control (p < 0.05) in a dose-dependent manner. Total hemoglobin was quantified as an indicator of blood vessel formation, and it decreased in the treated samples, demonstrating an inhibitory effect on angiogenesis.

Keywords: anti-angiogenesis, antimicrobial, antioxidant, biosynthesis, iron oxide (fe3o4) nanoparticles, sargassum muticum, seaweed

Procedia PDF Downloads 294
12 Artificial Intelligence Impact on the Australian Government Public Sector

Authors: Jessica Ho

Abstract:

AI has helped governments, businesses and industries transform the way they do things. AI is used to automate tasks, improving decision-making and efficiency. AI is embedded in sensors and used in automation to save time and eliminate human error in repetitive tasks. Today, we see AI growing through the collection of vast amounts of data to forecast with greater accuracy, inform decision-making, adapt to changing market conditions and offer more personalised services based on consumer habits and preferences. Governments around the world share the opportunity to leverage these disruptive technologies to improve productivity while reducing costs. In addition, these intelligent solutions can help streamline government processes to deliver more seamless and intuitive user experiences for employees and citizens. This is a critical challenge for the NSW Government, as the risk brought by the unprecedented pace of adoption of AI solutions in government cannot yet be determined. Government agencies must ensure that their use of AI complies with relevant laws and regulatory requirements, including those related to data privacy and security. Furthermore, there will always be ethical concerns surrounding the use of AI, such as the potential for bias, intellectual property rights and its impact on job security. Within NSW’s public sector, agencies are already testing AI for crowd control, infrastructure management, fraud compliance, public safety, transport, and police surveillance. Citizens are also attracted to the ease of use and accessibility of AI solutions that do not require specialised technical skills. This increased accessibility, however, must be balanced against a higher risk and exposure to the health and safety of citizens.
On the other hand, public agencies struggle to keep up with this pace while minimising risks, and the low entry cost and open-source nature of generative AI have led to a rapid, organic increase in the development of AI-powered apps – “There is an AI for That” in Government. Other challenges include the fact that there appear to be no legislative provisions that expressly authorise the NSW Government to use AI to make decisions. On the global stage, there are too many actors in the regulatory space, and a sovereign response is needed to minimise multiplicity and regulatory burden. Therefore, traditional corporate risk and governance frameworks, as well as regulatory and legislative frameworks, will need to be re-evaluated for the unique challenges of AI, given its rapidly evolving nature, ethical considerations, and the heightened regulatory scrutiny affecting consumer safety and increasing risks for Government. Creating an effective, efficient governance regime for the NSW Government, adapted to the range of different approaches to the application of AI, is not a mere matter of overcoming technical challenges. Technologies have a wide range of social effects on our surroundings and behaviours. There is compelling evidence that Australia's sustained social and economic advancement depends on AI's ability to spur economic growth, boost productivity, and address a wide range of societal and political issues. AI may also inflict significant damage. If such harm is not addressed, the public's confidence in this kind of innovation will be weakened. This paper suggests several AI regulatory approaches for consideration that are forward-looking and agile while simultaneously fostering innovation and human rights. The anticipated outcome is to ensure that the NSW Government matches the rising levels of innovation in AI technologies with appropriately balanced innovation in AI governance.

Keywords: artificial intelligence, machine learning, rules, governance, government

Procedia PDF Downloads 42
11 Antimicrobial and Antioxidant Activities of Actinobacteria Isolated from the Pollen of Pinus sylvestris Grown on the Lake Baikal Shore

Authors: Denis V. Axenov-Gribanov, Irina V. Voytsekhovskaya, Evgenii S. Protasov, Maxim A. Timofeyev

Abstract:

Isolated ecosystems existing under specific environmental conditions have been shown to be promising sources of new strains of actinobacteria. The taiga forest of Baikal Siberia has not been well studied, and its actinobacterial population remains uncharacterized. The proximity between the huge water mass of Lake Baikal and high mountain ranges influences the structure and diversity of the plant world in Siberia. Here, we report the isolation of eighteen actinobacterial strains from male cones of Pinus sylvestris trees growing on the shore of the ancient Lake Baikal in Siberia. The actinobacterial strains were isolated on solid nutrient MS media and Czapek agar supplemented with cycloheximide and phosphomycin. Identification of the actinobacteria was carried out by 16S rRNA gene sequencing and subsequent analysis of their evolutionary history. Four different liquid and solid media (NL19, DNPM, SG and ISP) were tested for metabolite production. The metabolite extracts produced by the isolated strains were tested for antibacterial and antifungal activities. The antiradical activity of the crude extracts was also assessed. The strain Streptomyces sp. IB 2014 I 74-3, which was active against Gram-negative bacteria, was selected for dereplication analysis using high-performance liquid chromatography coupled with mass spectrometry. Mass detection was performed in both positive and negative modes, with the detection range set to 160–2500 m/z. Data were collected and analyzed using Bruker Compass Data Analysis software, version 4.1. Dereplication was performed using the Dictionary of Natural Products (DNP) database, version 6.1, with the following search parameters: accurate molecular mass, absorption spectra and source of compound isolation. In addition to the more common representative strains of Streptomyces, several species belonging to the genera Rhodococcus, Amycolatopsis, and Micromonospora were isolated.
Several of the selected strains were deposited in the Russian Collection of Agricultural Microorganisms (RCAM), St. Petersburg, Russia. All isolated strains exhibited antibacterial and antifungal activities. We identified several strains that inhibited the growth of the pathogen Candida albicans but did not hinder the growth of Saccharomyces cerevisiae. Several isolates were active against Gram-positive and Gram-negative bacteria. Moreover, extracts of several strains demonstrated high antioxidant activity. The high proportion of biologically active strains producing antibacterial and specific antifungal compounds may reflect their role in protecting pollen against phytopathogens. Dereplication of the secondary metabolites of the strain Streptomyces sp. IB 2014 I 74-3 revealed a total of 59 major compounds in the culture liquid extract of the strain cultivated in ISP medium. Eight compounds were preliminarily identified based on characteristics described in the Dictionary of Natural Products database using the search parameters above: Streptomyces sp. IB 2014 I 74-3 was found to produce saframycins A, Y3 and S; 2-amino-3-oxo-3H-phenoxazine-1,8-dicarboxylic acid; galtamycinone; platencins A4-13R and A4-4S; ganefromycin d1; the antibiotic SS 8201B; and streptothricin D, 40-decarbamoyl, 60-carbamoyl. Moreover, forty-nine of the 59 compounds detected in the extract did not return any positive hits in the DNP database and could not be identified from the available mass-spectrometry data. Thus, these compounds might represent new findings.
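The accurate-mass matching step of a dereplication workflow like the one described can be sketched as follows. The compound masses in the mini-database, the [M+H]+ adduct assumption and the 5 ppm tolerance are illustrative placeholders, not values taken from this study or from the DNP database:

```python
PROTON = 1.007276  # mass of a proton, Da

# Hypothetical mini-database: (name, monoisotopic neutral mass in Da).
# Masses are placeholders for illustration only.
dnp_entries = [
    ("saframycin A", 562.2427),
    ("galtamycinone", 448.1158),
    ("streptothricin D", 757.4199),
]

def ppm_error(observed, expected):
    """Relative mass error in parts per million."""
    return (observed - expected) / expected * 1e6

def dereplicate(mz_observed, tolerance_ppm=5.0, adduct_shift=PROTON):
    """Return names of entries whose [M+H]+ m/z matches within tolerance."""
    hits = []
    for name, neutral_mass in dnp_entries:
        mz_expected = neutral_mass + adduct_shift
        if abs(ppm_error(mz_observed, mz_expected)) <= tolerance_ppm:
            hits.append(name)
    return hits
```

A real workflow would additionally filter candidates by absorption spectra and biological source, as the abstract's search parameters indicate.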

Keywords: actinobacteria, Baikal Lake, biodiversity, male cones, Pinus sylvestris

Procedia PDF Downloads 202
10 Biotech Processes to Recover Valuable Fraction from Buffalo Whey Usable in Probiotic Growth, Cosmeceutical, Nutraceutical and Food Industries

Authors: Alberto Alfano, Sergio D’ambrosio, Darshankumar Parecha, Donatella Cimini, Chiara Schiraldi

Abstract:

The main objective of this study is the setup of an efficient small-scale platform for the conversion of local renewable waste materials, such as whey, into added-value products, thereby reducing the environmental impact and costs deriving from the disposal of processing waste. Buffalo milk whey derived from the cheese-making process, called second cheese whey, is the main by-product of the dairy industry. Whey is the most abundant and most polluting by-product of cheese manufacturing; consisting of lactose, lactic acid, proteins, and salts, it can nevertheless become an added-value product. In Italy, and in particular in the Campania region, soft cheese production generates a large volume of liquid waste, especially during late spring and summer. This project is part of a circular economy perspective focused on converting a potentially polluting and difficult-to-purify waste into a resource to be exploited, and it embodies the concept of the three “R”s: reduce, recycle, and reuse. Special attention was paid to the production of health-promoting biomolecules and biopolymers, which may be exploited in different segments of the food and pharmaceutical industries. These biomolecules may be recovered through appropriate processes and reused to obtain added-value products. Accordingly, ultrafiltration and nanofiltration processes were performed to fractionate bioactive components starting from buffalo milk whey. In this direction, the present study focused on the implementation of a downstream process that converts waste generated by the food and food-processing industries into added-value products with potential applications. Owing to innovative downstream and biotechnological processes, what would otherwise be a waste product may be considered a resource from which to obtain high added-value products, such as food supplements (probiotics), cosmeceuticals, biopolymers, and recyclable purified water.
Besides targeting gastrointestinal disorders, probiotics such as Lactobacilli have been reported to improve immunomodulation and protect the host against infections caused by viral and bacterial pathogens. Interestingly, inactivated microbial (probiotic) cells and their metabolic products, referred to as parabiotics and postbiotics, respectively, also play a crucial role and act as mediators in the modulation of the host’s immune function. To boost the production of biomass (viable and/or heat-inactivated cells) and/or the synthesis of growth-related postbiotics, such as EPS, efficient and sustainable fermentation processes are necessary. Based on a “zero-waste” approach, waste generated by local industries can be recovered and recycled to develop sustainable biotechnological processes yielding probiotics as well as postbiotics and parabiotics, to be tested as bioactive compounds against gastrointestinal disorders. The results have shown that it was possible to recover an ultrafiltration retentate with characteristics suitable for use against skin dehydration, for producing films (i.e., packaging for the food industry), or as a wound repair agent, and a nanofiltration retentate from which to recover lactic acid and carbon sources (e.g., lactose, glucose) for microbial cultivation. Finally, the last goal is to obtain purified water that can be reused throughout the process. In fact, water reclamation and reuse provide a unique and viable opportunity to augment traditional water supplies, a key issue nowadays.
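The ultrafiltration and nanofiltration steps above can be summarized with the standard mass-balance quantities used to evaluate such membrane processes; the volumes and concentrations below are hypothetical, chosen only to illustrate the calculations:

```python
def concentration_factor(feed_volume, retentate_volume):
    """Volumetric concentration factor (VCF) of a membrane step."""
    return feed_volume / retentate_volume

def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf; R close to 1 means the solute
    is retained (e.g., proteins by UF, lactose by NF)."""
    return 1.0 - c_permeate / c_feed

# Illustrative two-stage scheme: UF concentrates whey proteins, then the
# UF permeate is nanofiltered to retain lactose and lactic acid.
vcf_uf = concentration_factor(feed_volume=1000.0, retentate_volume=100.0)
r_lactose_nf = observed_rejection(c_permeate=2.0, c_feed=45.0)  # g/L
```

High lactose rejection at the NF stage is what allows the NF retentate to serve as a carbon source for microbial cultivation while the permeate approaches reusable water quality.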

Keywords: biotech process, downstream process, probiotic growth, from waste to product, buffalo whey

Procedia PDF Downloads 44
9 PsyVBot: Chatbot for Accurate Depression Diagnosis using Long Short-Term Memory and NLP

Authors: Thaveesha Dheerasekera, Dileeka Sandamali Alwis

Abstract:

The escalating prevalence of mental health issues, such as depression and suicidal ideation, is a matter of significant global concern. A variety of factors, such as life events, social isolation, and preexisting physiological or psychological health conditions, can instigate or exacerbate these conditions. Traditional approaches to diagnosing depression require a considerable amount of time and necessitate the involvement of adept practitioners. This underscores the necessity for automated systems capable of promptly detecting and diagnosing symptoms of depression. The PsyVBot system employs sophisticated natural language processing and machine learning methodologies, including the use of the NLTK library for dataset preprocessing and a Long Short-Term Memory (LSTM) model. PsyVBot exhibits a remarkable ability to diagnose depression, achieving a 94% accuracy rate through the analysis of user input. Consequently, this resource proves to be efficacious for individuals, particularly those enrolled in academic institutions, who may encounter challenges pertaining to their psychological well-being. The PsyVBot employs an LSTM model comprising a total of three layers, namely an embedding layer, an LSTM layer, and a dense layer. The arrangement of these layers facilitates a precise examination of linguistic patterns that are associated with depression. PsyVBot has the capability to accurately assess an individual's level of depression through the identification of linguistic and contextual cues. This is achieved via a rigorous training regimen executed on a dataset comprising information sourced from the subreddit r/SuicideWatch. The diversity of the data in this dataset supports precise and sensitive identification of symptoms linked with depression, thereby supporting accuracy.
PsyVBot not only possesses diagnostic capabilities but also enhances the user experience through the utilization of audio outputs. This feature enables users to engage in more captivating and interactive interactions. The PsyVBot platform offers individuals the opportunity to conveniently diagnose mental health challenges through a confidential and user-friendly interface. Regarding the advancement of PsyVBot, maintaining user confidentiality and upholding ethical principles are of paramount significance. It is imperative to note that diligent efforts are undertaken to adhere to ethical standards, thereby safeguarding the confidentiality of user information and ensuring its security. Moreover, the chatbot fosters a conducive atmosphere that is supportive and compassionate, thereby promoting psychological welfare. In brief, PsyVBot is an automated conversational agent that utilizes an LSTM model to assess the level of depression in accordance with the input provided by the user. The demonstrated accuracy rate of 94% serves as a promising indication of the potential efficacy of employing natural language processing and machine learning techniques in tackling challenges associated with mental health. The reliability of PsyVBot is further improved by the fact that it makes use of the Reddit dataset and incorporates Natural Language Toolkit (NLTK) for preprocessing. PsyVBot represents a pioneering and user-centric solution that furnishes an easily accessible and confidential medium for seeking assistance. The present platform is offered as a modality to tackle the pervasive issue of depression and the contemplation of suicide.
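The three-layer architecture described (embedding, LSTM, dense) can be sketched as a minimal forward pass. All dimensions, weights, and token ids below are illustrative, since the abstract does not report hyperparameters; a real system would train such a model (e.g., in Keras or PyTorch) on the preprocessed Reddit corpus rather than use random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Minimal embedding -> LSTM -> dense(sigmoid) forward pass, mirroring
    the three-layer stack described above. Weights are random: this sketches
    the structure, not a trained model."""

    def __init__(self, vocab_size, embed_dim=32, hidden_dim=64):
        self.embedding = rng.normal(0, 0.1, (vocab_size, embed_dim))
        # Gate weights for input, forget, cell, and output gates, stacked.
        self.W = rng.normal(0, 0.1, (4 * hidden_dim, embed_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.w_out = rng.normal(0, 0.1, hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, token_ids):
        H = self.hidden_dim
        h = np.zeros(H)
        c = np.zeros(H)
        for t in token_ids:
            x = self.embedding[t]                      # embedding lookup
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f = sigmoid(z[:H]), sigmoid(z[H:2 * H])  # input/forget gates
            g, o = np.tanh(z[2 * H:3 * H]), sigmoid(z[3 * H:])
            c = f * c + i * g                          # cell state update
            h = o * np.tanh(c)                         # hidden state
        # Dense layer with sigmoid: probability of the "depressed" class.
        return sigmoid(self.w_out @ h)

model = TinyLSTMClassifier(vocab_size=5000)
score = model.forward([12, 7, 430, 9])  # token ids from NLTK preprocessing
```

The recurrent cell state is what lets the model accumulate contextual cues across a whole message before the dense layer produces a single depression-probability score.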

Keywords: chatbot, depression diagnosis, LSTM model, natural language processing

Procedia PDF Downloads 35