Search results for: spiking neuron models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6858


1668 Numerical Simulation of Footing on Reinforced Loose Sand

Authors: M. L. Burnwal, P. Raychowdhury

Abstract:

Earthquakes lead to adverse effects on buildings resting on soft soils. Mitigating the response of shallow foundations on soft soil with different methods reduces settlement and provides foundation stability. A few methods, such as rocking foundations (used in performance-based design), deep foundations, prefabricated drains, grouting, and vibro-compaction, are used to control the pore pressure and enhance the strength of loose soils. One of the problems with these methods is that the settlement is uncontrollable, leading to differential settlement of the footings and, further, to the collapse of buildings. The present study investigates the utility of geosynthetics as a potential improvement of the subsoil to reduce the earthquake-induced settlement of structures. A steel moment-resisting frame building resting on loose, liquefiable dry soil, subjected to the Uttarkashi 1991 and Chamba 1995 earthquakes, is used for the soil-structure interaction (SSI) analysis. The continuum model can simultaneously simulate the structure, soil, interfaces, and geogrids in the OpenSees framework. Soil is modeled with the PressureDependentMultiYield (PDMY) material model and quad elements that provide the stress-strain response at Gauss points, calibrated to predict the behavior of Ganga sand. The model analyzed with tied-degree-of-freedom contact reveals that the system responses align with the shake table experimental results. An attempt is made to study the responses of the footing, structure, and geosynthetics with unreinforced and reinforced bases under varying parameters. The results show that geogrid reinforcement of the shallow foundation effectively reduces the settlement by 60%.

Keywords: settlement, shallow foundation, SSI, continuum FEM

Procedia PDF Downloads 194
1667 [Keynote Talk]: Some Underlying Factors and Partial Solutions to the Global Water Crisis

Authors: Emery Jr. Coppola

Abstract:

Water resources are being depleted and degraded at an alarming and non-sustainable rate worldwide. In some areas, this is progressing more slowly. In other areas, irreversible damage has already occurred, rendering regions largely unsuitable for human existence, with destruction of the environment and the economy. Today, 2.5 billion people, or 36 percent of the world population, live in water-stressed areas. The convergence of factors that created this global water crisis includes local, regional, and global failures. In this paper, a survey of some of these factors is presented. They include abuse of political power and regulatory acquiescence, improper planning and design, ignoring good science and models, systemic failures, and division between the powerful and the powerless. Increasing water demand imposed by exploding human populations and growing economies, with shortfalls exacerbated by climate change and continuing water quality degradation, will accelerate this growing water crisis in many areas. Without regional measures to improve water efficiencies and protect dwindling and vulnerable water resources, environmental and economic displacement of populations and conflict over water resources will only grow. Perhaps more challenging, a global commitment is necessary to curtail, if not reverse, the devastating effects of climate change. Factors will be illustrated by real-world examples, followed by some partial solutions offered by water experts for helping to mitigate the growing water crisis. These solutions include more water-efficient technologies, education and incentivization for water conservation, wastewater treatment for reuse, and improved data collection and utilization.

Keywords: climate change, water conservation, water crisis, water technologies

Procedia PDF Downloads 235
1666 Challenges of Sustainable Development of Small and Medium-Sized Enterprises in Georgia

Authors: Kharaishvili Eteri

Abstract:

The article highlights the importance of small and medium-sized enterprises in achieving the goals of sustainable development of the economy and increasing the well-being of the population. The opinion is put forward that it is necessary to adapt the activities of small and medium-sized firms in Georgia to sustainable business models. Therefore, it is important to identify the challenges that will ensure compliance with the goals and requirements of sustainable development of small and medium-sized enterprises. Objectives: The goal of the study is to reveal the challenges of sustainable development in small and medium-sized enterprises in Georgia and to develop recommendations for strategic development opportunities. Methodology: The challenges of sustainable development of small and medium-sized enterprises are investigated with the following methodology: bibliographic research of scientific works and reports of organizations is carried out; based on the grouping of sustainable development goals, the performance indicators of these goals are studied; differences with respect to the corresponding indicators of European countries are determined by the comparison method; a matrix scheme establishes the conditions and tools for sustainable development; and challenges of sustainable development are identified by factor analysis. Contributions: Trends in the sustainable development of small and medium-sized enterprises are studied from the point of view of economic, social, and environmental factors. To ensure sustainability, the conditions and tools for sustainable development are established (certified supply chains and global markets, allocation of the financial resources necessary for sustainable development, proper public procurement, a highly qualified workforce, etc.). Several main challenges have been identified in the sustainable development of small and medium-sized enterprises, including limited internal resources; institutional factors, especially vague and imperfect regulations and bureaucracy; a low level of investment; and a low level of qualification of human capital.

Keywords: small and medium-sized enterprises, sustainable development, conditions of sustainable development, strategic directions of sustainable development

Procedia PDF Downloads 105
1665 Anatomical and Histological Characters of Cymbopogon nardus Roots and Its Mutagenic Properties

Authors: Pravaree Phuneerub, Chanida Palanuvej, Nijsiri Ruangrungsi

Abstract:

Cymbopogon nardus Rendel (Family Gramineae) is commonly known as citronella grass. The dried root of C. nardus is used as an antipyretic, anti-inflammatory, analgesic, and anticancer remedy in traditional Thai medicine. Transverse sections and pulverized C. nardus root were illustrated. The volatile oil was extracted from the oil glands by hydrodistillation and analysed by GC/MS. Cymbopogon nardus root was exhaustively extracted by continuous maceration in ethanol and water, respectively. The mutagenic and antimutagenic properties of the ethanol extract and the fractionated water extract of C. nardus root were evaluated by the Ames assay using S. typhimurium strains TA98 and TA100 as the models. The results indicated that the anatomical character of the root transverse section displayed epidermis, parenchyma, oil glands, phloem, xylem vessels, endodermis, and pith. Histological characters of the root powder showed parenchyma containing oleoresin, parenchyma in longitudinal view, reticulate vessels, annular vessels, starch granules, and fragments of fiber. The root volatile oil was rich in sesquiterpenes dominated by elemol (22.87%) and alpha-eudesmol (16.09%). Regarding mutagenic activity, both extracts of C. nardus were not mutagenic toward S. typhimurium strains TA98 and TA100. Furthermore, the ethanol extract and the fractionated water extract of C. nardus root demonstrated a strong antimutagenic effect against nitrite-treated 1-aminopyrene in S. typhimurium strains TA98 and TA100. The present investigation suggests that the dried root extract of C. nardus can be further developed as a promising antimutagenic agent.

Keywords: Cymbopogon nardus, volatile oil analysis, mutagenic, antimutagenic effect, Ames Salmonella assay

Procedia PDF Downloads 348
1664 Biophysical Features of Glioma-Derived Extracellular Vesicles as Potential Diagnostic Markers

Authors: Abhimanyu Thakur, Youngjin Lee

Abstract:

Glioma is a lethal brain cancer whose early diagnosis and prognosis are limited due to the dearth of a suitable technique for its early detection. Current approaches, including magnetic resonance imaging (MRI), computed tomography (CT), and invasive biopsy, for the diagnosis of this lethal disease hold several limitations, demanding an alternative method. Recently, extracellular vesicles (EVs), mainly exosomes and microvesicles (MVs), have been used in numerous biomarker studies; they are found in most cells and biofluids, including blood, cerebrospinal fluid (CSF), and urine. Remarkably, glioma cells (GMs) release a high number of EVs, which are found to cross the blood-brain barrier (BBB) and mirror the constituents of the parent GMs, including proteins and lncRNAs; however, the biophysical properties of EVs have not yet been explored as a biomarker for glioma. We isolated EVs from the cell culture conditioned medium of GMs and regular primary culture, and from the blood and urine of wild-type (WT) and glioma mouse models, and characterized them by a nano tracking analyzer, transmission electron microscopy, immunogold-EM, and differential light scanning. Next, we measured the biophysical parameters of GM-EVs by using atomic force microscopy. Further, the functional constituents of the EVs were examined by FTIR and Raman spectroscopy. Exosomes and MVs derived from GMs, blood, and urine showed distinct biophysical parameters (roughness, adhesion force, and stiffness) that differed from those of regular primary glial cells, WT blood, and WT urine, which can be attributed to their characteristic functional constituents. Therefore, biophysical features can be potential diagnostic biomarkers for glioma.

Keywords: glioma, extracellular vesicles, exosomes, microvesicles, biophysical properties

Procedia PDF Downloads 142
1663 Nonlinear Finite Element Modeling of Deep Beam Resting on Linear and Nonlinear Random Soil

Authors: M. Seguini, D. Nedjar

Abstract:

An accurate nonlinear analysis of a deep beam resting on an elastic perfectly plastic soil is carried out in this study. In fact, a nonlinear finite element model for the large deflection and moderate rotation of an Euler-Bernoulli beam resting on linear and nonlinear random soil is investigated. The geometric nonlinear analysis of the beam is based on von Kármán theory, where the Newton-Raphson incremental iteration method is implemented in a Matlab code to solve the nonlinear equation of the soil-beam interaction system. Two analyses (deterministic and probabilistic) are proposed to verify the accuracy and the efficiency of the proposed model, where the theory of the local average based on the Monte Carlo approach is used to analyze the effect of the spatial variability of the soil properties on the nonlinear beam response. The effects of six main parameters are investigated: the external load, the length of the beam, the coefficient of subgrade reaction of the soil, the Young's modulus of the beam, the coefficient of variation, and the correlation length of the soil's coefficient of subgrade reaction. A comparison between the beam resting on linear and nonlinear soil models is presented for different beam lengths and external loads. Numerical results have been obtained for the combination of the geometric nonlinearity of the beam and the material nonlinearity of the random soil. This comparison highlighted the need to include the material nonlinearity and spatial variability of the soil in the geometric nonlinear analysis when the beam undergoes large deflections.
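To make the solution strategy concrete, here is a deliberately simplified, single-degree-of-freedom Python sketch of the two ingredients combined in the study: Newton-Raphson iteration on a nonlinear (elastic perfectly plastic) soil reaction and Monte Carlo sampling of the subgrade reaction coefficient. All stiffness, load, and variability values are hypothetical and do not reproduce the authors' Matlab finite element model.

```python
# Simplified 1-DOF illustration: beam stiffness in series with an elastic-
# perfectly-plastic soil spring, solved by Newton-Raphson; the subgrade
# stiffness is sampled from a lognormal distribution to mimic spatial
# variability. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def soil_reaction(w, k_s, p_y):
    """Elastic-perfectly-plastic soil reaction and its tangent stiffness."""
    if k_s * w < p_y:
        return k_s * w, k_s
    return p_y, 0.0

def solve_deflection(F, k_beam, k_s, p_y, tol=1e-10, max_iter=50):
    """Newton-Raphson solution of k_beam*w + p_soil(w) - F = 0."""
    w = 0.0
    for _ in range(max_iter):
        p, dp = soil_reaction(w, k_s, p_y)
        residual = k_beam * w + p - F
        if abs(residual) < tol:
            break
        w -= residual / (k_beam + dp)   # Newton update
    return w

F, k_beam, p_y = 120.0, 2.0e3, 80.0          # load, beam stiffness, soil yield force
k_s_mean, cov = 5.0e3, 0.4                   # mean subgrade stiffness, coeff. of variation
sigma = np.sqrt(np.log(1.0 + cov**2))        # lognormal parameters from mean and COV
mu = np.log(k_s_mean) - 0.5 * sigma**2
samples = rng.lognormal(mu, sigma, size=2000)
w = np.array([solve_deflection(F, k_beam, k, p_y) for k in samples])
print(f"deflection mean = {w.mean():.4f}, std = {w.std():.4f}")
```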

Keywords: finite element method, geometric nonlinearity, material nonlinearity, soil-structure interaction, spatial variability

Procedia PDF Downloads 414
1662 Identification of Breeding Objectives for Begait Goat in Western Tigray, North Ethiopia

Authors: Hagos Abraham, Solomon Gizaw, Mengistu Urge

Abstract:

A sound breeding objective is the basis for genetic improvement in the overall economic merit of farm animals. The Begait goat is one of the identified breeds in Ethiopia; it is a multipurpose breed, serving as a source of cash income and a source of food (meat and milk). Despite its importance, no formal breeding objectives exist for the Begait goat. The objective of the present study was to identify breeding objectives for the breed through two approaches, an own-flock ranking experiment and deterministic bio-economic models, as a preliminary step towards designing sustainable breeding programs for the breed. In the own-flock ranking experiment, a total of forty-five households were visited at their homesteads and were asked to select, with reasons, the first best, second best, third best, and the most inferior does from their own flock. The age and the previous reproduction and production information of the identified animals were inquired about; live body weight and some linear body measurements were taken. The bio-economic model included performance traits (weights, daily weight gain, kidding interval, litter size, milk yield, kid mortality, pregnancy and replacement rates) and economic (revenue and cost) parameters. It was observed that there was close agreement between the farmers' ranking and the bio-economic model results. In general, the results of the present study indicated that Begait goat owners could improve the performance of their goats and the profitability of their farms by selecting for litter size, six-month weight, pre-weaning kid survival rate, and milk yield.
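As an illustration of how a bio-economic model weighs breeding-objective traits, the sketch below computes a per-doe profit from a handful of performance traits; every price, cost, and trait level is an invented placeholder, not data from the study.

```python
# Hypothetical per-doe bio-economic profit function: revenue from weaned kids
# and milk minus fixed annual costs. Changing one trait at a time shows its
# marginal economic value, which is the basis for weighting breeding objectives.
def doe_profit(litter_size, kid_survival, six_month_wt_kg, milk_yield_l,
               kid_price_per_kg=3.0, milk_price_per_l=0.5,
               feed_cost=55.0, health_cost=10.0, other_cost=15.0):
    """Annual revenue minus costs for one breeding doe (illustrative units)."""
    kid_revenue = litter_size * kid_survival * six_month_wt_kg * kid_price_per_kg
    milk_revenue = milk_yield_l * milk_price_per_l
    costs = feed_cost + health_cost + other_cost
    return kid_revenue + milk_revenue - costs

base = doe_profit(1.4, 0.85, 18.0, 90.0)
better_litter = doe_profit(1.6, 0.85, 18.0, 90.0)
print(f"baseline profit: {base:.1f}, with larger litter size: {better_litter:.1f}")
```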

Keywords: bio-economic model, economic parameters, own-flock ranking, performance traits

Procedia PDF Downloads 67
1661 Evaluation of Central Nervous System Activity of Synthesized 5, 5-Diphenylimidazolidine-2, 4-Dione Derivatives

Authors: Shweta Verma

Abstract:

Background: Epilepsy is a chronic non-communicable central nervous system (CNS) disorder which affects a large population of all ages. Different classes of drugs are used for the treatment of this neurological disorder, but due to augmented drug resistance and side effects, these drugs become incompetent. Therefore, we designed the synthesis of ten new derivatives of phenytoin. The phenytoin moiety was hybridized with different phenols using a three-step approach. The synthesized molecules were then investigated for different physicochemical parameters, such as Log P values, using diverse software programs to predict their potential to cross the blood-brain barrier. Objective: The phenytoin derivatives were designed, synthesized, and characterized to meet the structural requirements indispensable for antiepileptic activity. Method: Firstly, the chloroacetylation of 5,5-diphenylhydantoin was carried out, and then various substituted phenols were added to it. The synthesized compounds were characterized and evaluated for antianxiety activity by the elevated plus maze method, for antiepileptic activity by the subcutaneous pentylenetetrazole (scPTZ) and maximal electroshock (MES) models, and for neurotoxicity. Result: A number of derivatives of 5,5-diphenylhydantoin were developed and optimized. A number of parameters were optimized, which revealed that the compounds containing a chloro group, such as C3 and C6, showed notable potential when compared with the standard drug diazepam. Other compounds containing nitro and methyl groups were also found to possess activity. Conclusion: In summary, new 5,5-diphenylhydantoin derivatives were synthesized. The results show that the compounds containing a chloro group are more potent for CNS activity. The new compounds have the potential to be optimized further to generate new scaffolds to treat various CNS disorders.
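As a small illustration of the Log P screening mentioned above, the following sketch computes a Crippen Log P estimate with RDKit for the phenytoin core; the SMILES string and the choice of RDKit are assumptions for demonstration only, since the study used several software programs and derivative structures would be substituted by the reader.

```python
# Minimal physicochemical screen: Crippen Log P and molecular weight of the
# phenytoin (5,5-diphenylhydantoin) core. Derivative SMILES would replace the
# string below; RDKit is one of several possible tools.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

phenytoin = Chem.MolFromSmiles("O=C1NC(=O)C(c2ccccc2)(c2ccccc2)N1")
logp = Crippen.MolLogP(phenytoin)
mw = Descriptors.MolWt(phenytoin)
print(f"Log P ~ {logp:.2f}, molecular weight ~ {mw:.1f}")
```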

Keywords: phenytoin, parameters, CNS activity, blood-brain barrier, Log P, CNS active

Procedia PDF Downloads 72
1660 Valorization of a Forest Waste, Modified P-Brutia Cones, by Biosorption of Methyl Green

Authors: Derradji Chebli, Abdallah Bouguettoucha, Abdelbaki Reffas, Khalil Guediri, Abdeltif Amrane

Abstract:

The removal of Methyl Green dye (MG) from aqueous solutions using modified P-brutia cones (PBH and PBN) has been investigated in this work. Physical parameters such as pH, temperature, initial MG concentration, and ionic strength were examined in batch sorption experiments. Adsorption of MG was conducted at the natural pH of 4.5 because the dye is only stable in the pH range of 3.8 to 5. It was observed in the experiments that the P-brutia cones treated with NaOH (PBN) exhibited a higher affinity and adsorption capacity compared to the P-brutia cones treated with HCl (PBH), and the biosorption capacity of the modified P-brutia cones (PBN and PBH) was enhanced by increasing the temperature. This is confirmed by the thermodynamic parameters (ΔG° and ΔH°), which show that the adsorption of MG was spontaneous and endothermic in nature. The positive values of ΔS° suggested an increase in randomness for both adsorbents (PBN and PBH) during the adsorption process. Pseudo-first-order, pseudo-second-order, and intraparticle diffusion kinetic models were examined to analyze the sorption process; they showed that the pseudo-second-order model best describes the adsorption of MG on PBN and PBH, with a correlation coefficient R² > 0.999. Ionic strength was shown to have a negative impact on the adsorption of MG on the two supports. A reduction of 68.5% in the adsorption capacity at Ce = 30 mg/L was found for PBH, while PBN did not show a significant influence of ionic strength on adsorption, especially in the presence of NaCl. Among the tested isotherm models, the Langmuir isotherm was found to be the most relevant for describing MG sorption onto modified P-brutia cones, with a correlation factor R² > 0.999. The adsorption capacity of P-brutia cones was thus confirmed for the removal of a dye, MG, from aqueous solution. We also note that P-brutia cones are a widely available forest material and a low-cost biomaterial.
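For readers who wish to reproduce this type of analysis, the sketch below fits the pseudo-second-order kinetic model and the Langmuir isotherm by nonlinear least squares; the data arrays are hypothetical placeholders, not the measurements reported here.

```python
# Fitting the kinetic and isotherm models named above with scipy; the arrays
# below are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def langmuir(Ce, qmax, KL):
    """qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # contact time (min)
qt = np.array([12, 19, 26, 31, 33, 34, 34.5])              # uptake (mg/g), illustrative
Ce = np.array([2, 5, 10, 20, 30, 50], dtype=float)         # equilibrium conc. (mg/L)
qe = np.array([15, 25, 33, 40, 43, 46])                    # equilibrium uptake (mg/g)

(qe_fit, k2), _ = curve_fit(pseudo_second_order, t, qt, p0=[35, 0.01])
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[50, 0.1])
print(f"pseudo-second-order: qe={qe_fit:.1f} mg/g, k2={k2:.4f} g/(mg min)")
print(f"Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg")
```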

Keywords: adsorption, p-brutia cones, forest wastes, dyes, isotherm

Procedia PDF Downloads 379
1659 A Phenomenological Approach to Computational Modeling of Analogy

Authors: José Eduardo García-Mendiola

Abstract:

In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper considers the structure of analogy, based on the possibility of grounding the genesis of its elements in Husserl's genetic theory of association. Among the particular processes which take place in order to arrive at analogical inferences, there is one that is crucial for enabling efficient retrieval of base cases from long-term memory, namely analogical transference grounded on familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and the relationships between them using previous knowledge of another familiar domain of objects and relations. However, in seeking a complete description of the analogy process, a deeper consideration of its phenomenological nature is required insofar as its simulation by computational programs is the aim. One would also get an idea of how complex it would be to have a fully computational account of the elements of analogy. In fact, familiarity is not the result of a mere chain of repetitions of objects or events but is generated insofar as the object/attribute or event in question is integrable inside a certain context that is taking shape as functionalities and functional approaches or perspectives of the object are being defined. Its familiarity is not generated by the identification of its parts or objective determinations as if they were isolated from those functionalities and approaches. Rather, at the core of such a familiarity between entities of different kinds lies the way they are functionally encoded. Hoping to make deeper inroads into these topics, this essay argues that cognitive-computational perspectives can benefit from the phenomenological projection of the analogy process, both by reviewing achievements already obtained and by exploring new theoretical-experimental configurations towards the implementation of analogy models in special-purpose as well as general-purpose machines.

Keywords: analogy, association, encoding, retrieval

Procedia PDF Downloads 122
1658 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling

Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski

Abstract:

A high power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance as well as for high reliability and ignition certainty. This work shows methods of simulating the ignition process and its impact on engine parameters. For given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, a larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take a longer time to propagate. In order to shorten the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multicycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion and species transport models. The relative ignition timing difference between the two spark sources is constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature and rate of heat release was analyzed. The most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations during the pre-ignition period were analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.

Keywords: CFD, combustion, ignition, simulation, timing

Procedia PDF Downloads 296
1657 The Optimization of TICSI in the Convergence Mechanism of Urban Water Management

Authors: M. Macchiaroli, L. Dolores, V. Pellecchia

Abstract:

With the recent Resolution n. 580/2019/R/idr, the Italian Regulatory Authority for Energy, Networks, and Environment (ARERA) for Urban Water Management has introduced, for water operations characterized by persistent critical issues regarding the planning and organization of the service and the implementation of the interventions necessary for the improvement of infrastructures and management quality, a new mechanism for determining tariffs: the regulatory scheme of Convergence. The aim of this regulatory scheme is to overcome the Water Service Divide in order to improve the stability of local institutional structures, technical quality, and contractual quality, as well as to guarantee elements of transparency for Users of the Service. The Convergence scheme presupposes the identification of the cost items to be considered in the tariff in parametric terms, distinguishing three possible cases according to the type of historical data available to the Manager. The study, in particular, focuses on operations that have neither data on tariff revenues nor data on operating costs. In this case, the Manager's Constraint on Revenues (VRG) is estimated on the basis of a reference benchmark and becomes the starting point for defining the structure of the tariff classes, in compliance with the TICSI provisions (Integrated Text for tariff classes, ARERA's Resolution n. 665/2017/R/idr). The proposed model implements recent studies on optimization models for the definition of tariff classes in compliance with the constraints dictated by TICSI in the application of the Convergence mechanism, offering itself as a support tool for the Managers and the local water regulatory Authority in the decision-making process.
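A purely illustrative sketch of the kind of tariff-class optimization described above is given below: per-class tariffs are chosen to recover an assumed revenue constraint (VRG) while keeping the class structure progressive. The class volumes, bounds, and objective are hypothetical simplifications, not the actual TICSI rules.

```python
# Toy linear program: pick tariffs t1..t4 (euro/m3) for four consumption
# classes so that billed revenue equals an assumed VRG, tariffs stay within an
# admissible range, and the structure remains progressive (t1 <= ... <= t4),
# while keeping the social (first) class as cheap as possible.
import numpy as np
from scipy.optimize import linprog

volumes = np.array([40.0, 30.0, 20.0, 10.0])   # billed volume per class (assumed, Mm3)
VRG = 150.0                                    # revenue constraint (assumed, M euro)

c = np.array([1.0, 0.0, 0.0, 0.0])             # minimise the first-class tariff
A_ub = np.array([[1, -1, 0, 0],                # progressivity: t_i - t_{i+1} <= 0
                 [0, 1, -1, 0],
                 [0, 0, 1, -1]], dtype=float)
b_ub = np.zeros(3)
A_eq = volumes.reshape(1, -1)                  # revenue: sum(volume_i * t_i) == VRG
b_eq = np.array([VRG])
bounds = [(0.3, 5.0)] * 4                      # admissible tariff range (assumed)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print("tariffs (euro/m3):", np.round(res.x, 3))
```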

Keywords: decision-making process, economic evaluation of projects, optimizing tools, urban water management, water tariff

Procedia PDF Downloads 119
1656 Double-Spear 1-H2-1 Oncolytic-Immunotherapy for Refractory and Relapsing High-Risk Human Neuroblastoma and Glioma

Authors: Lian Zeng

Abstract:

Double-Spear 1-H2-1 (DS1-H2-1) is an oncolytic virus and an innovative biological drug candidate. The chemical composition of the drug product is a live attenuated West Nile virus (WNV) containing the human T cell costimulator (CD86) gene. After intratumoral injection, the virus can rapidly self-replicate at the injected site and lyse/kill the tumor through repeated infection among tumor cells. We also established xenograft tumor models in mice to evaluate the drug candidate's efficacy on those tumors. The results from preclinical studies on transplanted tumors in immunodeficient mice showed that DS1-H2-1 had significant oncolytic effects on cancers of human origin: it completely (100%) shrank human glioma and limited human neuroblastoma growth, reaching a growth inhibition rate (%TGITW) as high as 95%. The safety data from the preclinical animal experiments confirmed that DS1-H2-1 is safe as a biological drug for clinical use. In the preclinical drug efficacy experiments, virus administration at different doses did not produce abnormal signs or disease symptoms in more than 300 tested mice, and no side effects or deaths occurred through various administration routes. Intravenous administration did not cause acute infectious disease or other side effects. However, the replication capacity of the virus in tumor tissue after intravenous administration is only 1% of that after direct intratumoral administration. Direct intratumoral administration of DS1-H2-1 yielded a higher rate of viral replication. Therefore, choosing direct intratumoral injection can ensure both efficacy and safety.

Keywords: oncolytic virus, WNV-CD86, immunotherapy drugs, glioma, neuroblastoma

Procedia PDF Downloads 132
1655 Implementing 3D Printing for 3D Digital Modeling in the Classroom

Authors: Saritdikhun Somasa

Abstract:

3D printing fabrication has empowered many artists in many fields. Artists who work in stop motion, 3D modeling, toy design, product design, sculpture, and fine arts become one-stop-shop operations, where they can design, prototype, and distribute their designs for commercial or fine art purposes. The author has developed a digital sculpting course that combines digital software, peripheral hardware, and 3D printing with traditional sculpting concepts and techniques to address the complexities of this multifaceted process, allowing the students to produce complex 3D-printed work. The author details the preparation and planning for the pre- to post-process 3D printing elements, including software, materials, space, equipment, tools, and schedule considerations for small to medium figurine and statue designs in a semester-long class. In addition, the author provides insight into the challenges of teaching in a non-studio space that requires students to work intensively on post-printed models to assemble parts and to finish and refine the 3D-printed surface. Even though this paper focuses on the 3D printing processes and techniques for small to medium statue design projects for the Digital Media program, the author hopes the paper will benefit other fields of study, such as craft practices, product design, and fine-arts programs. Other schools that might implement 3D printing and fabrication in their programs will find helpful information in this paper, such as a teaching plan, choices of equipment and materials, adaptation for non-studio spaces, and putting together a complete and well-resolved project for students.

Keywords: 3D digital modeling, 3D digital sculpting, 3D modeling, 3D printing, 3D digital fabrication

Procedia PDF Downloads 104
1654 A Virtual Reality Simulation Tool for Reducing the Risk of Building Content during Earthquakes

Authors: Ali Asgary, Haopeng Zhou, Ghassem Tofighi

Abstract:

The use of virtual reality (VR), augmented reality (AR), and extended reality technologies for training and education has increased in recent years as more hardware and software tools have become available and accessible to larger groups of users. Similarly, the applications of these technologies in earthquake-related training and education are on the rise. Several studies have reported promising results for the use of VR and AR for evacuation behaviour and training under earthquake situations. They simulate the impacts that an earthquake has on buildings and buildings' contents, and how building occupants and users can find safe spots or open paths to the outside. Considering that a considerable number of earthquake injuries and fatalities are linked to the behaviour of building contents, our goal is to use these technologies to reduce the impacts of building contents on people. Building on our artificial intelligence (AI) based indoor earthquake risk assessment application, which enables users to use their mobile device to assess the risks associated with building contents during earthquakes, we develop a virtual reality application to demonstrate the behavior of different building contents during earthquakes, their associated moving, spreading, falling, and collapsing risks, and their risk mitigation methods. We integrate realistic seismic models and building content behavior, with and without risk mitigation measures, in a virtual reality environment. The application can be used for the training of architects, interior design experts, and building users to enhance the indoor safety of buildings that can sustain earthquakes. This paper describes and demonstrates the application's development background, structure, components, and usage.

Keywords: virtual reality, earthquake damage, building content, indoor risks, earthquake risk mitigation, interior design, unity game engine, oculus

Procedia PDF Downloads 105
1653 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques

Authors: Gizem Eser Erdek

Abstract:

This study investigates the prediction of the remaining life of industrial cutting tools used in the production process with deep learning methods. When the life of cutting tools decreases, they damage the raw material they are processing. The study aims to predict the remaining life of a cutting tool based on the damage caused by the cutting tools to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. The photos were labeled in 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, it was determined that the model using convolutional neural networks gives successful results with a 74% accuracy rate. In the preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. The experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
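The sketch below shows a minimal five-class convolutional classifier of the kind described; the layer sizes, image resolution, and hyperparameters are assumptions for illustration, not the architecture used in the study.

```python
# Minimal PyTorch sketch: a small CNN that maps a hole photo to one of five
# quality classes. Layer widths and the 128x128 input size are illustrative.
import torch
import torch.nn as nn

class HoleQualityCNN(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(), nn.Linear(128, num_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = HoleQualityCNN()
dummy_batch = torch.randn(8, 3, 128, 128)            # 8 hole photos, 128x128 RGB
logits = model(dummy_batch)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 5, (8,)))
print(logits.shape, float(loss))
```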

Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VggNet

Procedia PDF Downloads 77
1652 Application of Continuum Damage Concept to Simulation of the Interaction between Hydraulic Fractures and Natural Fractures

Authors: Anny Zambrano, German Gonzalez, Yair Quintero

Abstract:

The continuum damage concept is used to study the interaction between hydraulic fractures and natural fractures. The objective is to represent the path and the relation between these two fracture types and to predict their complex behavior without the need to pre-define their direction, as occurs in other finite element applications, providing results more consistent with the physical behavior of the phenomenon. The approach uses finite element simulations in the Abaqus software to model the fracturing process as damage propagation in a rock. The phenomenon is modeled in two dimensions (2D), so that the fracture is represented by a line and the crack front by a point. The model considers nonlinear constitutive behavior, finite strain, time-dependent deformation, complex boundary conditions, strain hardening and softening, and strain-based damage evolution in compression and tension. The complete governing equations are provided, and the method is described in detail to permit readers to replicate all results. The model is compared to models that are published and available. Comparisons are focused on five interactions between natural fractures (NF) and hydraulic fractures: fracture arrested at a NF, crossing a NF with or without offset, branching at intersecting NFs, branching at the end of a NF, and NF dilation due to shear slippage. The most significant new finding is that it is not necessary to use pre-defined propagation paths, and that the stress condition can be evaluated as a dominant factor in the process. This is important because it makes it possible to model the generated complex hydraulic fractures more realistically and provides a valuable tool to predict potential problems and different geometries of the fracture network in the process of fracturing due to fluid injection.
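As a generic, one-dimensional illustration of the continuum damage concept (not the 2D Abaqus model of the paper), the sketch below drives a scalar damage variable with the maximum equivalent strain and applies exponential softening; all material parameters are hypothetical.

```python
# Textbook-style 1D scalar damage law: damage D grows irreversibly once the
# history variable (maximum strain reached) exceeds a threshold, and the
# effective stress is (1 - D) * E * strain. Parameters are illustrative.
import numpy as np

E = 30e3          # MPa, Young's modulus
eps_0 = 1.0e-4    # damage threshold strain
eps_f = 1.0e-3    # softening parameter

def damage(kappa):
    """Exponential damage law: D = 0 below the threshold, tends to 1 at large strain."""
    if kappa <= eps_0:
        return 0.0
    return 1.0 - (eps_0 / kappa) * np.exp(-(kappa - eps_0) / eps_f)

strain_history = np.linspace(0.0, 8.0e-4, 9)
kappa = 0.0                                   # history variable (max strain so far)
for eps in strain_history:
    kappa = max(kappa, eps)                   # damage is irreversible
    D = damage(kappa)
    sigma = (1.0 - D) * E * eps               # damaged (effective) stress
    print(f"eps={eps:.1e}  D={D:.3f}  sigma={sigma:7.2f} MPa")
```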

Keywords: continuum damage, hydraulic fractures, natural fractures, complex fracture network, stiffness

Procedia PDF Downloads 343
1651 Visual Intelligence: Perception, Image and Manipulation in Visual Communication

Authors: Poojitha Vemula

Abstract:

This work examines how image manipulation is used to communicate through an audience's perceptions and to conceive of visual intelligence. With the use of many software tools and high-end skills, designers have developed a third eye to combine two different visuals and create the desired image by using Photoshop and other software skills. The purpose of visual intelligence is to convey a message to the targeted audience. For instance, the images of models are retouched on their skin to make them more convincing and draw attention from the audience. There are many ways of manipulating an image, such as double exposure, retouching photographs with inks or paint, airbrushing and piecing photos together, or enhancing the brightness and contrast. To understand visual intelligence, a questionnaire survey as well as research was conducted on how image manipulation is used by both the audience and the designers. This depends on the message that needs to be conveyed by the brands. For instance, Fair & Lovely, a brightening cream for ladies, uses a lot of retouching and effects to show the dramatic change the cream has on dark or dusky faces. Thus the designer's role is to use their third eye to incorporate the message into visuals. The research and questionnaire survey draw conclusions on the perceptions and manipulations used in visual communication. All of this serves to make communication between the designer and the audience effortless, by using the skills of the designer and the features provided by the software. The objective of visual intelligence is to convey the message of the brands that advertise their products or services by using visuals created through software. Conveying a message through visual intelligence requires the audience's perceptions and understanding of the visuals created by the artists or designers. Visual intelligence determines how we use our technical skills to retouch and manipulate an image for a better understanding, in order to convey the message to the targeted audience. This also bridges the communication between the brand and the audience.

Keywords: graphic design, visual communication, convey messages, photoshop, image manipulation

Procedia PDF Downloads 219
1650 Digital Athena – Contemporary Commentaries and Greek Mythology Explored through 3D Printing

Authors: Rose Lastovicka, Bernard Guy, Diana Burton

Abstract:

Greek myth and art acted as tools to think with, and as a lens through which to explore complex topics, functioning as a form of social media. In particular, coins were a form of propaganda used to communicate the wealth and power of the city-states they originated from as they circulated from person to person. From this, how can the application of 3D printing technologies explore the infusion of ancient forms with contemporary commentaries to promote discussion? The digital reconstruction of artifacts is a topic that has been researched by various groups all over the globe. Yet Greek myth has not previously been explored in this medium through artifacts infused with contemporary issues. Using the Stratasys J750 3D printer - a multi-material, full-colour 3D printer - a series of coins inspired by ancient Greek currency and myth was created to present commentaries on the adversities surrounding individuals in the LGBT+ community. The coins are printed in a hard, translucent material with coloured 3D visuals embedded into the coin, to then be viewed in close contact by the audience. These coins as commentaries present an avenue for wider understanding by drawing perspectives not only from sources concerned with the contemporary LGBT+ community but also from sources exploring ancient homosexuality and its perception and regulation in antiquity. By displaying what are usually points of contention between anti- and pro-LGBT+ parties, this visual medium opens up a discussion for both parties, suggesting that heritage can play a vital interpretative role in the contemporary world.

Keywords: 3D printing, design, Greek mythology, LGBT+ community

Procedia PDF Downloads 116
1649 Comparing Field Displacement History with Numerical Results to Estimate Geotechnical Parameters: Case Study of Arash-Esfandiar-Niayesh under Passing Tunnel, 2.5 Traffic Lane Tunnel, Tehran, Iran

Authors: A. Golshani, M. Gharizade Varnusefaderani, S. Majidian

Abstract:

Underground structures are among those structures whose design procedures involve considerable uncertainty, due to the complexity of the surrounding soil conditions. Under-passing tunnels are among such affected structures. Despite geotechnical site investigations, many uncertainties remain in the soil properties due to unknown events. As a result, the settlements obtained from numerical analysis may conflict with the values recorded in the project. This paper reports a case study on a specific under-passing tunnel constructed by the New Austrian Tunnelling Method in Iran. The tunnel has an overburden of about 11.3 m, a height of 12.2 m, and a width of 14.4 m with 2.5 traffic lanes. The numerical model was developed in a 2D finite element program (PLAXIS Version 8). Comparing the displacement histories at the ground surface during the entire installation of the initial lining, the estimated surface settlement was about four times the recorded field value, which indicates that some unknown local events affect that value. Also, the displacement ratios differed considerably between the numerical and field data. Consequently, by running several numerical back-analyses using laboratory and field test data, the geotechnical parameters were revised to match the obtained monitoring data. Finally, it was found that the values of soil parameters are usually conservatively underestimated by up to 40 percent by typical engineering judgment. Additionally, the discrepancy could be attributed to inappropriate constitutive models applied for the specific soil condition.
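A schematic illustration of the back-analysis idea is given below: model parameters are calibrated so that predicted surface settlements match monitoring data. A simple Gaussian settlement trough stands in for the PLAXIS model, and the "measured" values are invented for demonstration only.

```python
# Least-squares calibration of a settlement-trough model against monitoring
# points; in a real back-analysis the trough() function would be replaced by
# repeated runs of the numerical (e.g. PLAXIS) model with trial soil parameters.
import numpy as np
from scipy.optimize import least_squares

x = np.array([-20., -10., -5., 0., 5., 10., 20.])          # offset from tunnel axis (m)
measured = np.array([1.5, 4.8, 8.9, 11.2, 9.1, 5.0, 1.4])  # settlement (mm), invented

def trough(params, x):
    s_max, i = params                                       # max settlement, trough width
    return s_max * np.exp(-x**2 / (2.0 * i**2))

def residuals(params):
    return trough(params, x) - measured

fit = least_squares(residuals, x0=[5.0, 5.0])
print("calibrated S_max = %.2f mm, i = %.2f m" % tuple(fit.x))
```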

Keywords: NATM, surface displacement history, numerical back-analysis, geotechnical parameters

Procedia PDF Downloads 194
1648 Access to Sexual Reproductive Health (SRH) Education and Services to Deaf Adolescents in Wakiso, Uganda - The Ugandan Perspective

Authors: Racheal Ayanga, Nancy Katumba Muwangala, Jane Babirye, Harriet Kivumbi

Abstract:

Background: Deaf adolescents are vulnerable. Deafness limits their access to resources that are accessed by their hearing peers. Minimal attention is paid to the SRH needs of persons with disabilities, especially in developing countries. We sought to assess barriers to access to SRH education and services for deaf adolescents in Uganda. Methods: We performed a cross-sectional study using a questionnaire on knowledge of and access to SRH education and services with a selected sample of deaf adolescents aged 13-19 years at Wakiso Secondary School for the Deaf. A consecutive sample of eligible participants was asked to join the study after obtaining informed consent until the target sample size was reached. Results: From 01 Jul 2022 to 30 Jan 2023, 70 quantitative interviews were conducted. Participants' mean age was 17 years, and 66% were female. 89% had heard about several components of SRH. 99% reported a need for education and services but had challenges with access 85% of the time. 54% reported receiving education and services from government or private facilities, and the rest from friends, parents, siblings, teachers, and the internet. Conclusion: The government needs to look into providing tailored, sustainable SRH education and services to deaf adolescents at health facilities and teaching health workers sign language. SRH education for the parents, teachers, and communities of deaf adolescents improves access in hard-to-reach areas. Integration of services into routine health care is key in creating and improving models of access for wider communities of persons with disabilities to improve their mental health.

Keywords: sexual and reproductive health, deaf, adolescents, education, services, disabilities, mental health, hard-to-reach areas

Procedia PDF Downloads 85
1647 Statistical Modelling of Maximum Temperature in Rwanda Using Extreme Value Analysis

Authors: Emmanuel Iyamuremye, Edouard Singirankabo, Alexis Habineza, Yunvirusaba Nelson

Abstract:

Temperature is one of the most important climatic factors for crop production. However, severe temperatures cause droughts, heat spells, and cold spells that have various consequences for human life, agriculture, and the environment in general. It is necessary to provide reliable information related to such incidents and the probability of such extreme events occurring. In the 21st century, the world faces a huge number of threats, especially from climate change, due to global warming and environmental degradation. The rise in temperature has a direct effect on the decrease in rainfall. This has an impact on crop growth and development, which in turn decreases crop yield and quality. Countries that are heavily dependent on agriculture tend to suffer greatly and need to take preventive steps to overcome these challenges. The main objective of this study is to model the statistical behaviour of extreme maximum temperature values in Rwanda. To achieve this objective, daily temperature data spanning the period from January 2000 to December 2017, recorded at nine weather stations and collected from the Rwanda Meteorological Agency, were used. Two methods, namely the block maxima (BM) method and the Peaks Over Threshold (POT) method, were applied to model and analyse extreme temperatures. Model parameters were estimated, while the extreme temperature return periods and confidence intervals were predicted. The model fit suggests the Gumbel and Beta distributions to be the most appropriate models for the annual maximum of daily temperature. The results show that the temperature will continue to increase, as shown by the estimated return levels.
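A minimal sketch of the block-maxima step is given below: a GEV distribution is fitted to annual maximum temperatures and return levels are read off; the temperature values are synthetic placeholders, not the Rwandan station records.

```python
# Block-maxima extreme value analysis: fit a generalised extreme value (GEV)
# distribution to annual maxima and compute T-year return levels.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max = 30.0 + rng.gumbel(loc=2.0, scale=1.2, size=18)   # 18 years, synthetic

shape, loc, scale = genextreme.fit(annual_max)
for T in (10, 25, 50, 100):                                   # return periods (years)
    level = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:>3}-year return level: {level:.2f} degC")
```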

Keywords: climate change, global warming, extreme value theory, Rwanda, temperature, generalised extreme value distribution, generalised Pareto distribution

Procedia PDF Downloads 183
1646 Window Analysis and Malmquist Index for Assessing Efficiency and Productivity Growth in a Pharmaceutical Industry

Authors: Abbas Al-Refaie, Ruba Najdawi, Nour Bata, Mohammad D. AL-Tahat

Abstract:

The pharmaceutical industry is an important component of health care systems throughout the world. Measurement of a production unit's performance is crucial in determining whether it has achieved its objectives or not. This paper applies data envelopment analysis (DEA) window analysis to assess the efficiencies of two packaging lines, Allfill (new) and DP6, in the penicillin plant of a Jordanian medical company in 2010. The CCR and BCC models are used to estimate technical efficiency, pure technical efficiency, and scale efficiency. Further, the Malmquist productivity index is computed and employed to assess productivity growth relative to a reference technology. Two primary issues are addressed in the computation of Malmquist indices of productivity growth. The first issue is the measurement of productivity change over the period, while the second is to decompose changes in productivity into what are generally referred to as a 'catching-up' effect (efficiency change) and a 'frontier shift' effect (technological change). Results showed that the DP6 line outperforms the Allfill line in technical and pure technical efficiency. However, the Allfill line outperforms the DP6 line in scale efficiency. The obtained efficiency values can guide production managers in taking effective decisions related to operation, management, and plant size. Moreover, both machines exhibit clear fluctuations in technological change, which is the main reason for the positive total factor productivity change. That is, installing a new Allfill production line can be of great benefit in increasing productivity. In conclusion, DEA window analysis combined with the Malmquist index are supportive measures in assessing efficiency and productivity in the pharmaceutical industry.
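For reference, the sketch below solves the input-oriented CCR envelopment model as a linear program for one decision-making unit; the toy input-output data are hypothetical and unrelated to the packaging lines studied.

```python
# Input-oriented CCR (constant returns to scale) DEA efficiency of one DMU,
# solved as a linear program: minimise theta subject to the composite unit
# using no more than theta * inputs of DMU k and producing at least its outputs.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs); returns theta for DMU k."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])  # X^T lambda - theta*x_k <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # Y^T lambda >= y_k
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# toy example: 4 DMUs, 2 inputs, 1 output (hypothetical numbers)
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [5.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, k), 3) for k in range(4)])
```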

Keywords: window analysis, malmquist index, efficiency, productivity

Procedia PDF Downloads 609
1645 Overview of the 2017 Fire Season in Amazon

Authors: Ana C. V. Freitas, Luciana B. M. Pires, Joao P. Martins

Abstract:

In recent years, fire dynamics in deforestation areas of tropical forests have received considerable attention because of their relationship to climate change. Climate models project great increases in the frequency and area of drought in the Amazon region, which may increase the occurrence of fires. This study analyzes the historical record number of fire outbreaks in 2017 using satellite-derived data sets of active fire detections, burned area, and precipitation, and data from the Fire Program of the Center for Weather Forecasting and Climate Studies (CPTEC/INPE). A downward trend in the number of fire outbreaks occurred in the first half of 2017, relative to the previous year. This decrease can be related to the fact that 2017 was not an El Niño year and, therefore, the observed rainfall and temperature in the Amazon region were close to normal conditions. Meanwhile, the worst period in history for fire outbreaks began with the subsequent arrival of the dry season. September of 2017 exceeded all monthly records for the number of fire outbreaks per month in the entire series. This increase was mainly concentrated in Bolivia and in the states of Amazonas, northeastern Pará, northern Rondônia, and Acre, regions with high densities of rural settlements, which strongly suggests that human action is the predominant factor, aggravated by the lack of precipitation during the dry season, allowing the fires to spread and reach larger areas. Thus, deforestation in the Amazon is primarily a human-driven process: climate trends may be providing additional influences.

Keywords: Amazon forest, climate change, deforestation, human-driven process, fire outbreaks

Procedia PDF Downloads 128
1644 Injection of Bradykinin in Femoral Artery Elicits Cardiorespiratory Reflexes Involving Perivascular Afferents in Rat Models

Authors: Sanjeev K. Singh, Maloy B. Mandal, Revand R.

Abstract:

The physiology of the baroreceptors and chemoreceptors present in the large blood vessels of the heart in the regulation of cardiorespiratory functions is well known. Since large blood vessels and peripheral blood vessels are of the same mesodermal origin, involvement of the latter in the regulation of the cardiorespiratory system is expected. The role of perivascular nerves in mediating the cardiorespiratory alterations produced after intra-arterial injection of a nociceptive agent (bradykinin) was examined in urethane-anesthetized male rats. Respiratory frequency, blood pressure, and heart rate were recorded for 30 min after the retrograde injection of bradykinin/saline in the femoral artery. In addition, paw edema was determined, and water content was expressed as a percentage of wet weight. Injection of bradykinin produced immediate tachypnoeic, hypotensive, and bradycardic responses of short latency (5-8 s), favoring the neural mechanisms involved. Injection of an equal volume of saline did not produce any responses and served as a time-matched control. Paw edema was observed in the ipsilateral hind limb. Pretreatment with diclofenac sodium significantly attenuated the bradykinin-induced responses and also blocked the paw edema. Ipsilateral femoral and sciatic nerve sectioning significantly attenuated the bradykinin-induced responses, indicating that the responses originate from the local vascular bed. Administration of bradykinin in a segment of an artery produced reflex cardiorespiratory changes by stimulating the perivascular nociceptors, involving prostaglandins. This is a novel study exhibiting the role of peripheral blood vessels in the regulation of the cardiorespiratory system.

Keywords: vasosensory reflex, cardiorespiratory changes, nociceptive agent, bradykinin, VR1 receptors

Procedia PDF Downloads 148
1643 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

Oil-free turbomachinery is considered one of the critical technologies for the rotor machinery of future green power generation systems. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings, abbreviated as GFBs, are important for this technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, obvious improvements have been made to the computational models for dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pattern of pressure distribution in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single pad foil bearings and three pad foil bearings to predict bearing working behavior, so that their characteristics can be compared. The coupled analysis model applies dynamic working characteristics to both the gas film and the mechanical structures. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution pattern can also be solved iteratively by the coupled analysis. In conclusion, the working fluid state in the gas film of the two pad configurations at constant rotational speed can be solved and the characteristics compared with the experimental results.

Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic

Procedia PDF Downloads 163
1642 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and the time and, therefore, cost of applying these methods in the drug development decision-making cycle. To the best of the authors' knowledge, it has been observed that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
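As an example of the classical baseline highlighted above, the sketch below fits a Gaussian Process regressor on fixed molecular descriptors with predictive uncertainty; the random feature matrix stands in for real fingerprints or descriptors, and the activity values are synthetic.

```python
# Gaussian Process regression on engineered molecular features, with the
# predictive standard deviation used for uncertainty quantification.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                              # 200 compounds, 32 descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)    # synthetic activity values

kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X[:150], y[:150])

mean, std = gp.predict(X[150:], return_std=True)            # predictions + uncertainty
print("held-out RMSE:", float(np.sqrt(np.mean((mean - y[150:]) ** 2))))
print("mean predictive std:", float(std.mean()))
```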

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 81
1641 Digitizing Masterpieces in Italian Museums: Techniques, Challenges and Consequences from Giotto to Caravaggio

Authors: Ginevra Addis

Abstract:

The possibility of reproducing physical artifacts in a digital format is one of the opportunities offered by the technological advancements in information and communication most frequently promoted by museums. Indeed, the study and conservation of our cultural heritage have seen significant advancement due to three-dimensional acquisition and modeling technology. A variety of laser scanning systems has been developed, based either on optical triangulation or on time-of-flight measurement, capable of producing digital 3D images of complex structures with high resolution and accuracy. It is necessary, however, to explore the challenges and opportunities that this practice brings within museums. The purpose of this paper is to understand what change is introduced by digital techniques in those museums that are hosting digital masterpieces. The methodology investigates three distinguished Italian exhibitions, related to the territory of Milan, and analyzes the following issues about museum practices: 1) how digitizing art masterpieces increases the number of visitors; 2) what needs call for the digitization of artworks; 3) which techniques are most used; 4) what the setting is; 5) the consequences of not publishing hard copies of catalogues; and 6) how these practices are envisioned in the future. The findings will show, first, how interconnection plays an important role in rebuilding a collection spread all over the world; secondly, how digital artwork duplication and the extension of reality entail new forms of accessibility; thirdly, that collection and preservation through the digitization of images have both a social and an educational mission; and fourthly, that the convergence of the properties of different media (such as web and radio) is key to encouraging people to get actively involved in digital exhibitions. The present analysis will suggest further research that should create museum models and interaction spaces that act as catalysts for innovation.

Keywords: digital masterpieces, education, interconnection, Italian museums, preservation

Procedia PDF Downloads 175
1640 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, and especially the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks, or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model. Higher audit risk will consequently cause higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in its effect on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level. A Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm that has a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data by using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connections between technology use in business and audit fees. We control for firm size, complexity, inherent risk, profitability, and auditor quality. We chose a fixed effects model as it makes it possible to control for variables that have not been or cannot be measured.
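An illustrative two-way fixed effects specification of the kind described is sketched below using firm and year dummies and firm-clustered standard errors; the panel is simulated and the variable names (digital, big4, lnassets, lnfee) are placeholders, not the Audit Analytics/Compustat variables.

```python
# Two-way fixed effects regression of log audit fees on a digitalization proxy,
# a Big 4 indicator, and their interaction, using statsmodels formula dummies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, years = 50, 11
df = pd.DataFrame({
    "firm": np.repeat(np.arange(firms), years),
    "year": np.tile(np.arange(2005, 2016), firms),
})
df["digital"] = rng.uniform(0, 1, len(df))          # industry digitalization proxy
df["big4"] = rng.integers(0, 2, len(df))            # Big 4 auditor indicator
df["lnassets"] = rng.normal(8, 1, len(df))          # size control
df["lnfee"] = (0.4 * df["digital"] + 0.3 * df["big4"]
               + 0.2 * df["digital"] * df["big4"]
               + 0.5 * df["lnassets"] + rng.normal(0, 0.3, len(df)))

model = smf.ols("lnfee ~ digital * big4 + lnassets + C(firm) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params[["digital", "big4", "digital:big4", "lnassets"]])
```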

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 302
1639 Deuterium Effect on the Growth of the Fungi Aspergillus Fumigatus and Candida Albicans

Authors: Farzad Doostishoar, Abdolreza Hasanzadeh, Seyed Amin Ayatolahi Mousavi

Abstract:

Introduction and Goals: Deuterium behaves differently from its lighter isotope, hydrogen, in chemical reactions and biochemical processes. For heavier atoms, there is no significant difference between the behavior of the heavier and the lighter isotope, but for very light atoms the difference is significant. Given that most of the body weight of all living creatures is water, the natural deuterium level can be significant. In this article, we study the effect of reduced deuterium on fungal cells. If a dependence of cell growth on the deuterium concentration of the environment is observed, this can also be tested in in vivo models. Methods: First, we measured the deuterium concentration of the distilled water; this analysis was performed by Arak's heavy water company. Then the deuterium was diluted to 1/2, 1/4, 1/8, and 1/16 of its concentration by adding deuterium-free water for making the media. In three of the samples, the deuterium concentration was increased by adding D2O up to 10, 50, and 100 times the original concentration. For Candida albicans growth we used Sabouraud medium, and for Aspergillus fumigatus growth we used Sabouraud medium containing chloramphenicol. After culturing the fungal species, we placed the media for each species in a shaker incubator for 10 days at 25 centigrade. On different days and at different times, the plates were examined morphologically, and some microscopic characteristics were also studied. These experiments and cultures were repeated 3 times. Results: Statistical analyses by paired-sample t-test showed that Aspergillus fumigatus growth was significantly decreased at a concentration of 72 ppm (half the deuterium concentration of the negative control). With further reduction of the deuterium concentration, growth decreased significantly relative to the negative control. The results of the project showed that Candida albicans was sensitive to a reduction and decrease of deuterium at all concentrations.

Keywords: deuterium, cancer cell, growth, candida albicans

Procedia PDF Downloads 401