Search results for: discrete feature vector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3055


85 Decorative Plant Motifs in Traditional Art and Craft Practices: Pedagogical Perspectives

Authors: Geetanjali Sachdev

Abstract:

This paper explores the decorative uses of plant motifs and symbols in traditional Indian art and craft practices in order to assess their pedagogical significance within the context of plant study in higher education in art and design. It examines existing scholarship on decoration and plants in Indian art and craft practices. The impulse to elaborate upon an existing form or surface is an intrinsic part of many Indian traditional art and craft traditions where a deeply ingrained love for decoration exists. Indian craftsmen use an array of motifs and embellishments to adorn surfaces across a range of practices, and decoration is widely seen in textiles, jewellery, temple sculptures, vehicular art, architecture, and various other art, craft, and design traditions. Ornamentation in Indian cultural traditions has been attributed to religious and spiritual influences in the lives of India’s art and craft practitioners. Through adornment, surfaces and objects were ritually transformed to function both spiritually and physically. Decorative formations facilitate spiritual development and attune our minds to concepts that support contemplation. Within practices of ornamentation and adornment, there is extensive use of botanical motifs as Indian art and craft practitioners have historically been drawn towards nature as a source of inspiration. This is due to the centrality of agriculture in the lives of Indian people as well as in religion, where plants play a key role in religious rituals and festivals. Plant representations thus abound in two-dimensional and three-dimensional surface designs and patterns where the motifs range from being realistic, highly stylized, and curvilinear forms to geometric and abstract symbols. Existing scholarship reveals that these botanical embellishments reference a wide range of plants that include native and non-indigenous plants, as well as imaginary and mythical plants. 
Structural components of plant anatomy, such as leaves, stems, branches and buds, and flowers, are part of the repertoire of design motifs used, as are plant forms indicating different stages of growth, such as flowering buds and flowers in full bloom. Symmetry is a characteristic feature, and within the decorative register of various practices, plants are part of border zones and bands, connecting corners and all-over patterns, used as singular motifs and floral sprays on panels, and as elements within ornamental scenes. The results of the research indicate that decoration as a mode of inquiry into plants can serve as a platform to learn about local and global biodiversity and plant anatomy and develop artistic modes of thinking symbolically, metaphorically, imaginatively, and relationally about the plant world. The conclusion is drawn that engaging with ornamental modes of plant representation in traditional Indian art and craft practices is pedagogically significant for two reasons. Decoration as a mode of engagement cultivates both botanical and artistic understandings of plants. It also links learners with the indigenous art and craft traditions of their own culture.

Keywords: art and design pedagogy, decoration, plant motifs, traditional art and craft

Procedia PDF Downloads 61
84 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task: adequately patrolling an entire area (spanning several thousand kilometers) with the limited resources, funds, and personnel at their disposal. Agencies thus need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Second, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous season-based prediction schedules.
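One of the imputation methods named in the abstract, predictive mean matching, can be sketched in a few lines of Python. This is an illustrative toy version with a single predictor and is not the authors' implementation; the k-donor pool size and the linear predictor are simplifying assumptions:

```python
import random

def pmm_impute(x_obs, y_obs, x_mis, k=3, seed=0):
    """Predictive mean matching: fit a least-squares line on the observed
    (x, y) pairs, then for each missing case borrow the *actual* observed y
    of one of the k donors whose predicted values are closest."""
    rng = random.Random(seed)
    n = len(x_obs)
    mx = sum(x_obs) / n
    my = sum(y_obs) / n
    b = sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs)) / \
        sum((x - mx) ** 2 for x in x_obs)
    a = my - b * mx
    pred_obs = [a + b * x for x in x_obs]  # fitted values for observed cases
    out = []
    for x in x_mis:
        p = a + b * x
        # indices of the k observed cases with the closest fitted values
        donors = sorted(range(n), key=lambda i: abs(pred_obs[i] - p))[:k]
        out.append(y_obs[rng.choice(donors)])  # draw a real observed value
    return out
```

With k = 1 the method reduces to borrowing the value of the single closest donor; larger k adds the between-imputation variability that multiple imputation exploits, and the imputed value is always a genuinely observed one.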

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 265
83 Use of Pheromones, Active Surveillance and Treated Cattle to Prevent the Establishment of the Tropical Bont Tick in Puerto Rico and the Americas

Authors: Robert Miller, Fred Soltero, Sandra Allan, Denise Bonilla

Abstract:

The Tropical Bont Tick (TBT), Amblyomma variegatum, was introduced to the Caribbean in the mid-1700s. Since then, it has spread throughout the Caribbean, dispersed by cattle egrets (Bubulcus ibis). Tropical Bont Ticks vector many pathogens to livestock and humans; however, only the livestock diseases heartwater (Ehrlichia (Cowdria) ruminantium) and dermatophilosis (Dermatophilus congolensis) are associated with TBT in the Caribbean. African tick bite fever (Rickettsia africae) is widespread in Caribbean TBT, but human cases are rare. The Caribbean Amblyomma Programme (CAP) was an effort led by the Food and Agricultural Organization to eradicate TBTs from participating islands. This 10-year effort successfully eradicated TBT from many islands; however, most have been reinfested since its termination. Pheromone technology has been developed to aid in TBT control. Although not part of the CAP treatment scheme, this research established that pheromones in combination with pesticide greatly improve treatment efficiency. Additionally, pheromone combined with CO₂ traps greatly improves the success of active surveillance. St. Croix has a history of TBT outbreaks. Passive surveillance detected outbreaks in 2016 and in May of 2021, and surveillance efforts are underway to determine the extent of TBT on St. Croix. Puerto Rico is the next island in the archipelago and is at greater risk of re-infestation due to the active outbreaks on St. Croix. Tropical Bont Ticks were last detected in Puerto Rico in the 1980s. That infestation started on the small Puerto Rican island of Vieques, the closest landmass to St. Croix, and spread to the main island through cattle movements. It was eradicated with the help of the Tropical Cattle Tick (TCT), Rhipicephalus (Boophilus) microplus, eradication program. At the time, large percentages of Puerto Rican cattle were treated for ticks, and the necessary material and manpower were mobilized for the effort.
A shift of focus from the TCT to the TBT therefore prevented its establishment in Puerto Rico. Currently, no large-scale treatment of TCTs occurs in Puerto Rico, so the risk of TBT establishment is now greater than it was in the 1980s. From Puerto Rico, the risk of TBT movement to the American continent increases significantly. The establishment of TBTs in the Americas would cause $1.2 billion USD in losses to the livestock industry per year. The USDA Agricultural Research Service recently worked with the USDA Animal and Plant Health Inspection Service and the Puerto Rican Department of Agriculture to modernize the management of the TCT. This modernized program uses safer pesticides and has successfully been used to eradicate pesticide-susceptible and -resistant ticks throughout the island. The objective of this work is to prevent the infestation of Puerto Rico by TBTs by combining the current TCT management efforts with TBT surveillance on Vieques. The combined effort is designed to eradicate the TCT from Vieques while using the treated cattle as trap animals for TBT, with pheromone-impregnated tail tags attached to the treated animals. Additionally, CO₂-baited traps combined with pheromone will be used to actively survey the environment for free-living TBT. The knowledge gained will inform TBT control efforts in St. Croix.

Keywords: Amblyomma variegatum, Caribbean, eradication, Rhipicephalus (Boophilus) microplus, pheromone

Procedia PDF Downloads 147
82 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization, and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. Implementing digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is SME-appropriate, efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and from which motion profiles can be derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers on the basis of distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for the visualization of relative distance data. Because the database is already categorized by process type, classification methods (e.g. support vector machines) from the field of supervised learning are used.
Achieving the necessary data quality requires the selection of suitable methods and filters for smoothing the signal variations that occur in the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, the correction models, and the methods used for visualizing the position profiles. Studies have already shown that selected parameter variation can improve the accuracy of classification algorithms by up to 30%; similar potential can be observed with parameter variation of the methods and filters for signal smoothing. There is thus increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation, and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: an automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting via a graphical user interface (GUI).
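The RSSI-to-distance step and the signal smoothing can be illustrated with a minimal Python sketch. The log-distance path-loss model and the parameter values (tx_power, path-loss exponent n, smoothing factor alpha) are common assumptions in BLE ranging, not figures taken from the abstract:

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 n)).
    tx_power is the assumed RSSI at 1 m; n is the path-loss exponent
    (roughly 2 in free space, higher around obstacles such as pallets)."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

def ema(values, alpha=0.5):
    """Exponential moving average: a simple filter to damp the RSSI
    fluctuations mentioned in the abstract before classification."""
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out
```

For example, with the assumed parameters an RSSI of -59 dBm maps to 1 m and -79 dBm to 10 m; the smoothed series is then what the correction models and classifiers would consume.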

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 105
81 Multiphysics Coupling between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for the TPS of Space Launchers

Authors: Margarita Dufresne

Abstract:

This study is devoted to the development of a TPS for small reusable space launchers. We used the SIRIUS design for the S1 prototype. Multiphysics coupling of the hypersonic reactive flow and the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. The flow around hypersonic flight vehicles involves the interaction of multiple shocks and the interaction of shocks with boundary layers; these interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas model implies a gas in equilibrium or non-equilibrium. The Mach number ranged from 5 to 10 for first-stage flight. The goal of this effort is to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection, and radiation, driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithms employed to solve the reactive equations use a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with the coupled inviscid flux computed by AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order; it provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme.
The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is k-omega SST with a1 = 0.355 and QCR (quadratic) as the constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling tips for hypersonic flow, such as the automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, and use of the advancing-layer mesher with a larger prism-layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results.
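The modified Arrhenius form mentioned for the forward reaction rate coefficients can be written out as a small function. The constants are reaction-specific and tabulated in the Dunn/Kang mechanism for each of the seventeen reactions; the values used below are placeholders only:

```python
import math

def arrhenius_rate(T, A, b, Ta):
    """Modified Arrhenius forward rate coefficient:
        k_f = A * T**b * exp(-Ta / T)
    where T is temperature, A the pre-exponential factor, b the temperature
    exponent, and Ta the activation temperature (all reaction-specific)."""
    return A * T ** b * math.exp(-Ta / T)
```

With b = 0 and Ta = 0 the expression collapses to the constant A, and for a positive activation temperature the rate grows with T, which is the expected behaviour for the dissociation reactions in the mechanism.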

Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, FASTRAN, ACE+, COMSOL Multiphysics, STAR-CCM+, thermal protection system (TPS), space launcher, wind tunnel

Procedia PDF Downloads 33
80 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal

Authors: Pedro B. Antunes, Paulo J. Ramísio

Abstract:

Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources that have accumulated on the road surface are carried away by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols; organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events under different weather, traffic, and salt deposition conditions over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, meteorological, hydrological, and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, an innovative feature of the monitoring program. The SDR, which varies throughout the year, shows a high correlation with wind speed and direction, but mostly with wave propagation, so that it is lower in the summer despite the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles, and the platform altitude also seem to be relevant. The high salinity of the runoff was confirmed, increasing the concentrations of the analyzed water quality parameters, with significant amounts of seawater features. In order to estimate the correlations and patterns among the different water quality parameters and the variables related to weather, road section, and salt deposition, the study included exploratory data analysis using different techniques (e.g. Pearson correlation coefficients, cluster analysis, and principal component analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, the data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations to salinity (including organic-matter-associated parameters) and total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to be very dependent on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of this coastal-zone monitoring case study, it was shown that the SDR, associated with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help to estimate the specific nature of the runoff and the related water quality parameters.
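The Pearson correlation coefficient used in the exploratory data analysis above has a direct closed form. A minimal Python sketch, purely illustrative (the paired series could be, e.g., salinity and organic matter concentrations per sample):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient r between two equal-length series:
    covariance of x and y divided by the product of their standard
    deviations, giving a value in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

Perfectly proportional series give r = 1 and perfectly opposed series give r = -1; clustering parameters by |r| against salinity is essentially what separates the two groups the study reports.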

Keywords: coastal zones, monitoring, road runoff pollution, salt deposition

Procedia PDF Downloads 214
79 Characteristics of Female Offenders: Using Childhood Victimization Model for Treatment

Authors: Jane E. Hill

Abstract:

Sexual, physical, and emotional abuse are behaviors used by one person in a relationship or within a family unit to control another person. Physical abuse can consist of, but is not limited to, hitting, pushing, and shoving. Sexual abuse is unwanted or forced sexual activity imposed on a person without their consent. Abusive behaviors include intimidation, manipulation, humiliation, isolation, frightening, terrorizing, coercing, threatening, blaming, hurting, injuring, or wounding another individual. Although emotional, psychological, and financial abuse are not criminal behaviors, they are forms of abuse and can leave emotional scars on their victims. The purpose of this literature review was to examine characteristics of female offenders, past abuse, and pathways to offending. The question that guided this research was: does past abuse influence recidivism? The theoretical foundation used was the relational theory of Jean Baker Miller. One common feature of female offenders is abuse (sexual, physical, or verbal). Abuse can cause mental illness and substance abuse. The abuse does not directly affect the women's recidivism; however, results indicated that the psychological and maladaptive behaviors resulting from the abuse did contribute to indirect pathways to continued offending. The female offenders' symptoms of ongoing depression and anxiety, and their engagement in substance abuse (self-medicating), did lead to the women's incarceration. Using the childhood victimization model as the treatment approach for women's mental illness and substance abuse disorders that resulted from a history of child abuse has shown success. With that in mind, if the issues surrounding early victimization are not addressed, the women offenders may not recover from their mental illness or addiction and are at a higher risk of reoffending.
However, if the women are not emotionally ready to engage in the treatment process, it should not be forced onto them, because it may cause harm (by targeting prior traumatic experiences). Social capital refers to family support and resources that assist the individual with education and employment opportunities that can lead to success. Human capital refers to internal knowledge, skills, and capacities that help the individual act in new and appropriate ways. The lack of human and social capital is common among female offenders, and it leads to extreme poverty and economic marginalization more frequently than among men. In addition, changes in welfare reform have exacerbated women's difficulties in gaining adequately paying jobs to support themselves and their children, which has contributed to female offenders reoffending. With that in mind, one way to lower the risk of female offenders reoffending is to provide them with educational and vocational training, enhance their self-efficacy, and teach them appropriate coping skills and life skills. Furthermore, it is important to strengthen family bonds and support: having a supportive family relationship was a statistically significant protective factor for women offenders.

Keywords: characteristics, childhood victimization model, female offenders, treatment

Procedia PDF Downloads 88
78 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC sources; according to some related assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is quite limited in the literature. There are many reasons for this, ranging from the difficulties associated with controlled burning conditions of the fuel, through the lack of the detailed supplementary proximate and ultimate chemical analysis needed to interpret the measured optical data, to the many analytical and methodological difficulties in the in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most pressing issues in this research area. Most laboratory imitations of residential coal combustion are simply based on burning coal in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based on transmission measurements made on filter-accumulated aerosol, or are deduced indirectly from parallel measurements of the scattering and extinction coefficients using free-floating sampling.
In the former, accuracy limits the applicability of the approach; in the latter, sensitivity does. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up for residential coal aerosol generation and its characteristics are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependency, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, some correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
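The Ångström exponents (AAE and SAE) quantify wavelength dependency as a power law, coefficient ∝ λ^(-AE), and can be deduced from measurements at a pair of wavelengths. This is the standard two-wavelength formula, not code from the study:

```python
import math

def angstrom_exponent(coef_1, coef_2, wl_1, wl_2):
    """Angstrom exponent from coefficients measured at two wavelengths:
        AE = -ln(c1 / c2) / ln(wl_1 / wl_2)
    Used for both AAE (absorption coefficients) and SAE (scattering
    coefficients); wavelengths and coefficients must use consistent units."""
    return -math.log(coef_1 / coef_2) / math.log(wl_1 / wl_2)
```

For example, an absorption coefficient that halves from 400 nm to 800 nm yields AAE = 1, the value conventionally associated with black carbon; in practice the 4λ-PAS offers more than two wavelengths, so AAE would be fitted over all of them rather than computed from one pair.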

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 332
77 Mathematical Toolbox for Editing Equations and Geometrical Diagrams and Graphs

Authors: Ayola D. N. Jayamaha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Currently, there are many educational tools designed for mathematics. Open-source software packages such as GeoGebra and Octave are bulky in their architectural structure, and MATLAB facilitates much more than what we ask for. Many computer-aided online grading and assessment tools require integrating editors into their software; however, no suitable editors exist that cater to all their needs in editing equations, geometrical diagrams, and graphs. Existing software for editing equations includes Alfred's Equation Editor, Codecogs, DragMath, Maple, MathDox, MathJax, MathMagic, MathFlow, Math-o-mir, Microsoft Equation Editor, MiraiMath, OpenOffice, WIRIS Editor, and MyScript. Some of these are commercial or open source, support handwriting recognition or mobile apps, render MathML/LaTeX, or use Flash, web-based, or JavaScript display engines. Some of the diagram editors are GeoKone.NET, Tabulae, Cinderella 1.4, MyScript, Dia, Draw2D touch, Gliffy, GeoGebra, Flowchart, Jgraph, JointJS, J painter Online diagram editor, and 2D sketcher. All of these are open source except MyScript and can be used for editing mathematical diagrams; however, they do not fully cater to the needs of a typical computer-aided assessment tool or educational platform for mathematics. This solution provides a web-based, lightweight, easy-to-implement-and-integrate HTML5 canvas that renders on all modern web browsers. The scope of the project is an editor that covers the equations and mathematical diagrams and drawings on the O/L Mathematics Exam Papers in Sri Lanka. Using the tool, students can enter any equation into the system, which can be part of an online remote learning platform. Users can also create and edit geometrical drawings and graphs and carry out geometrical constructions that require only compass and ruler from the editing interface provided by the software.
The special feature of this software is the geometrical constructions. It allows users to create geometrical constructions such as angle bisectors, perpendicular lines, angles of 60°, and perpendicular bisectors. The tool correctly imitates the functioning of rulers and compasses to create the required geometrical construction. Therefore, users are able to do geometrical drawings on the computer successfully, and a digital format of the geometrical drawing is available for further processing. Secondly, users can create and edit Venn diagrams, and color and label them. In addition, students can draw probability tree diagrams and compound probability outcome grids, and can label and mark regions within the grids. Thirdly, students can draw graphs (first order and second order): they can mark points on a graph paper, and the system connects the dots to draw the graph. Further, students are able to draw standard shapes such as circles and rectangles by selecting points on a grid or by entering the parametric values.
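As an illustration of how a compass-and-ruler construction maps to computation, a perpendicular bisector reduces to a midpoint plus a 90° rotation of the segment direction, mirroring the classic two-arc construction. This Python sketch is hypothetical and not part of the described toolbox (which is HTML5-canvas based):

```python
def perpendicular_bisector(p, q):
    """Return the perpendicular bisector of segment pq as (point, direction):
    the line passes through the midpoint of pq and runs perpendicular to it.
    Geometrically this is what intersecting two equal-radius compass arcs
    centered at p and q produces."""
    (x1, y1), (x2, y2) = p, q
    mid = ((x1 + x2) / 2, (y1 + y2) / 2)
    direction = (-(y2 - y1), x2 - x1)  # segment direction rotated by 90 deg
    return mid, direction
```

A renderer would then draw the arcs for visual fidelity but use the computed line for snapping and for exporting the drawing in a processable digital format.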

Keywords: geometrical drawings, html5 canvas, mathematical equations, toolbox

Procedia PDF Downloads 354
76 Sheep Pox Virus Recombinant Proteins To Develop Subunit Vaccines

Authors: Olga V. Chervyakova, Elmira T. Tailakova, Vitaliy M. Strochkov, Kulyaisan T. Sultankulova, Nurlan T. Sandybayev, Lev G. Nemchinov, Rosemarie W. Hammond

Abstract:

Sheep pox is a highly contagious infection that the OIE regards as one of the most dangerous animal diseases. It causes enormous economic losses because of the death and slaughter of infected animals, lower productivity, and the cost of veterinary, sanitary, and quarantine measures. To control the spread of sheep pox infection, attenuated vaccines are widely used in the Republic of Kazakhstan and other former Soviet Union countries. In spite of the high efficiency of live vaccines, the possible presence of residual virulence and potential genetic instability restrict their use in disease-free areas, which leads to the necessity of exploiting new approaches to vaccine development involving recombinant DNA technology. Vaccines on the basis of recombinant proteins are the newest generation of prophylactic preparations. The main advantage of these vaccines is their low reactogenicity, which makes them widely used in medical and veterinary practice for the vaccination of humans and farm animals. The objective of the study is to produce recombinant immunogenic proteins for the development of high-performance means of sheep pox prophylaxis. The SPV proteins were chosen for their homology with known immunogenic vaccinia virus proteins. Nucleotide and amino acid sequences of the target SPV protein genes were assayed. It was shown that four proteins, SPPV060 (ortholog L1), SPPV074 (ortholog H3), SPPV122 (ortholog A33) and SPPV141 (ortholog B5), possess transmembrane domains at the N- or C-terminus, while these domains are absent from the amino acid sequences of the SPPV095 (ortholog A4) and SPPV117 (ortholog A27) proteins. On the basis of these findings, primers were constructed. The target genes were amplified and subsequently cloned into the expression vector pET26b(+) or pET28b(+). Six constructs (pSPPV060ΔTM, pSPPV074ΔTM, pSPPV095, pSPPV117, pSPPV122ΔTM and pSPPV141ΔTM) were obtained for expression of the SPV genes under the control of the T7 promoter in Escherichia coli.
To purify and detect the recombinant proteins, the amino acid sequences were modified by adding six histidine residues at the C-terminus. Induction of gene expression by IPTG resulted in production of proteins with molecular weights corresponding to the estimated values for SPPV060, SPPV074, SPPV095, SPPV117, SPPV122 and SPPV141, i.e. 22, 30, 20, 19, 17 and 22 kDa, respectively. An optimal expression protocol for each gene that ensures a high yield of the recombinant protein was identified. Assay of cellular lysates by western blotting confirmed expression of the target proteins. The recombinant proteins bind specifically to antibodies against polyhistidine. Moreover, all produced proteins are specifically recognized by serum from experimentally SPV-infected sheep. The recombinant proteins SPPV060, SPPV074, SPPV117, SPPV122 and SPPV141 were also shown to induce the formation of antibodies with virus-neutralizing activity. The results of the research will help to develop new-generation, high-performance means for specific sheep pox prophylaxis, which is a key element of animal health protection. The research was conducted under the International project ISTC # K-1704 “Development of methods to construct recombinant prophylactic means for sheep pox with use of transgenic plants” and under the Grant Project RK MES G.2015/0115RK01983 "Recombinant vaccine for sheep pox prophylaxis".
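The His-tagging step above can be sketched in code. This is a minimal illustration of appending a C-terminal 6xHis tag to an amino acid sequence and estimating the tagged protein's molecular weight from average residue masses; the example sequence is purely illustrative, not an actual SPPV protein sequence.

```python
# Sketch: append a C-terminal 6xHis tag and estimate protein molecular weight.
# The sequence used below is illustrative, not a real SPPV protein.

# Average residue masses in Da (monoisotopic values would differ slightly).
RESIDUE_MASS = {
    "A": 71.08, "R": 156.19, "N": 114.10, "D": 115.09, "C": 103.14,
    "E": 129.12, "Q": 128.13, "G": 57.05, "H": 137.14, "I": 113.16,
    "L": 113.16, "K": 128.17, "M": 131.19, "F": 147.18, "P": 97.12,
    "S": 87.08, "T": 101.10, "W": 186.21, "Y": 163.18, "V": 99.13,
}
WATER = 18.02  # one water molecule added back for the chain termini

def add_his_tag(seq: str, n: int = 6) -> str:
    """Append an n-residue polyhistidine tag at the C-terminus."""
    return seq + "H" * n

def molecular_weight(seq: str) -> float:
    """Approximate molecular weight in Da from average residue masses."""
    return sum(RESIDUE_MASS[aa] for aa in seq) + WATER

tagged = add_his_tag("MKTAYIAKQR")
print(tagged)  # MKTAYIAKQRHHHHHH
print(round(molecular_weight(tagged) / 1000, 2), "kDa")
```

The same weight estimate, applied to the full cloned sequences, is how predicted sizes like the 17-30 kDa values above would be obtained before comparison with the western blot bands.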

Keywords: prophylactic preparation, recombinant protein, sheep pox virus, subunit vaccine

Procedia PDF Downloads 221
75 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olive is a valuable product, especially in Mediterranean countries. It is usually consumed after some fermentation process. Defects that occur naturally or result from an impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. Therefore, it is critical to sort table olives before processing, or even after processing, according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness, and inconsistency. Quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes, and rotting. In this study, the aim was to classify fresh table olives using different classifiers and NIR spectroscopy readings, and also to compare the classifiers. For this purpose, green (Ayvalik variety) olives were classified based on their surface properties as defect-free, with bruise defect, or with fly defect using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident, and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten–halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. 
Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks and the ident and cluster classification algorithms. For these applications, the Neural Network toolbox in Matlab and the ident and cluster modules in OPUS software were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised, and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives achieved success rates ranging between 97 and 99%, 61 and 94%, and 58.67 and 92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and from fly-defected olives using the ident method ranged between 75 and 97.5% and between 32.5 and 57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5 and 97.5% and between 22.5 and 57.5%.
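The spectral classification idea above can be illustrated with a small sketch. A nearest-mean-spectrum classifier is used here as a loose stand-in for the ident method, which matches a measured spectrum against class reference spectra by distance; the synthetic "spectra" and class offsets are invented for illustration and do not represent real olive reflectance data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for NIR reflectance spectra (200 wavelengths) of three
# olive classes: defect-free, bruised, and fly-defected. Real spectra would
# come from the FT-NIR instrument; these shapes are purely illustrative.
wavelengths = np.linspace(780, 2500, 200)
base = np.exp(-((wavelengths - 1400) / 600) ** 2)

def make_class(shift, n=30):
    """Simulate n noisy spectra for a class with a constant baseline shift."""
    return base + shift + rng.normal(0.0, 0.02, size=(n, wavelengths.size))

classes = {"good": 0.00, "bruised": 0.08, "fly": -0.06}
train = {name: make_class(shift) for name, shift in classes.items()}

# Nearest-mean-spectrum classifier: assign a spectrum to the class whose
# average (reference) spectrum it is closest to in Euclidean distance.
means = {name: spectra.mean(axis=0) for name, spectra in train.items()}

def classify(spectrum):
    return min(means, key=lambda name: np.linalg.norm(spectrum - means[name]))

test_spectrum = base + 0.08 + rng.normal(0.0, 0.02, wavelengths.size)
print(classify(test_spectrum))  # bruised
```

A backpropagation neural network, as used in the study, would replace the distance rule with learned nonlinear decision boundaries, which is one reason it can outperform distance-based methods on overlapping classes.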

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 226
74 The Role of Supply Chain Agility in Improving Manufacturing Resilience

Authors: Maryam Ziaee

Abstract:

This research proposes a new approach and provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers’ tastes, needs, and expectations while simultaneously enabling manufacturers to increase their profit. Mass customization is the production of products or services that meet each individual customer’s desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of goods’ design, assembly, sale, and delivery status, and falls into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even methods of delivery. Mass customization values customers’ tastes, but that is only one side of client satisfaction; the other side is the company’s fast and responsive delivery. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customers’ satisfaction, companies need to be quick in responding to customer demands, highlighting the significance of agility. This research offers a different method that successfully integrates mass customization and fast production in manufacturing industries. It is built upon the hypothesis that the key to being agile in mass customization is to forecast demand, cooperate with suppliers, and control inventory. The significance of the supply chain (SC) is therefore most pertinent at this stage. 
Since SC behavior is dynamic and changes constantly, companies have to apply a predictive technique to identify changes in SC behavior and respond properly to any unwelcome events. System dynamics, the simulation approach utilized in this research, provides a mathematical model relating different variables in order to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main platform of products is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized product is created. The main research question is to what extent applying system dynamics to the prediction of SC behavior improves the agility of mass customization. This research follows a qualitative approach to bring about richer, deeper, and more revealing results. The data is collected through interviews and analyzed using NVivo software. The proposed model offers numerous benefits, such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies’ responses to their clients’ needs and tastes, an increase in profits, and the optimization of productivity with a minimum level of lost sales.
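The system-dynamics idea above can be sketched as a minimal stock-and-flow simulation: an inventory stock, an order rate that corrects toward a target level, and a shipping delay that makes the feedback loop oscillate when demand shocks arrive. All parameters (demand level, lead time, adjustment time) are illustrative assumptions, not values from the study.

```python
from collections import deque

# Minimal system-dynamics sketch of a supply chain inventory stock.
# Orders chase demand plus a correction toward a target inventory, but
# arrive only after a shipping lead time, producing the delayed-feedback
# dynamics that system dynamics models are built to expose.
def simulate(steps=52, dt=1.0, lead_time=3):
    inventory = 100.0        # stock: units on hand
    target = 100.0           # desired inventory level
    adjustment_time = 4.0    # weeks over which an inventory gap is closed
    pipeline = deque([20.0] * lead_time)  # orders already in transit
    history = []
    for week in range(steps):
        demand = 20.0 + (5.0 if 10 <= week < 20 else 0.0)  # demand shock
        arrivals = pipeline.popleft()
        orders = demand + (target - inventory) / adjustment_time
        pipeline.append(orders)
        inventory += (arrivals - demand) * dt  # stock integrates net flow
        history.append(inventory)
    return history

traj = simulate()
print(f"min inventory during shock: {min(traj):.1f}")
```

Running the sketch shows the inventory dip and damped recovery that a demand shock produces under a shipping delay, which is the kind of behavior a full system-dynamics model would be used to forecast and control.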

Keywords: agility, manufacturing, resilience, supply chain

Procedia PDF Downloads 71
73 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements of practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image learning and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. 
The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that the DL ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
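The five-fold cross-validation split described above (a random 4:1 training/validation partition per fold) can be sketched as follows. The patient identifiers are hypothetical placeholders; the real study would split actual CT cases.

```python
import random

# Sketch of five-fold cross-validation: shuffle once, then let each fold's
# validation set be one fifth of the data and train on the remaining 4/5,
# giving the 4:1 training/validation ratio described above.
def five_fold_splits(ids, n_folds=5, seed=0):
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    fold_size = len(ids) // n_folds
    for k in range(n_folds):
        val = ids[k * fold_size:(k + 1) * fold_size]
        val_set = set(val)
        train = [i for i in ids if i not in val_set]
        yield train, val

patients = [f"patient_{i:04d}" for i in range(2500)]  # hypothetical IDs
for fold, (train, val) in enumerate(five_fold_splits(patients)):
    print(fold, len(train), len(val))  # each fold: 2000 train, 500 val
```

Because the validation sets partition the shuffled data, every patient is validated exactly once across the five folds, which is what makes the fold-averaged MAE a fair estimate of generalization.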

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 46
72 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included several features: the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred, such as retail, groceries, or online services, since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. 
The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
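The scoring idea above, deviation from learned normal behavior, can be illustrated with a numpy-only stand-in: instead of an autoencoder's reconstruction error, each transaction is scored by its standardized distance from the bulk of legitimate traffic. The feature set, threshold, and data below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy transactional features: [amount in dollars, hour of day / 24].
# Fit simple statistics on legitimate traffic, then flag transactions whose
# deviation from that profile exceeds a threshold, the same flag-by-deviation
# logic an autoencoder implements via reconstruction error.
normal = np.column_stack([
    rng.normal(50, 10, 1000),        # typical amounts around $50
    rng.normal(14, 3, 1000) / 24.0,  # mostly daytime activity
])
mean, std = normal.mean(axis=0), normal.std(axis=0)

def anomaly_score(x):
    """Mean absolute z-score across features (higher = more anomalous)."""
    return float(np.mean(np.abs((x - mean) / std)))

legit = np.array([55.0, 15.0 / 24.0])    # $55 purchase at 3 p.m.
fraud = np.array([4800.0, 3.0 / 24.0])   # large transfer at 3 a.m.

threshold = 3.0
print(anomaly_score(legit) > threshold)  # False
print(anomaly_score(fraud) > threshold)  # True
```

An autoencoder replaces the per-feature z-score with a learned nonlinear reconstruction, and an isolation forest replaces it with isolation depth, but the decision structure (score, threshold, flag) is the same.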

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 29
71 Quantification of Magnetic Resonance Elastography for Tissue Shear Modulus using U-Net Trained with Finite-Difference Time-Domain Simulation

Authors: Jiaying Zhang, Xin Mu, Chang Ni, Jeff L. Zhang

Abstract:

Magnetic resonance elastography (MRE) non-invasively assesses tissue elastic properties, such as shear modulus, by measuring tissue’s displacement in response to mechanical waves. The estimated metrics on tissue elasticity or stiffness have been shown to be valuable for monitoring physiologic or pathophysiologic status of tissue, such as a tumor or fatty liver. To quantify tissue shear modulus from MRE-acquired displacements (essentially an inverse problem), multiple approaches have been proposed, including Local Frequency Estimation (LFE) and Direct Inversion (DI). However, one common problem with these methods is that the estimates are severely noise-sensitive due to either the inverse-problem nature or noise propagation in the pixel-by-pixel process. With the advent of deep learning (DL) and its promise in solving inverse problems, a few groups in the field of MRE have explored the feasibility of using DL methods for quantifying shear modulus from MRE data. Most of the groups chose to use real MRE data for DL model training and to cut training images into smaller patches, which enriches feature characteristics of training data but inevitably increases computation time and results in outcomes with patched patterns. In this study, simulated wave images generated by Finite-Difference Time-Domain (FDTD) simulation are used for network training, and U-Net is used to extract features from each training image without cutting it into patches. The use of simulated data for model training has the flexibility of customizing training datasets to match specific applications. The proposed method aimed to estimate tissue shear modulus from MRE data with high robustness to noise and high model-training efficiency. Specifically, a set of 3000 maps of shear modulus (with a range of 1 kPa to 15 kPa) containing randomly positioned objects were simulated, and their corresponding wave images were generated. 
The two types of data were fed into the training of a U-Net model as its output and input, respectively. For an independently simulated set of 1000 images, the performance of the proposed method against DI and LFE was compared by the relative errors (root mean square error or RMSE divided by averaged shear modulus) between the true shear modulus map and the estimated ones. The results showed that the estimated shear modulus by the proposed method achieved a relative error of 4.91%±0.66%, substantially lower than 78.20%±1.11% by LFE. Using simulated data, the proposed method significantly outperformed LFE and DI in resilience to increasing noise levels and in resolving fine changes of shear modulus. The feasibility of the proposed method was also tested on MRE data acquired from phantoms and from human calf muscles, resulting in maps of shear modulus with low noise. In future work, the method's performance on phantoms and its repeatability on human data will be tested in a more quantitative manner. In conclusion, the proposed method showed much promise in quantifying tissue shear modulus from MRE with high robustness and efficiency.
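The evaluation metric defined above (RMSE between the true and estimated shear-modulus maps, divided by the average true shear modulus) can be written as a short sketch. The 4x4 "maps" are illustrative; real maps would be full-resolution model outputs.

```python
import numpy as np

# Relative error as defined above: RMSE between true and estimated
# shear-modulus maps, normalized by the average true shear modulus.
def relative_error(true_map, est_map):
    true_map = np.asarray(true_map, dtype=float)
    est_map = np.asarray(est_map, dtype=float)
    rmse = np.sqrt(np.mean((true_map - est_map) ** 2))
    return float(rmse / true_map.mean())

# Illustrative uniform 5 kPa phantom with a uniform 0.25 kPa overestimate.
true_map = np.full((4, 4), 5.0)
est_map = np.full((4, 4), 5.25)
print(relative_error(true_map, est_map))  # 0.05
```

Normalizing by the mean modulus is what makes errors comparable across maps whose stiffness ranges differ, e.g. the 1-15 kPa range simulated in the study.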

Keywords: deep learning, magnetic resonance elastography, magnetic resonance imaging, shear modulus estimation

Procedia PDF Downloads 37
70 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions to approximate the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases with on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially on a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. 
In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is well suited for GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
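The portable pre-file idea above, packing integer data into a binary stream with an explicit byte order so that files read identically across machines, can be sketched in a few lines. The field layout here (a count followed by element IDs) is a hypothetical simplification of the actual pre-file contents.

```python
import struct

# Sketch of a portable binary pre-file: integers packed with an explicit
# little-endian format ("<I"), so the bytes mean the same thing regardless
# of which machine or file system writes or reads them.
def write_prefile(path, element_ids):
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(element_ids)))             # record count
        f.write(struct.pack(f"<{len(element_ids)}I", *element_ids))

def read_prefile(path):
    with open(path, "rb") as f:
        (count,) = struct.unpack("<I", f.read(4))
        return list(struct.unpack(f"<{count}I", f.read(4 * count)))

write_prefile("node0.pre", [10, 42, 7, 99])
print(read_prefile("node0.pre"))  # [10, 42, 7, 99]
```

In the actual workflow, one such file per node (for GPFS) or a single file read with parallel MPI I/O (for Lustre) would carry the partitioning and connectivity integers each rank needs at startup.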

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 110
69 Field-Testing a Digital Music Notebook

Authors: Rena Upitis, Philip C. Abrami, Karen Boese

Abstract:

The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers? (2) Which features of Cadenza are most successful? (3) Which features could be improved? (4) Is student learning and motivation enhanced with the use of the Cadenza digital music notebook? 
The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions, by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza, and achieved higher results in examinations than students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements to the tool called for by the teachers included the ability to duplicate archived lessons, allowing for split-screen viewing, and adding goal setting to the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.

Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction

Procedia PDF Downloads 231
68 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India

Authors: Shubhangi Roy

Abstract:

Relational Contracts (informal understandings not enforceable by law) are a common feature of most economies. However, their dominance is higher in developing countries. Such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e., relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology involves an initial survey of 150 test subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume. However, the subjects are from varied socio-economic backgrounds to ensure sufficient variance in transaction values, allowing us to understand the relationship between the amount of money involved and the method of transaction used, if any. The questions asked are quantitative and qualitative, with an aim to observe both behavior and the motivation behind such behavior. An overarching similarity observed across all subjects’ responses is that in an economy like India’s, with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class, and gender classifications, these sanctions could, in fact, be more cumbersome for a potential rule-breaker than the legal ramifications. Informality, therefore, is a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate systems: a) arrangements that exist in addition to and augment a legal framework, creating an efficient socio-legal equilibrium, or b) arrangements in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements. 
Instead of considering the entire gamut of such arrangements as counter-developmental, it helps decision-makers understand when to dismantle informal systems (the latter) and when to pivot around them (the former). The paper hypothesizes that social arrangements that support formal legal frameworks allow for cheaper enforcement of regulations, with a lower enforcement-cost burden on the state. On the other hand, norms which contradict legal rules will undermine the formal framework. Law infringement, in the presence of these norms, will have no impact on the reputation of the business or individual beyond the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is low. In such a situation, social norms will be adhered to more strictly than legal norms. This greatly undermines the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we expand our understanding of the tools of market development beyond regulations, allowing academics and policymakers to harness social norms for less disruptive and more lasting growth.

Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms

Procedia PDF Downloads 142
67 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process

Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc

Abstract:

In order to meet high growth rates in the baby diaper industry worldwide, high-speed textile backsheet (TBS) lamination lines have recently been introduced to the market for nonwoven/film lamination applications. It is a process in which two substrates are bonded to each other via hotmelt adhesive (HMA). The nonwoven (NW) lamination system basically consists of four components: polypropylene (PP) nonwoven, polyethylene (PE) film, HMA, and the applicator system. Each component has a substantial effect on the process efficiency of the continuous line and the final product properties. However, to keep the subject focused, we will address only the main challenges and possible solutions in this paper. The NW is often produced by the spunbond method (SSS or SMS configuration) and has a basis weight of 10-12 gsm (g/m²). The NW rolls can have a width and length of up to 2,060 mm and 30,000 linear meters, respectively. The PE film is the second component in TBS lamination, usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber based) that can be applied over a large range of viscosities. The main HMA application technology in TBS lamination is the slot die, in which HMA is spread in the melt form on top of the NW along the whole width at high temperatures. The NW is then passed over chiller rolls with a certain open time depending on the line speed. HMAs are applied at certain levels in order to give the entire structure proper delamination strength in the cross and machine directions. Current TBS lamination line speed and width can be as high as 800 m/min and 2,100 mm, respectively. The lines also feature an automated web tension control system for winders and unwinders. When running continuous, trouble-free mass production campaigns on fast industrial TBS lines, the rheological properties of HMAs and the micro-properties of NWs can adversely affect line efficiency and continuity. 
NW fiber orientation and fineness, as well as the spunbond/meltblown composition of the fabric at the micro level, are significant factors affecting the degree of HMA bleed-through. As a result of this problem, frequent line stops are needed to clean the glue that accumulates on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important, and to eliminate the bleed-through problem one should have a good understanding of rheology-driven potential complications. The applied viscosity and temperature should therefore be optimized in accordance with the line speed, line width, NW characteristics, and the required open time for a given HMA formulation. In this study, we show practical aspects of potential preventative actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spunbond/meltblown fiber characteristics.
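The viscosity-temperature optimization mentioned above is often approximated with an Arrhenius-type relation, eta(T) = A*exp(Ea/(R*T)), which can then be inverted to pick an application temperature for a target viscosity. The sketch below uses this common first approximation; the parameters A and Ea are illustrative assumptions, not measured values for any real HMA formulation.

```python
import math

# Arrhenius-type viscosity-temperature sketch for a hotmelt adhesive:
# eta(T) = A * exp(Ea / (R * T)), with T in kelvin.
# A (Pa*s) and Ea (J/mol) below are illustrative, not measured values.
R = 8.314  # gas constant, J/(mol*K)

def viscosity(temp_c, A=1e-4, Ea=45e3):
    """Model viscosity in Pa*s at temperature temp_c (deg C)."""
    return A * math.exp(Ea / (R * (temp_c + 273.15)))

def temperature_for(target_visc, A=1e-4, Ea=45e3):
    """Invert the model: application temperature (deg C) for a target viscosity."""
    return Ea / (R * math.log(target_visc / A)) - 273.15

print(round(viscosity(160), 2), "Pa*s at 160 C")
print(round(temperature_for(viscosity(160)), 1))  # round-trips to 160.0
```

Because viscosity falls roughly exponentially with temperature in this model, a modest temperature increase gives a large viscosity drop, which is exactly the trade-off (flow vs. bleed-through risk) that must be balanced against line speed and open time.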

Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown

Procedia PDF Downloads 331
66 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization

Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo

Abstract:

Population growth and industrial development imply an increase in energy demands and in the problems caused by emissions of greenhouse gases, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above those that define the critical point of water (temperature of 374°C and pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of gases and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for exploring the possibilities of obtaining streams rich in H₂ from oily wastes, which represent a major problem both for the environment and for human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, the different reactors, and the energy recovery systems. 
For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve technology efficiency through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems, in order to reduce the costs of equipment and operation derived from the high temperature and pressure conditions that are necessary to bring water to the supercritical state, as well as to find solutions to mitigate corrosion and clogging of reactor components.
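The operating window that defines SCWG can be expressed as a simple threshold check against the critical point of water quoted above (374°C, 221 bar). This is a sketch only; it applies the stated thresholds and implies no equation of state.

```python
# Classify whether given conditions put water in the supercritical state,
# using the critical point quoted in the abstract (374 °C, 221 bar).

T_CRIT_C = 374.0    # critical temperature of water, °C
P_CRIT_BAR = 221.0  # critical pressure of water, bar

def is_supercritical(temp_c: float, pressure_bar: float) -> bool:
    """True when both temperature and pressure exceed the critical point."""
    return temp_c > T_CRIT_C and pressure_bar > P_CRIT_BAR

# Illustrative SCWG operating conditions lie above the critical point:
print(is_supercritical(600.0, 250.0))   # supercritical
# Conventional steam reforming runs at high T but far lower pressure:
print(is_supercritical(850.0, 30.0))    # not supercritical
```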

Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy

Procedia PDF Downloads 126
65 Creating Moments and Memories: An Evaluation of the Starlight 'Moments' Program for Palliative Children, Adolescents and Their Families

Authors: C. Treadgold, S. Sivaraman

Abstract:

The Starlight Children's Foundation (Starlight) is an Australian non-profit organisation that delivers programs, in partnership with health professionals, to support children, adolescents, and their families who are living with a serious illness. While supporting children and adolescents with life-limiting conditions has always been a feature of Starlight's work, providing a dedicated program, specifically targeting and meeting the needs of the paediatric palliative population, is a recent area of focus. Recognising the challenges in providing children’s palliative services, Starlight initiated a research and development project to better understand and meet the needs of this group. The aim was to create a program which enhances the wellbeing of children, adolescents, and their families receiving paediatric palliative care in their community through the provision of on-going, tailored, positive experiences or 'moments'. This paper will present the results of the formative evaluation of this unique program, highlighting the development processes and outcomes of the pilot. The pilot was designed using an innovation methodology, which included a number of research components. There was a strong belief that it needed to be delivered in partnership with a dedicated palliative care team, helping to ensure the best interests of the family were always represented. This resulted in Starlight collaborating with both the Victorian Paediatric Palliative Care Program (VPPCP) at the Royal Children's Hospital, Melbourne, and the Sydney Children's Hospital Network (SCHN) to pilot the 'Moments' program. As experts in 'positive disruption', with a long history of collaborating with health professionals, Starlight was well placed to deliver a program which helps children, adolescents, and their families to experience moments of joy, connection and achieve their own sense of accomplishment. 
Building on Starlight’s evidence-based approach and experience in creative service delivery, the program aims to use the power of 'positive disruption' to brighten the lives of this group and create important memories. The clinical and Starlight team members collaborate to ensure that the child and family are at the centre of the program. The design of each experience is specific to their needs and ensures the creation of positive memories and family connection. It aims for each moment to enhance quality of life. The partnership with the VPPCP and SCHN has allowed the program to reach families across metropolitan and regional locations. In late 2019, a formative evaluation of the pilot was conducted utilising both quantitative and qualitative methodologies to document both the delivery and the outcomes of the program. Central to the evaluation were the interviews conducted with both clinical teams and families in order to gain a comprehensive understanding of the impact of, and satisfaction with, the program. The findings, which will be shared in this presentation, provide practical insight into the delivery of the program, the key elements of its success with families, and areas which could benefit from additional research and focus. It will use stories and case studies from the pilot to highlight the impact of the program and to discuss what opportunities, challenges, and learnings emerged.

Keywords: children, families, memory making, pediatric palliative care, support

Procedia PDF Downloads 82
64 Raman Spectroscopy of Fossil-like Feature in Sooke #1 from Vancouver Island

Authors: J. A. Sawicki, C. Ebrahimi

Abstract:

The first geochemical, petrological, X-ray diffraction, Raman, Mössbauer, and oxygen isotopic analyses of the very intriguing 13-kg Sooke #1 stone, covered on 70% of its surface with black fusion crust, found in and recovered from Sooke Basin, near Juan de Fuca Strait, in British Columbia, were reported as poster #2775 at LPSC52 in March. Our further analyses, reported in poster #6305 at 84AMMS in August, and comparisons with the Mössbauer spectra of Martian meteorite MIL03346 and of Martian rocks in Gusev Crater reported by Morris et al., suggest that the Sooke #1 find could be a stony achondrite of Martian polymict breccia type ejected from early watery Mars. Here, the Raman spectra of a carbon-rich ~1-mm² fossil-like white area identified in this rock on the surface of a polished cut have been examined in more detail. The low-intensity 532 nm and 633 nm beams of the Renishaw inVia microscope were used to avoid any destructive effects. The beam was focused through the microscope objective to a 2 μm spot on the sample, and the backscattered light collected through this objective was recorded with a CCD detector. Raman spectra of dark areas outside the fossil showed bands of clinopyroxene at 320, 660, and 1020 cm⁻¹ and small peaks of forsteritic olivine at 820-840 cm⁻¹, in agreement with the results of X-ray diffraction and Mössbauer analyses. Raman spectra of the white area showed the broad D band at ~1310 cm⁻¹, consisting of the main A1g mode at 1305 cm⁻¹, the E2g mode at 1245 cm⁻¹, and the E1g mode at 1355 cm⁻¹, due to stretching of diamond-like sp3 bonds in the diamond polytype lonsdaleite, as in the study of Ovsyuk et al. The band near 1600 cm⁻¹ mostly consists of the D2 band at 1620 cm⁻¹ and not of the narrower G band at 1583 cm⁻¹ due to E2g stretching in planar sp2 bonds, which are the fundamental building blocks of the carbon allotropes graphite and graphene. In addition, broad second-order Raman bands were observed with the 532 nm beam at 2150, ~2340, ~2500, 2650, 2800, 2970, 3140, and ~3300 cm⁻¹ shifts. 
Second-order bands in diamond and other carbon structures are ascribed to combinations of bands observed in the first-order region: here 2650 cm⁻¹ as the 2D, 2970 cm⁻¹ as the D+G, and 3140 cm⁻¹ as the 2G band. Nanodiamonds are abundant in the Universe, found in meteorites, interplanetary dust particles, comets, and carbon-rich stars. The diamonds in meteorites are presently being intensely investigated using Raman spectroscopy. Such particles can be formed by a CVD process and during major impact shocks at ~1000-2300 K and ~30-40 GPa. It cannot be excluded that the fossil discovered in Sooke #1 could be a remnant of an alien carbon organism that transformed under shock impact to nanodiamonds. We trust that, for the benefit of research in the astro-bio-geology of meteorites, asteroids, Martian rocks, and soil, this find deserves further, more thorough investigation. If possible, the SHERLOCK Raman spectrometer operating on the Perseverance rover should also search for such objects in Martian rocks.
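The combination-band assignments above can be checked arithmetically. The sketch below uses the first-order positions quoted in the abstract (D at ~1310 cm⁻¹ and the band near 1600 cm⁻¹); real overtone positions shift with dispersion, so the simple sums only approximate the observed second-order bands.

```python
# Second-order Raman bands as simple combinations of first-order modes,
# using the positions quoted in the abstract. The sums deviate from the
# observed bands (2650, 2970, 3140 cm⁻¹) because overtones disperse.

D_BAND = 1310.0  # cm⁻¹, broad D band
G_BAND = 1600.0  # cm⁻¹, band near 1600 cm⁻¹ (mostly D2 at 1620 cm⁻¹)

combinations = {
    "2D":  2 * D_BAND,        # 2620 cm⁻¹ vs. observed 2650 cm⁻¹
    "D+G": D_BAND + G_BAND,   # 2910 cm⁻¹ vs. observed 2970 cm⁻¹
    "2G":  2 * G_BAND,        # 3200 cm⁻¹ vs. observed 3140 cm⁻¹
}

observed = {"2D": 2650.0, "D+G": 2970.0, "2G": 3140.0}
# Each simple sum lands within ~100 cm⁻¹ of the observed band position.
deviations = {k: abs(combinations[k] - observed[k]) for k in observed}
```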

Keywords: achondrite, nanodiamonds, lonsdaleite, Raman spectra

Procedia PDF Downloads 125
63 Hybrid Materials on the Basis of Magnetite and Magnetite-Gold Nanoparticles for Biomedical Application

Authors: Mariia V. Efremova, Iana O. Tcareva, Anastasia D. Blokhina, Ivan S. Grebennikov, Anastasia S. Garanina, Maxim A. Abakumov, Yury I. Golovin, Alexander G. Savchenko, Alexander G. Majouga, Natalya L. Klyachko

Abstract:

During the last decades, magnetite nanoparticles (NPs) have attracted deep interest from scientists due to their potential application in therapy and diagnostics. However, magnetite nanoparticles are toxic and non-stable under physiological conditions. To solve these problems, we decided to create two types of hybrid systems based on magnetite and gold, which is inert and biocompatible: gold as a shell material (first type) and gold as separate NPs interfacially bonded to magnetite NPs (second type). The synthesis of the first-type hybrid nanoparticles was carried out as follows: magnetite nanoparticles with an average diameter of 9±2 nm were obtained by co-precipitation of iron (II, III) chlorides and were then covered with a gold shell by iterative reduction of hydrogen tetrachloroaurate with hydroxylamine hydrochloride. According to the TEM, ICP-MS, and EDX data, the final nanoparticles had an average diameter of 31±4 nm and contained iron even after hydrochloric acid treatment. However, the iron signals (K-line, 7.1 keV) were not localized, so we cannot speak of one single magnetic core. The described nanoparticles, covered with mercapto-PEG acid, were non-toxic for human prostate cancer PC-3/LNCaP cell lines (more than 90% of cells survived as compared to control) and had high R2-relaxivity rates (>190 mM⁻¹s⁻¹) that exceed the transverse relaxation rates of commercial MRI contrast agents. These nanoparticles were also used for chymotrypsin enzyme immobilization. An effect of alternating magnetic field on the catalytic properties of chymotrypsin immobilized on magnetite nanoparticles, notably a slowdown of the catalyzed reaction at the level of 35-40%, was found. The synthesis of the second-type hybrid nanoparticles also involved two steps. 
Firstly, spherical gold nanoparticles with an average diameter of 9±2 nm were synthesized by the reduction of hydrogen tetrachloroaurate with oleylamine; secondly, they were used as seeds during magnetite synthesis by thermal decomposition of iron pentacarbonyl in octadecene. As a result, so-called dumbbell-like structures were obtained in which magnetite (cubes with a 25±6 nm diagonal) and gold nanoparticles were connected together pairwise. By the HRTEM method (for the first time for this type of structure), epitaxial growth of magnetite nanoparticles on the gold surface with co-orientation of (111) planes was discovered. These nanoparticles were transferred into water by means of the block-copolymer Pluronic F127 and then loaded with the anti-cancer drug doxorubicin and also with a PSMA vector specific for the LNCaP cell line. The obtained nanoparticles were found to have moderate toxicity for human prostate cancer cells and entered the intracellular space after 45 minutes of incubation (according to fluorescence microscopy data). These materials are also promising from the MRI point of view (R2-relaxivity rates >70 mM⁻¹s⁻¹). Thereby, in this work, magnetite-gold hybrid nanoparticles, which have a strong potential for biomedical application, particularly in targeted drug delivery and magnetic resonance imaging, were synthesized and characterized. This paves the way to the development of a special type of medicine: theranostics. The authors acknowledge financial support from the Ministry of Education and Science of the Russian Federation (14.607.21.0132, RFMEFI60715X0132). This work was also supported by Grant of the Ministry of Education and Science of the Russian Federation К1-2014-022, Grant of the Russian Scientific Foundation 14-13-00731, and MSU development program 5.13.

Keywords: drug delivery, magnetite-gold, MRI contrast agents, nanoparticles, toxicity

Procedia PDF Downloads 356
62 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis

Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski

Abstract:

The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes of the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolution, spatial subdivision, and value encoding. All the resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application. 
The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows for the use of vector-formatted administrative areas and user-defined polygons as definitions of subareas for data retrieval. Administrative areas of the country at four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, for the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository enables expert users to introduce new kinds of interactive online analysis operations.
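The multiresolution selection described above (using the level closest to the visual scale) can be sketched as a lookup over a resolution pyramid. The cell sizes below are hypothetical, not the actual GeoCubes Finland levels.

```python
# Sketch: given a pyramid of pre-computed resolution levels, pick the
# level whose cell size is closest to the requested visual scale, so
# access cost stays roughly constant regardless of zoom.
# The level values are illustrative assumptions.

LEVELS_M = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]  # cell sizes, metres

def pick_level(target_cell_m: float) -> int:
    """Return the pyramid cell size closest to the requested scale."""
    return min(LEVELS_M, key=lambda cell: abs(cell - target_cell_m))

# An interactive view needing ~7 m cells is served from the 5 m level;
# a country-wide statistical overview might request the 100 m level.
level = pick_level(7)
```

Because every level is pre-generalized at ingest time, the selection is a cheap lookup rather than an on-the-fly resampling step.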

Keywords: cloud service, geodata cube, multiresolution, raster geodata

Procedia PDF Downloads 110
61 Fort Conger: A Virtual Museum and Virtual Interactive World for Exploring Science in the 19th Century

Authors: Richard Levy, Peter Dawson

Abstract:

Ft. Conger, located in the Canadian Arctic, was one of the most remote 19th-century scientific stations. Established in 1881 on Ellesmere Island, the wood-framed structure served as a permanent base from which to conduct scientific research. Under the charge of Lt. Greely, Ft. Conger hosted one of 14 expeditions conducted during the First International Polar Year (FIPY). Our research project, “From Science to Survival: Using Virtual Exhibits to Communicate the Significance of Polar Heritage Sites in the Canadian Arctic,” focused on the creation of a virtual museum website dedicated to one of the most important polar heritage sites in the Canadian Arctic. This website was developed under a grant from the Virtual Museum of Canada and enables visitors to explore the fort’s site from 1875 to the present: http://fortconger.org. Heritage sites are often viewed as static places. A goal of this project was to present the change that occurred over time as each new group of explorers adapted the site to their needs. The site was first visited by the British explorer George Nares in 1875-76. Only later did the United States government select this site for the Lady Franklin Bay Expedition (1881-84), with research to be conducted under the FIPY (1882-83). Still later, Robert Peary and Matthew Henson attempted to reach the North Pole from Ft. Conger in 1899, 1905, and 1908. A central focus of this research is the virtual reconstruction of Ft. Conger. In the summer of 2010, a Zoller+Fröhlich Imager 5006i and a Minolta Vivid 910 laser scanner were used to scan terrain and artifacts. Once the scanning was completed, the point clouds were registered and edited to form the basis of a virtual reconstruction. A goal of this project has been to allow visitors to step back in time and explore the interiors of these buildings with all of their artifacts. 
Links to text, historic documents, animations, panorama images, computer games, and virtual labs provide explanations of how science was conducted during the 19th century. A major feature of this virtual world is the timeline. Visitors to the website can begin to explore the site when George Nares, in his ship HMS Discovery, appeared in the harbor in 1875. With the arrival of Lt. Greely’s expedition in 1881, we can track the progress made in establishing a scientific outpost. Still later, in 1901, with Peary’s presence, the site is transformed again, with the huts having been built from materials salvaged from Greely’s main building. Finally, in 2010, we can visit the site in its present state of deterioration and learn about the laser scanning technology that was used to document it. The Science and Survival at Fort Conger project represents one of the first attempts to use virtual worlds to communicate the historical and scientific significance of polar heritage sites where opportunities for first-hand visitor experiences are not possible because of remote location.

Keywords: 3D imaging, multimedia, virtual reality, Arctic

Procedia PDF Downloads 394
60 Burial Findings in Prehistory Qatar: Archaeological Perspective

Authors: Sherine El-Menshawy

Abstract:

Death, funerary beliefs, and customs form an essential feature of belief systems and practices in many cultures. It is evident that during the prehistoric periods, various techniques of corpse burial and funerary rituals were practiced. Occasionally, corpses were merely buried in the sand, or in a grave where the body was placed in a contracted position (with knees drawn up under the chin and hands normally lying before the face) with mounds of sand marking the grave, or the bodies were burnt. However, the common practice demonstrable in the archaeological record was burial. The earliest graves were very simple, consisting of shallow circular or oval pits in the ground. The current study focuses on the material culture of Qatar during the prehistoric period, specifically funerary architecture and burial practices. Since information about burial customs and funerary practices in Qatar prehistory is both scarce and fragmentary, the importance of such a study lies in answering research questions related to funerary beliefs and burial habits during the early stages of civilizational transformation in prehistoric Qatar compared with Mesopotamia, since, chronologically, the earliest pottery discovered in Qatar, collected from the excavations, belongs to the prehistoric Ubaid culture of Mesopotamia. This will lead to a deep understanding of life and social status in the prehistoric period in Qatar. The research also explores the relationship between the funerary traditions of prehistoric Qatar and those of the neighboring cultures of Mesopotamia and Ancient Egypt, with the aim of ascertaining the distinctive aspects of prehistoric Qatar culture, the reception of classical culture, and the role it played in the creation of local cultural identities in the Near East. The methodology of this study is based on published books and articles, in addition to unpublished reports of the Danish excavation team that excavated archaeological sites in and around Doha, Qatar, from the 1950s. 
The study also draws on comparative material related to burial customs found in Mesopotamia. This research therefore: (i) advances knowledge of the burial customs of the ancient people who inhabited Qatar, a subject that has so far received little scholarly attention; the study will support a deeper understanding of the history of ancient Qatar and its culture and values, with the aim of sharing this invaluable human heritage. (ii) The study is of special significance for the field, since evidence derived from it has great value for the study of living conditions, social structure, religious beliefs, and ritual practices. (iii) Excavations have brought to light burials of different categories. The graves date to the Bronze and Iron Ages. Their structure varies between mounds above the ground and burials below ground level. Evidence comes from sites such as Al-Da’asa, Ras Abruk, and Al-Khor. Painted Ubaid sherds of Mesopotamian culture have been discovered in Qatar at sites such as Al-Da’asa, Ras Abruk, and Bir Zekrit. In conclusion, no comprehensive study has been done, and the lack of a general synthesis of information about funerary practices is problematic. This study will therefore fill the gaps in the area.

Keywords: archaeological, burial, findings, prehistory, Qatar

Procedia PDF Downloads 117
59 The Employment of Unmanned Aircraft Systems for Identification and Classification of Helicopter Landing Zones and Airdrop Zones in Calamity Situations

Authors: Marielcio Lacerda, Angelo Paulino, Elcio Shiguemori, Alvaro Damiao, Lamartine Guimaraes, Camila Anjos

Abstract:

Accurate information about the terrain is extremely important in disaster management or conflict situations. This paper proposes the use of Unmanned Aircraft Systems (UAS) for the identification of Airdrop Zones (AZs) and Helicopter Landing Zones (HLZs). In this paper, we consider AZs to be the zones where troops or supplies are dropped by parachute, and HLZs to be areas where victims can be rescued. The use of digital image processing enables the automatic generation of an orthorectified mosaic and an accurate Digital Surface Model (DSM). This methodology allows this information, fundamental to understanding the terrain post-disaster, to be obtained in a short amount of time and with good accuracy. For the identification and classification of AZs and HLZs, images from a DJI Phantom 4 drone were used. The images were obtained with the knowledge and authorization of the responsible sectors and were duly registered with the control agencies. The flight was performed on May 24, 2017, and approximately 1,300 images were obtained during approximately 1 hour of flight. Afterward, new attributes were generated by Feature Extraction (FE) from the original images. The use of multispectral images and of complementary attributes generated independently from them increases the accuracy of classification. The attributes used in this work include the Declivity Map and Principal Component Analysis (PCA). For the classification, four distinct classes were considered: HLZ 1 – small size (18 m x 18 m); HLZ 2 – medium size (23 m x 23 m); HLZ 3 – large size (28 m x 28 m); AZ (100 m x 100 m). The decision tree method Random Forest (RF) was used in this work. RF is a classification method that uses a large collection of de-correlated decision trees. Different random sets of samples are used as sampled objects. The result of classification from each tree for each object is called a class vote. The resulting classification is decided by a majority of class votes. 
In this case, we used 200 trees for the execution of RF in the software WEKA 3.8. The classification result was visualized in QGIS Desktop 2.12.3. Through the methodology used, it was possible to classify in the study area: 6 areas as HLZ 1, 6 areas as HLZ 2, 4 areas as HLZ 3, and 2 areas as AZ. It should be noted that an area classified as AZ covers the classifications of the other classes and may be used as an AZ or as an HLZ for large (HLZ 3), medium (HLZ 2), or small (HLZ 1) helicopters. Likewise, an area classified as an HLZ for large rotary-wing aircraft (HLZ 3) covers the smaller-area classifications, and so on. It was concluded that images obtained through small UAVs are of great use in calamity situations, since they can provide data with high accuracy, at low cost and low risk, and with ease and agility in obtaining aerial photographs. This allows the generation, in a short time, of information about the features of the terrain to serve as an important decision-support tool.
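The class-vote mechanism described above can be sketched in a few lines. The vote counts below are invented for illustration; the actual model was trained with 200 trees in WEKA 3.8.

```python
# Sketch of Random Forest aggregation: each decision tree casts one class
# vote per object, and the forest's prediction is the majority class.
# The vote distribution below is a hypothetical example.

from collections import Counter

def majority_vote(tree_votes):
    """Aggregate per-tree class votes into the forest's prediction."""
    winner, _count = Counter(tree_votes).most_common(1)[0]
    return winner

# Hypothetical votes from 200 trees for one image object
# (classes as in the paper: HLZ1, HLZ2, HLZ3, AZ):
votes = ["HLZ2"] * 120 + ["HLZ3"] * 50 + ["AZ"] * 30
prediction = majority_vote(votes)   # majority class wins
```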

Keywords: disaster management, unmanned aircraft systems, helicopter landing zones, airdrop zones, random forest

Procedia PDF Downloads 151
58 Numerical Model of Crude Glycerol Autothermal Reforming to Hydrogen-Rich Syngas

Authors: A. Odoom, A. Salama, H. Ibrahim

Abstract:

Hydrogen is a clean source of energy for power production and transportation. The main source of hydrogen in this research is biodiesel. Glycerol, also called glycerine, is a by-product of biodiesel production by transesterification of vegetable oils and methanol. This is a more reliable and environmentally friendly source of hydrogen than fossil fuels. A typical composition of crude glycerol comprises glycerol, water, organic and inorganic salts, soap, methanol, and small amounts of glycerides. Crude glycerol has limited industrial application due to its low purity; thus, the usage of crude glycerol can significantly enhance the sustainability and production of biodiesel. Reforming is an approach to hydrogen production, mainly Steam Reforming (SR), Autothermal Reforming (ATR), and Partial Oxidation Reforming (POR). SR produces high hydrogen conversion and yield but is highly endothermic, whereas POR is exothermic. On the downside, POR yields less hydrogen as well as a large number of side reactions. ATR, which is a fusion of partial oxidation reforming and steam reforming, is thermally neutral because the net reactor heat duty is zero. It has a relatively high hydrogen yield and selectivity and limits coke formation. The complex chemical processes that take place during the production phases make it relatively difficult to construct a reliable and robust numerical model. A numerical model is a tool to mimic reality and provide insight into the influence of the parameters. In this work, we introduce a finite volume numerical study for an 'in-house' lab-scale experiment of ATR. Previous numerical studies on this process have used either Comsol or nodal finite difference analysis. Comsol, however, is a commercial package that is not readily available everywhere, and a lab-scale experiment can be considered well mixed in the radial direction. 
One spatial dimension therefore suffices to capture the essential features of ATR; in this work, we develop our own numerical approach using MATLAB. A continuum fixed-bed reactor is modelled in MATLAB with both pseudo-homogeneous and heterogeneous models. The drawback of the nodal finite difference formulation is that it is not locally conservative, which means that material and momentum can be generated inside the domain as an artifact of the discretization. The control volume method, on the other hand, is locally conservative and suits very well problems in which materials are generated and consumed inside the domain. In this work, the species mass balance, Darcy’s equation, and the energy equations are solved using an operator splitting technique. Diffusion-like terms are discretized implicitly, while advection-like terms are discretized explicitly. An upwind scheme is adopted for the advection term to ensure accuracy and positivity. Comparisons with the experimental data show very good agreement, which builds confidence in our modeling approach. The models obtained were validated and optimized for better results.
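The splitting described above (explicit upwind advection, implicit diffusion) can be illustrated on a 1-D transport equation. This is a minimal sketch in Python rather than the paper's MATLAB code, with illustrative grid, speed, and diffusion values, not the reactor's actual parameters; it shows why the upwind step preserves positivity under the CFL condition.

```python
# Operator-split step for 1-D advection-diffusion: explicit upwind for
# advection, backward Euler for diffusion (tridiagonal solve via the
# Thomas algorithm). All parameter values are illustrative assumptions.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def step(conc, u, diff, dx, dt):
    """One split step: explicit upwind advection, then implicit diffusion."""
    n = len(conc)
    cfl = u * dt / dx
    assert 0.0 <= cfl <= 1.0  # upwind positivity (CFL) condition
    # Explicit upwind advection (flow in +x, zero inflow at the left):
    # each new value is a convex combination of old values, hence positive.
    adv = [conc[i] - cfl * (conc[i] - (conc[i - 1] if i > 0 else 0.0))
           for i in range(n)]
    # Implicit diffusion: (I - r*L) c_new = adv, zero Dirichlet boundaries.
    r = diff * dt / dx ** 2
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    a[0] = c[-1] = 0.0
    return thomas(a, b, c, adv)

# Transport and smear a pulse of feed concentration along the domain.
conc = [1.0 if 10 <= i < 20 else 0.0 for i in range(50)]
for _ in range(20):
    conc = step(conc, u=1.0, diff=0.05, dx=0.1, dt=0.05)
```

With CFL = 0.5 here, the concentration stays within its initial bounds, which is the positivity property the upwind scheme is adopted for.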

Keywords: autothermal reforming, crude glycerol, hydrogen, numerical model

Procedia PDF Downloads 120
57 Statistical Models and Time Series Forecasting on Crime Data in Nepal

Authors: Dila Ram Bhandari

Abstract:

Throughout the 20th century, new governments were created in which identities such as ethnic, religious, linguistic, caste, communal, and tribal played a part in the development of constitutions and the legal system of victim and criminal justice. Acute issues with extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups have recently plagued South Asian nations. Every day, a massive number of crimes are committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a point of contention that can create societal disturbance. Old-style crime-solving practices are unable to live up to the requirements of existing crime situations. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia and Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this internship were to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days. 
The experiments were split into two main groups: statistical models and neural network models. On the crime data, the neural networks outperformed the statistical models. This study gives a general review of the applied statistical and neural network models, and a comparative analysis of all models on the same dataset provides a detailed picture of each model's performance and generalizability. The experiments demonstrated that, compared with the other models, Gated Recurrent Units (GRU) produced the best predictions. The crime records for 2005-2019 were collected from the Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could help the police predict crime; hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
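A GRU keeps a hidden state that is rewritten at each time step through a reset gate and an update gate, which is what lets it retain longer-range patterns in a daily crime series. The single-step forward pass below is a minimal sketch of the standard GRU equations, reduced to a scalar hidden state for clarity; the weights are placeholder values, not a model trained on the crime data described in the abstract.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gru_step(x, h, p):
    """One GRU time step (scalar hidden state for readability)."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])        # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])  # candidate state
    return (1.0 - z) * h + z * h_cand                       # blend old state and candidate

# Placeholder weights; a real model learns these from the historical series.
p = dict(wz=0.5, uz=0.1, bz=0.0, wr=0.5, ur=0.1, br=0.0,
         wh=0.8, uh=0.4, bh=0.0)

h = 0.0
for x in [0.3, 0.5, 0.4]:   # toy normalised daily crime counts
    h = gru_step(x, h, p)
# A linear readout on h would produce the next-day forecast; iterating the
# readout-and-step loop seven times yields the seven-day horizon used here.
```

Because the tanh bounds the candidate state and the update gate only interpolates, the hidden state stays in (-1, 1), which keeps iterated multi-step forecasts numerically stable.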

Keywords: time series analysis, forecasting, ARIMA, machine learning

Procedia PDF Downloads 141
56 A Designing 3D Model: Castle of the Mall-Dern

Authors: Nanadcha Sinjindawong

Abstract:

This article discusses the design process of a community mall called Castle of The Mall-dern. The concept behind the mall is to combine elements of a medieval castle with modern architecture: the author aims to create a building that fits into its surroundings while also giving users the atmosphere of the ancient era. The total area of the mall is 4,000 square meters across three floors: the first floor is 1,500 square meters, the second floor 1,750 square meters, and the third floor 750 square meters. Research Aim: The aim of this research is to design a community mall that sells period clothing and accessories, and to combine sustainable architectural design with the ideas of ancient architecture in an urban area with convenient transportation. Methodology: The research uses qualitative methods in architectural design. The process begins with calculating the given area and dividing it into zones. The author then sketches and draws the plan of each floor, adding the necessary rooms based on the floor areas mentioned above. The program "SketchUp" is used to create a 3D model of the community mall, and a physical model is built for presentation purposes on A1 paper, explaining all the details. Findings: The result of this research is a community mall with various amenities. The first floor includes retail shops, clothing stores, a food center, and a service zone, as well as an indoor garden with a fountain and a tree for relaxation. The second and third floors feature a central void, with a few stores, cafes, restaurants, and studios on the second floor. The third floor houses the administration and security control room, as well as a community gathering area designed as a public library with a café inside. Theoretical Importance: This research contributes to the field of sustainable architectural design by combining ancient architectural ideas with modern elements.
It showcases the potential for creating buildings that blend historical aesthetics with contemporary functionality. Data Collection and Analysis Procedures: The data for this research is collected through a combination of area calculation, sketching, and 3D modelling. The analysis evaluates the design against the allocated area, zoning, and functional requirements of a community mall. Question Addressed: The research addresses the question of how to design a community mall themed on the Medieval and Victorian eras, exploring how to combine sustainable architectural design principles with historical aesthetics to create a functional and visually appealing space. Conclusion: In conclusion, this research successfully designs a community mall called “Castle of The Mall-dern” that incorporates elements of Medieval and Victorian architecture. The building encompasses various zones, including retail shops, restaurants, community gathering areas, and service zones, and features an interior garden and a public library within the mall. The research contributes to the field of sustainable architectural design by showcasing the potential for combining ancient architectural ideas with modern elements in an urban setting.

Keywords: 3D model, community mall, modern architecture, medieval architecture

Procedia PDF Downloads 72