Search results for: heterogeneous combat network
2743 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint
Authors: Semachew M. Kassa, Africa M. Geremew, Tezera F. Azmatch, Nandyala Darga Kumar
Abstract:
Frequency ratio (FR) and analytical hierarchy process (AHP) methods are commonly developed from past landslide failure points to produce landslide susceptibility maps, because landslides can seriously harm both the environment and society. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for a particular region. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, including Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at a 12.5 m spatial resolution. According to the classification results based on inventory landslide points, the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods performed significantly better than the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods. The findings also showed that around 35% of the study region consisted of areas with high and very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns of landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. The primary contributing factors to landslide vulnerability varied slightly across the five models; however, rainfall, distance to road, and slope were typically among the top leading factors for most villages. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies.
It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine
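The model comparison described above can be sketched with scikit-learn. This is an illustrative stand-in, not the authors' code: the data are synthetic placeholders for the fourteen LCFs and the landslide inventory points, and the 0.5 susceptibility threshold follows the abstract.

```python
# Illustrative sketch: comparing the five classifiers named in the abstract
# on synthetic stand-in data (the real study used 14 conditioning factors).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score, roc_auc_score

# Stand-in for landslide / non-landslide points with 14 conditioning factors
X, y = make_classification(n_samples=1000, n_features=14, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "NB": GaussianNB(),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]        # susceptibility in [0, 1]
    scores[name] = (f1_score(y_te, proba > 0.5),   # threshold 0.5, as in the study
                    roc_auc_score(y_te, proba))
```

Each model's susceptibility output can then be mapped back onto the study-area grid to produce the susceptibility map.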
Procedia PDF Downloads 84
2742 A Theoretical Approach of Tesla Pump
Authors: Cristian Sirbu-Dragomir, Stefan-Mihai Sofian, Adrian Predescu
Abstract:
This paper aims to study Tesla pumps for circulating biofluids; the goal is a small pump for biofluid circulation. This type of pump is studied because it has the following characteristics: it has no blades, which results in very low friction; reduced friction forces; low production cost; increased adaptability to different types of fluids; low cavitation (close to zero); low shocks due to the lack of blades; rare maintenance due to low cavitation; very small turbulence in the fluid; few changes in the direction of the fluid (compared to bladed rotors); increased efficiency at low powers; fast acceleration; a low torque requirement; and no blade shocks at sudden starts and stops. All these elements are necessary to make a pump small enough to be inserted into the thoracic cavity. The pump will be designed to combat myocardial infarction. Because the pump must be inserted in the thoracic cavity, elements such as low friction forces, shocks as low as possible, low cavitation, and as little maintenance as possible are very important. The operation should be performed once, without having to change the rotor after a certain time. Given the very small size of the pump, the blades of a classic rotor would be very thin, and sudden starts and stops could cause considerable damage or require a very expensive material. At the same time, being a medical device, low cost is important so that it is easily accessible to the population. The lack of turbulence or vortices caused by a classic rotor is again a key element: when it comes to blood circulation, the flow must be laminar, not turbulent, since turbulent flow can even cause a heart attack. Due to these aspects, Tesla's model could be ideal for this work. Usually, this pump is considered to reach an efficiency of 40% when used at very high powers.
However, the inventor of this type of pump claimed that the maximum efficiency it can achieve is 98%. The key element that could help approach this efficiency is that the pump will be used at low volumes and pressures. The parameters that most affect efficiency in this model are the number of discs placed in parallel and the distance between them. The distance between the discs must be small, which also helps keep the pump as small as possible. Such a rotor consists of several parallel discs with central openings: the spaces between the discs draw the liquid in through the central holes and throw it outwards. The viscosity of the liquid is also very important, as it dictates the disc spacing needed to transfer power with minimal losses.
Keywords: lubrication, temperature, tesla-pump, viscosity
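The viscosity-spacing relation mentioned above is often made concrete in the Tesla-pump literature by a boundary-layer rule of thumb: the optimal inter-disc gap is of the order of sqrt(nu/omega). This is an assumption for illustration, not taken from the abstract, and the blood viscosity and rotor speed below are hypothetical values.

```python
# Rough design sketch (assumption, not from the paper): a common rule of
# thumb for Tesla-type rotors places the inter-disc gap on the order of the
# laminar boundary-layer thickness, delta ~ sqrt(nu / omega), so that the
# boundary layers fill the gap and momentum transfer to the fluid is high.
import math

def disc_gap_estimate(kinematic_viscosity_m2s: float, rpm: float) -> float:
    """Return an order-of-magnitude inter-disc gap in metres."""
    omega = rpm * 2.0 * math.pi / 60.0          # rotor speed in rad/s
    return math.sqrt(kinematic_viscosity_m2s / omega)

# Whole blood: nu ~ 3.3e-6 m^2/s (illustrative value); small pump at 5000 rpm
gap = disc_gap_estimate(3.3e-6, 5000.0)
```

For these illustrative numbers the estimate comes out below a tenth of a millimetre, consistent with the abstract's remark that a small gap yields both good efficiency and a compact pump.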
Procedia PDF Downloads 179
2741 Associations between Physical Activity and Risk Factors for Type II Diabetes in Prediabetic Adults
Authors: Rukia Yosuf
Abstract:
Diabetes is a national healthcare crisis involving both macrovascular and microvascular complications. We hypothesized that higher levels of physical activity are associated with lower total and visceral fat mass, lower systolic blood pressure, and increased insulin sensitivity. Participant inclusion criteria: 21-50 years old, BMI ≥ 30 kg/m2, hemoglobin A1C 5.7-6.4, fasting glucose 100-125 mg/dL, and HOMA-IR ≥ 2.5. Exclusion criteria: history of diabetes, hypertension, HIV, renal disease, hearing loss, alcohol intake over four drinks daily, use of organic nitrates or PDE5 inhibitors, and decreased cardiac function. Total physical activity was measured using accelerometers, body composition using DXA, and insulin resistance via fsIVGTT. Among the clinical and biochemical cardiometabolic risk factors, blood pressure and heart rate were obtained using a calibrated sphygmomanometer, while anthropometric measures, fasting glucose, insulin, lipid profile, C-reactive protein, and BMP were analyzed using standard procedures. Within our study, we found correlations between physical activity levels and cardiometabolic risk factors in a heterogeneous group of prediabetic adults. Patients with more physical activity had a higher degree of insulin sensitivity, lower blood pressure, lower total visceral adipose tissue, and lower total mass overall. Total physical activity levels showed small but significant correlations with systolic blood pressure, visceral fat, lean mass, and insulin sensitivity. After adjusting for race, age, and gender using multiple regression, these associations were no longer significant, likely owing to our small sample size. More research into prediabetes could help reduce the overall population of diabetics. In the future, we could increase the sample size and conduct cross-sectional and longitudinal studies in various populations with prediabetes.
Keywords: diabetes, kidney disease, nephrology, prediabetes
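The covariate-adjustment step described above can be sketched with ordinary least squares. The data below are synthetic and the variable names are hypothetical stand-ins for the study's measurements; this only illustrates the multiple-regression adjustment, not the authors' analysis.

```python
# Illustrative sketch of adjusting an activity-outcome association for
# covariates via OLS. Synthetic data; names are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n = 40                                     # small sample, as in the study
activity = rng.normal(size=n)              # total physical activity (z-scored)
age = rng.normal(size=n)                   # covariate
insulin_sens = 0.3 * activity - 0.1 * age + rng.normal(scale=1.0, size=n)

# Design matrix: intercept, activity, covariate(s)
X = np.column_stack([np.ones(n), activity, age])
beta, *_ = np.linalg.lstsq(X, insulin_sens, rcond=None)
activity_coef = float(beta[1])             # association adjusted for age
```

With small n, an adjusted coefficient of this size can easily fail to reach significance, which mirrors the abstract's finding after adjusting for race, age, and gender.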
Procedia PDF Downloads 187
2740 The Multiple Sclerosis Condition and the Role of Varicella-Zoster Virus in Its Progression
Authors: Sina Mahdavi, Mahdi Asghari Ozma
Abstract:
Multiple sclerosis (MS) is the most common inflammatory autoimmune disease of the central nervous system (CNS), affecting the myelination process. Complex interactions of various environmental or infectious factors may act as triggers of autoimmunity and disease progression. The association between viral infections, especially human Varicella-zoster virus (VZV), and MS is one potential cause that is not well understood. This study aims to summarize the available data on VZV infection in MS disease progression. For this study, the keywords "Multiple sclerosis", "Human Varicella-zoster virus", and "central nervous system" were searched in the databases PubMed, Google Scholar, SID, and MagIran for the period 2016-2022, and 14 articles were chosen, studied, and analyzed. Analysis of the amino acid sequences of HNRNPA1 and VZV proteins has shown 62% amino acid sequence similarity between VZV gE and the PrLD/M9 epitope region (TNPO1 binding domain) of mutant HNRNPA1. The heterogeneous nuclear ribonucleoprotein (hnRNP) produced by HNRNPA1 is involved in the processing and transfer of mRNA and pre-mRNA. Mutant HNRNPA1 mimics the gE of VZV as an antigen, which leads to autoantibody production. Mutant HNRNPA1 translocates to the cytoplasm and, after aggregation, is presented by MHC class I and recognized by CD8+ cells. Consequently, antibodies and immune cells raised against the gE epitopes of VZV persist because of the memory immune response, causing neurodegeneration and the development of MS in genetically predisposed individuals. VZV expression during the course of MS in genetically predisposed individuals with the HNRNPA1 mutation suggests a link between VZV and MS, and this virus may play a role in the development of MS by inducing an inflammatory state.
Therefore, measures to modulate VZV expression may be effective in reducing inflammatory processes in demyelinated areas of MS patients in genetically predisposed individuals.
Keywords: multiple sclerosis, varicella-zoster virus, central nervous system, autoimmunity
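The percent-similarity comparison between VZV gE and the HNRNPA1 epitope region described above is, at its core, a pairwise sequence comparison. The toy peptide strings below are hypothetical placeholders, NOT the real VZV gE or HNRNPA1 PrLD/M9 sequences; this only sketches the kind of calculation involved.

```python
# Toy illustration of a pairwise sequence-similarity comparison. The peptide
# strings are made up for demonstration; real analyses would use the actual
# VZV gE and mutant HNRNPA1 sequences with an alignment tool.
from difflib import SequenceMatcher

vzv_ge_fragment = "MGTVNKPVVGVLMGFGIITGTLRITN"      # hypothetical
hnrnpa1_fragment = "MGTVNKPAVGVLMGFGLITNTLRITN"     # hypothetical

ratio = SequenceMatcher(None, vzv_ge_fragment, hnrnpa1_fragment).ratio()
percent_identity = round(100 * ratio)
```

A dedicated alignment method (e.g. a substitution-matrix-scored local alignment) would be preferred over this simple character-match ratio for real protein epitope work.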
Procedia PDF Downloads 76
2739 A Study of Resin-Dye Fixation on Dyeing Properties of Cotton Fabrics Using Melamine Based Resins and a Reactive Dye
Authors: Nurudeen Ayeni, Kasali Bello, Ovi Abayeh
Abstract:
A study of the effect of dye-resin complexation on the degree of dye absorption was carried out using Procion Blue MX-R to dye cotton fabric in the presence of hexamethylol melamine (MR 6) and its phosphate derivative (MPR 4) for resination. The highest degree of dye exhaustion was obtained at 40 °C for 1 hour, with the resinated fabric showing more affinity for the dye than the ordinary fibre. Improved fastness properties were recorded, showing the relatively higher stability of the dye-resin-cellulose network formed.
Keywords: cotton fabric, reactive dye, dyeing, resination
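The "degree of dye exhaustion" quoted above is conventionally computed from dyebath absorbance before and after dyeing, since absorbance is proportional to dye concentration at fixed path length (Beer-Lambert). The sketch below assumes this standard procedure; the absorbance readings are hypothetical, not the study's data.

```python
# Sketch of the standard dyebath-exhaustion calculation (an assumption about
# the procedure, not code from the study): %E = 100 * (A0 - Af) / A0, using
# absorbance at the dye's lambda_max as a proxy for dye concentration.
def percent_exhaustion(absorbance_initial: float, absorbance_final: float) -> float:
    """Percentage of dye transferred from the bath to the fabric."""
    return 100.0 * (absorbance_initial - absorbance_final) / absorbance_initial

# Hypothetical readings: resinated fabric vs untreated fabric
e_resinated = percent_exhaustion(1.20, 0.30)
e_ordinary = percent_exhaustion(1.20, 0.55)
```

Comparing the two values reproduces the qualitative finding above: higher exhaustion for the resinated fabric indicates its greater affinity for the dye.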
Procedia PDF Downloads 408
2738 Designing Intelligent Adaptive Controller for Nonlinear Pendulum Dynamical System
Authors: R. Ghasemi, M. R. Rahimi Khoygani
Abstract:
This paper proposes the design of a direct adaptive neural controller for a class of nonlinear pendulum dynamical systems. The radial basis function (RBF) neural adaptive controller is robust in the presence of external and internal uncertainties. Both the effectiveness of the controller and its robustness against disturbances are demonstrated. The simulation results show the promising performance of the proposed controller.
Keywords: adaptive neural controller, nonlinear dynamical system, neural network, RBF, driven pendulum, position control
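The core of such a controller can be sketched in a few lines: the control signal is a weighted sum of Gaussian radial basis functions of the tracking-error state, with the weights updated online. The centres, widths, and the simple adaptation law below are illustrative assumptions, not the authors' design.

```python
# Minimal sketch of a Gaussian RBF network as used in direct adaptive neural
# control. Centres/widths/adaptation law are illustrative assumptions.
import numpy as np

def rbf_features(x, centres, width):
    """Gaussian radial basis activations for state vector x."""
    return np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * width ** 2))

def control_output(x, weights, centres, width):
    """Control signal u = w^T * phi(x)."""
    return float(weights @ rbf_features(x, centres, width))

# Pendulum error state [angle error, angular-velocity error], 5 RBF centres
centres = np.linspace(-1.0, 1.0, 5).reshape(-1, 1) * np.ones((1, 2))
weights = np.zeros(5)
x = np.array([0.1, -0.05])

# One step of a simple gradient adaptation law: w <- w + eta * e * phi(x)
eta, tracking_error = 0.5, 0.1
weights = weights + eta * tracking_error * rbf_features(x, centres, 0.5)
u = control_output(x, weights, centres, 0.5)
```

In a full design, the adaptation law is derived from a Lyapunov argument so that boundedness of the weights and tracking error can be guaranteed.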
Procedia PDF Downloads 482
2737 From Homogeneous to Phase Separated UV-Cured Interpenetrating Polymer Networks: Influence of the System Composition on Properties and Microstructure
Authors: Caroline Rocco, Feyza Karasu, Céline Croutxé-Barghorn, Xavier Allonas, Maxime Lecompère, Gérard Riess, Yujing Zhang, Catarina Esteves, Leendert van der Ven, Rolf van Benthem, Gijsbertus de With
Abstract:
Acrylates are widely used in UV-curing technology. Their high reactivity can, however, limit their conversion due to early vitrification. In addition, free radical photopolymerization is known to be sensitive to oxygen inhibition, leading to tacky surfaces. Although epoxides can reach full polymerization, they are sensitive to humidity and exhibit a low polymerization rate. To overcome the intrinsic limitations of both classes of monomers, Interpenetrating Polymer Networks (IPNs) can be synthesized. They consist of at least two cross-linked polymers which are permanently entangled. They can be obtained by thermal and/or light-induced polymerization in a one- or two-step approach. IPNs can display homogeneous to heterogeneous morphologies with various degrees of phase separation, strongly linked to monomer miscibility and to the synthesis parameters. In this presentation, we synthesize UV-cured methacrylate-epoxide based IPNs with different chemical compositions in order to get a better understanding of their formation and phase separation. Miscibility before and during photopolymerization, reaction kinetics, as well as mechanical properties and morphology have been investigated. The key parameters controlling the morphology and the phase separation, namely monomer miscibility and synthesis parameters, have been identified. By monitoring stiffness changes on the film surface, atomic force acoustic microscopy (AFAM) gave, in conjunction with polymerization kinetic profiles and thermomechanical properties, explanations that corroborated the miscibility predictions. When varying the methacrylate/epoxide ratio, it was possible to move from a miscible and highly interpenetrated IPN to a totally immiscible and phase-separated one.
Keywords: investigation of properties and morphology, kinetics, phase separation, UV-cured IPNs
Procedia PDF Downloads 370
2736 ANDASA: A Web Environment for Artistic and Cultural Data Representation
Authors: Carole Salis, Marie F. Wilson, Fabrizio Murgia, Cristian Lai, Franco Atzori, Giulia M. Orrù
Abstract:
ANDASA is a knowledge management platform for the capitalization of knowledge and cultural assets in the artistic and cultural sectors. It was built based on the priorities expressed by the participating artists. By mapping artistic activities and specificities, it makes it possible to highlight various aspects of artistic research and production. Such an instrument will contribute to creating networks and partnerships, as it shows who does what, in what field, and using which methodology. The platform is accessible to network participants and to the general public.
Keywords: cultural promotion, knowledge representation, cultural mapping, ICT
Procedia PDF Downloads 427
2735 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified with an optimization technique using SolidWorks and ANSYS Workbench (ANSYS Inc.). The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network in a cohort of 30 healthy volunteers: - The time evolution of the blood vessel contours, and thus of the cross-sectional area, was measured by 3D angiography sequences of phase-contrast MRI. - The blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then transformed with SolidWorks into volumes compatible with ANSYS software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the case of the fluid domain to improve the accuracy of the fluid flow calculations.
ANSYS Structural was used for the numerical simulation of the vessel deformation and ANSYS CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law for the common carotid were identified using the ANSYS design module. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies.
Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
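The inverse-identification idea can be illustrated on a much-reduced model. The full study couples ANSYS FSI simulations with an optimizer, but the principle can be sketched by assuming a linearized thin-walled-tube (Laplace-type) pressure-radius law, p ≈ E·h·(r − r0)/r0², and fitting an effective stiffness E to synthetic pressure/radius pairs. The geometry, pressures, and noise level below are illustrative assumptions, not the study's data.

```python
# Greatly simplified stand-in for the inverse analysis: fit an effective
# wall stiffness E to synthetic (pressure, radius) pairs generated from a
# linearised thin-walled-tube law p = E*h*(r - r0)/r0**2 plus "MRI" noise.
import numpy as np

h, r0 = 0.8e-3, 3.5e-3                  # wall thickness, rest radius (m)
E_true = 800e3                          # "true" stiffness, Pa (order from study)

pressures = np.linspace(8e3, 16e3, 20)  # roughly 60-120 mmHg, in Pa
radii = r0 + pressures * r0**2 / (E_true * h)
radii += np.random.default_rng(1).normal(scale=1e-6, size=radii.size)  # noise

# Closed-form least-squares identification of E from the (p, r) pairs
strain = (radii - r0) / r0**2
E_fit = float(np.sum(pressures * strain) / (h * np.sum(strain**2)))
```

In the actual method, each optimizer iteration replaces this algebraic law with a full FSI simulation, and the misfit is evaluated against the MRI-measured contours rather than a radius formula.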
Procedia PDF Downloads 319
2734 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is well known to carry an extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the biomarkers aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential to create a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise.
Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic with more clinical samples to further improve the model's clinical applicability.
Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
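The two ingredients the abstract names can be sketched in a few lines of numpy: a residual block (output = transformed input + identity skip connection) and a composite loss that adds a physics-residual penalty to the usual data-misfit term. Layer sizes, the batch of "displacement vectors", and the toy physics residual below are assumptions for demonstration only, not the authors' architecture.

```python
# Illustrative numpy sketch of (a) a residual block and (b) a
# physics-informed composite loss. All sizes and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w1, w2):
    """y = x + W2 * relu(W1 * x): the skip connection defines a ResNet block."""
    hidden = np.maximum(0.0, x @ w1)
    return x + hidden @ w2

d = 16                                   # feature width (hypothetical)
x = rng.normal(size=(8, d))              # batch of 8 displacement-vector inputs
w1 = rng.normal(scale=0.1, size=(d, d))
w2 = rng.normal(scale=0.1, size=(d, d))
y = residual_block(x, w1, w2)

# Physics-informed loss = data misfit + weight * physics residual
target = rng.normal(size=y.shape)
data_loss = np.mean((y - target) ** 2)
physics_residual = np.mean(y ** 2)       # stand-in for e.g. a PDE residual
loss = data_loss + 0.1 * physics_residual
```

In a real physics-informed network, the physics term penalizes the residual of the governing equations (e.g. momentum balance of the blood flow) evaluated by automatic differentiation of the network outputs.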
Procedia PDF Downloads 90
2733 A Near Ambient Pressure X-Ray Photoelectron Spectroscopy Study on Platinum Nanoparticles Supported on Zr-Based Metal Organic Frameworks
Authors: Reza Vakili, Xiaolei Fan, Alex Walton
Abstract:
The first near ambient pressure (NAP)-XPS study of CO oxidation over Pt nanoparticles (NPs) incorporated into Zr-based UiO (Universitetet i Oslo) MOFs was carried out. For this purpose, the MOF-based catalysts were prepared by wetness impregnation (WI-PtNPs@UiO-67) and linker design (LD-PtNPs@UiO-67) methods, along with PtNPs@ZrO₂ as the control catalyst. Firstly, the as-synthesized catalysts were reduced in situ prior to the operando XPS analysis. The existence of Pt(II) species in UiO-67 was shown by the Pt 4f core-level peaks at a high binding energy (BE) of 72.6 ± 0.1 eV. However, on heating the WI-PtNPs@UiO-67 catalyst in situ to 200 °C under vacuum, the higher-BE components disappear, leaving only the metallic Pt 4f doublet and confirming the formation of Pt NPs. The complete reduction of LD-PtNPs@UiO-67 is achieved at 250 °C and 1 mbar H₂. To understand the chemical state of the Pt NPs in UiO-67 during catalytic turnover, we analyzed the Pt 4f region using operando NAP-XPS in temperature-programmed measurements (100-260 °C), with the PtNPs@ZrO₂ catalyst as reference. CO conversion during the NAP-XPS experiments with the stoichiometric mixture shows that LD-PtNPs@UiO-67 has a better CO turnover frequency (TOF, 0.066 s⁻¹ at 260 °C) than the other two (ca. 0.055 s⁻¹). The Pt 4f peaks show only one chemical species present at all temperatures, but the core-level BE changes as a function of reaction temperature, i.e., the Pt 4f peak shifts from 71.8 eV at T < 200 °C to 71.2 eV at T > 200 °C. As this higher-BE state of 71.8 eV was not observed after in situ reduction of the catalysts, and appeared only once the CO/O₂ mixture was introduced, we attribute it to surface saturation of the Pt NPs with adsorbed CO. In general, the quantitative analysis of the Pt 4f data from the operando NAP-XPS experiments shows that the surface chemistry of the Pt active phase in the two PtNPs@UiO-67 catalysts is the same, comparable to that of PtNPs@ZrO₂.
The observed difference in catalytic activity can be attributed to the particle sizes of the Pt NPs, as well as to the dispersion of the active phase in the support, which differ in the three catalysts.
Keywords: CO oxidation, heterogeneous catalysis, MOFs, metal organic frameworks, NAP-XPS, near ambient pressure X-ray photoelectron spectroscopy
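A turnover frequency like the 0.066 s⁻¹ quoted above is, in general, the number of CO molecules converted per exposed Pt site per second. The sketch below shows that standard bookkeeping; all numerical inputs (feed rate, conversion, Pt loading, dispersion) are hypothetical, not the study's reactor values.

```python
# Back-of-the-envelope sketch of a CO-oxidation turnover-frequency
# calculation. All inputs are hypothetical illustrative values.
def turnover_frequency(co_molar_flow_mol_s: float,
                       conversion: float,
                       pt_moles: float,
                       dispersion: float) -> float:
    """TOF = (moles of CO converted per second) / (moles of surface Pt)."""
    surface_pt = pt_moles * dispersion      # dispersion = surface Pt fraction
    return co_molar_flow_mol_s * conversion / surface_pt

tof = turnover_frequency(co_molar_flow_mol_s=1.0e-6,
                         conversion=0.30,
                         pt_moles=1.0e-5,
                         dispersion=0.45)
```

The dispersion term is why particle size matters, as the closing sentence above notes: smaller Pt NPs expose a larger fraction of their atoms and so convert more CO per mole of metal.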
Procedia PDF Downloads 139
2732 Structural and Morphological Characterization of Inorganic Deposits in Spinal Ligaments
Authors: Sylwia Orzechowska, Andrzej Wróbel, Eugeniusz Rokita
Abstract:
Mineralization is a puzzling problem in connective tissues. The factors that may play a decisive role in regulating yellow ligament (YL) mineralization are still open questions. The aim of this study was a detailed description of the chemical composition and morphology of mineral deposits in human yellow ligaments. Investigations of the structural features of the deposits were used to explain the impact of various factors on the mineralization process. The studies were carried out on 24 YL samples, surgically removed from patients suffering from spinal canal stenosis and from patients who had sustained a trauma. Micro-computed tomography was used to describe the morphology of the mineral deposits. The X-ray fluorescence method and Fourier transform infrared spectroscopy were applied to determine the chemical composition of the samples. In order to eliminate the effect of blur in microtomographic images, a correction method for the partial volume effect was used. Mineral deposits appear in 60% of the YL samples, both in patients with stenosis and following injury. The mineral deposits have a heterogeneous structure: they are a mixture of tissue and mineral grains. The volume of the mineral grains amounts to (1.9 ± 3.4)×10⁻³ mm³, while the density distribution of the grains falls into two distinct ranges (1.75-2.15 and 2.15-2.50 g/cm³). Application of the partial volume effect correction allows accurate calculations by eliminating the averaging effect of gray levels in tomographic images. B-type carbonate-containing hydroxyapatite constitutes the mineral phase of the majority of the YLs, while the main phase of two samples was calcium pyrophosphate dihydrate (CPPD). The elemental composition of the minerals in all samples is almost identical. This pathology may be independent of spinal diseases, and it does not itself evoke canal stenosis. The two ranges of grain density indicate two stages of grain growth and degrees of maturity.
The presence of CPPD crystals may coexist with other pathologies.
Keywords: FTIR, micro-tomography, mineralization, spinal ligaments
Procedia PDF Downloads 377
2731 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with non-binary gender pronouns and how machines can process these pronouns correctly given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies.
The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
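One common way the literature quantifies the adjective asymmetry described above is an embedding-based bias probe: compare how close an adjective's vector sits to "she" versus "he" by cosine similarity. The 3-dimensional vectors below are made up for illustration; real analyses use embeddings trained on large corpora (e.g. word2vec or GloVe).

```python
# Toy sketch of a cosine-similarity bias probe. The vectors are hypothetical
# stand-ins for trained word embeddings.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

vectors = {                      # hypothetical embedding vectors
    "she": np.array([0.9, 0.1, 0.0]),
    "he": np.array([0.1, 0.9, 0.0]),
    "hysterical": np.array([0.8, 0.2, 0.1]),
}

# Positive score => the adjective sits closer to "she" than to "he"
bias_score = (cosine(vectors["hysterical"], vectors["she"])
              - cosine(vectors["hysterical"], vectors["he"]))
```

Aggregating such scores over a lexicon of derogatory adjectives gives a corpus-level picture of the asymmetry, which is exactly the kind of pattern the computational analysis above looks for.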
Procedia PDF Downloads 121
2730 The Impact of City Mobility on Propagation of Infectious Diseases: Mathematical Modelling Approach
Authors: Asrat M. Belachew, Tiago Pereira (Institute of Mathematics and Computer Sciences, Avenida Trabalhador São-Carlense, 400, São Carlos, 13566-590, Brazil)
Abstract:
Infectious diseases are among the most prominent threats to human beings. They cause morbidity and mortality in individuals and can collapse the social, economic, and political systems of the whole world. Mathematical models are fundamental tools for providing a comprehensive understanding of how infectious diseases spread and for designing control strategies to mitigate them in the host population. Modeling the spread of infectious diseases with a compartmental model of a homogeneous population is attractive in terms of complexity. In the real world, however, there is heterogeneity, such as in ages, locations, and contact patterns of the population, which is ignored in the homogeneous setting. In this work, we study how the classical SEIR compartmental model of infectious disease spreading can be extended by incorporating the mobility of the population between heterogeneous cities during an outbreak. We formulate an SEIR multi-city epidemic spreading model using a system of 4k ordinary differential equations that describes the disease transmission dynamics in k cities during the day and night. We show that the model is epidemiologically (i.e., the variables have a biological interpretation) and mathematically (i.e., a unique bounded solution exists for all time) well-posed. We construct the next-generation matrix (NGM) for the model and calculate the basic reproduction number R0 for the SEIR epidemic spreading model with city mobility. R0 depends on the spectral radius of the mobility operator, and it is a threshold between asymptotic stability of the disease-free equilibrium and disease persistence. Using the eigenvalue perturbation theorem, we show that sending a fraction of the population between cities decreases the reproduction number of the disease in the interconnected cities.
As a result, disease transmission decreases in the population.
Keywords: SEIR-model, mathematical model, city mobility, epidemic spreading
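The multi-city idea can be sketched for k = 2: a standard SEIR model in two coupled cities, where each row of a mobility matrix gives the fraction of a city's residents present in each city. The parameter values, the mobility matrix, and the simple forward-Euler integration below are illustrative assumptions; the paper works with the full 4k-ODE day/night system and a next-generation-matrix analysis.

```python
# Minimal two-city SEIR sketch with commuter coupling. Parameters and the
# Euler integration are illustrative assumptions, not the paper's model.
import numpy as np

beta, sigma, gamma = 0.4, 1 / 5.0, 1 / 7.0     # infection, incubation, recovery
m = np.array([[0.9, 0.1],                      # m[i, j] = fraction of city i
              [0.2, 0.8]])                     # residents present in city j

def seir_step(S, E, I, R, dt):
    N = S + E + I + R
    N_eff = m.T @ N                            # people present in each city
    I_eff = m.T @ I                            # infecteds present in each city
    force = beta * m @ (I_eff / N_eff)         # force of infection on residents
    dS = -force * S
    dE = force * S - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR

S = np.array([9990.0, 5000.0]); E = np.array([10.0, 0.0])
I = np.zeros(2); R = np.zeros(2)
for _ in range(int(120 / 0.1)):                # 120 days, dt = 0.1
    S, E, I, R = seir_step(S, E, I, R, 0.1)
```

Even though city 2 starts with no exposed individuals, the mobility coupling carries the outbreak there, illustrating how the spectral radius of the mobility operator enters R0.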
Procedia PDF Downloads 109
2729 Development of an Improved Paradigm for the Tourism Sector in the Department of Huila, Colombia: A Theoretical and Empirical Approach
Authors: Laura N. Bolivar T.
Abstract:
The importance of tourism for regional development is mainly highlighted by the collaborative, cooperative, and competitive relationships of the agents involved. The fostering of associativity processes, in particular the cluster approach, emphasizes the beneficial outcomes of the concentration of enterprises, where innovation and entrepreneurship flourish and shape the dynamics of tourism empowerment. The department of Huila is located in the south-west of Colombia and is the country's largest coffee producer, although it contributes little to the national GDP. Hence, its economic development strategy is looking for more dynamism, and Huila could be consolidated as a leading destination for cultural, ecological, and heritage tourism if, at a minimum, the public policy-making processes for the tourism management of the Tatacoa Desert, San Agustín Park, and Bambuco's National Festival were implemented in a more efficient manner. Along these lines, this study addresses the potential restrictions on and beneficial factors for the consolidation of the tourism sector of Huila, Colombia as a cluster, and how this could impact its regional development. Therefore, a set of theoretical frameworks, such as the Tourism Routes Approach, the Tourism Breeding Environment, and the Community-Based Tourism Method, among others, together with a collection of international experiences describing tourism clustering processes and their most salient problems, is analyzed to draw learning points, procedural structures, and success-driven factors to be contrasted with the local characteristics of Huila, the region under study.
This characterization involves primary and secondary information collection methods and comprises the South American and Colombian context, together with the identification of the actors involved and their roles, the main interactions among them, the major tourism products and their infrastructure, the visitors' perspective on the situation, and a recap of the related needs and benefits regarding the host community. Considering the umbrella concepts, the theoretical and empirical approaches, and their comparison with the local specificities of the tourism sector in Huila, an array of shortcomings is analytically constructed, and a series of guidelines is proposed as a way to overcome them and simultaneously raise economic development and positively impact Huila's well-being. This non-exhaustive bundle of guidelines focuses on fostering cooperative linkages in the actors' network, dealing with innovations in Information and Communication Technologies, reinforcing the supporting infrastructure, promoting the destinations (considering the lesser-known places as well), designing an information system that enables the tourism network to assess the situation based on reliable data, increasing competitiveness, developing participative public policy-making processes, and empowering the host community with respect to its touristic richness. Accordingly, cluster dynamics would drive the tourism sector towards articulation and joint effort, and the agents involved and local particularities would be adequately assisted in coping with the current changing environment of globalization and competition.
Keywords: innovative strategy, local development, network of tourism actors, tourism cluster
Procedia PDF Downloads 142
2728 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
A brain-computer interface (BCI) system converts a person's movement intentions into commands for action using brain signals such as the electroencephalogram (EEG). When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI control signals. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two characteristics of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then an analysis-of-variance (ANOVA) method is used to select the more appropriate and informative channels from a large set of candidates. After ordering the channels by their efficiency, a sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine (SVM), extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification precision of 97% compared to the other approaches. 
The overall findings confirm that the suggested framework is reliable and computationally efficient for the construction of BCI systems and surpasses existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine
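The ANOVA channel-ranking step described above can be sketched as follows. This is a minimal illustration under assumptions: per-trial channel features (e.g. log band power after CAR filtering) are already extracted, there are two MI classes, and the greedy pick of the top-k F-scores stands in for a full sequential forward selection, which would re-evaluate a classifier at each step. Function names and the feature representation are ours, not the authors' code.

```python
import numpy as np

def anova_f_scores(trials, labels):
    """Rank EEG channels by one-way ANOVA F-score between MI classes.

    trials: array (n_trials, n_channels) of per-trial channel features;
    labels: array of class labels. Returns one F-score per channel
    (higher = more class-discriminative).
    """
    trials = np.asarray(trials, dtype=float)
    labels = np.asarray(labels)
    groups = [trials[labels == c] for c in np.unique(labels)]
    grand = trials.mean(axis=0)
    # Between-group sum of squares, per channel
    ss_between = sum(len(g) * (g.mean(axis=0) - grand) ** 2 for g in groups)
    df_between = len(groups) - 1
    # Within-group sum of squares, per channel
    ss_within = sum(((g - g.mean(axis=0)) ** 2).sum(axis=0) for g in groups)
    df_within = len(trials) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

def sequential_forward_select(scores, k):
    """Greedy stand-in: keep the k highest-ranked channels.
    A real SFS would re-train a classifier after each addition."""
    return list(np.argsort(scores)[::-1][:k])
```

With synthetic trials in which only one channel carries class information, that channel receives the highest F-score and is selected first.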
Procedia PDF Downloads 52
2727 Forest Fire Burnt Area Assessment in a Part of West Himalayan Region Using Differenced Normalized Burnt Ratio and Neural Network Approach
Authors: Sunil Chandra, Himanshu Rawat, Vikas Gusain, Triparna Barman
Abstract:
Forest fires are a recurrent phenomenon in the Himalayan region owing to the presence of vulnerable forest types, topographical gradients, climatic conditions, and anthropogenic pressure. The present study focuses on the identification of forest fire-affected areas in a small part of the West Himalayan region using a differenced normalized burnt ratio method and spectral unmixing methods. The study area has a rugged terrain with the presence of sub-tropical pine forest, montane temperate forest, and sub-alpine forest and scrub. The major causes of fires in this region are anthropogenic: human-induced fires to obtain fresh leaves, fires to scare wild animals away from agricultural crops, grazing practices within reserved forests, and fires ignited for cooking and other purposes. These fires affect a large area on the ground, necessitating precise estimation for further management and policy making. In the present study, two approaches have been used for burnt area analysis. The first uses a differenced normalized burnt ratio (dNBR) index, computed from burn ratio values generated using the Short-Wave Infrared (SWIR) and Near Infrared (NIR) bands of the Sentinel-2 image. The results of the dNBR have been compared with the outputs of the spectral unmixing methods. It was found that the dNBR produces good results in fire-affected areas with a homogeneous forest stratum and slopes of less than 5 degrees. However, in rugged terrain where the landscape is strongly influenced by topographical variation, vegetation type and tree density, the results may be strongly affected by topography, complexity in tree composition, fuel load composition, and soil moisture. 
Such variations in the factors influencing burnt area assessment may not be handled effectively by the dNBR approach commonly used over large areas. Hence, a second approach attempted in the present study uses a spectral unmixing method in which each individual pixel is tested before an information class is assigned to it. The method uses a neural network approach utilizing Sentinel-2 bands. The training and testing data are generated from the Sentinel-2 data and the national field inventory, which is then used for generating outputs using machine learning tools. The analysis of the results indicates that fire-affected regions and their severity can be better estimated using spectral unmixing methods, which have the capability to resolve noise in the data and can assign each individual pixel to the precise burnt/unburnt class.
Keywords: categorical data, log linear modeling, neural network, shifting cultivation
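As a reference for the index-based approach, the sketch below computes NBR and dNBR from reflectance arrays. The Sentinel-2 band choice (B8A for NIR, B12 for SWIR) and the severity thresholds follow the common USGS convention and are assumptions on our part, not values taken from this study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR (e.g. Sentinel-2 B8A) and SWIR (e.g. B12)
    reflectance; a small epsilon avoids division by zero."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + 1e-10)

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR.
    Higher values indicate a more severe burn."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

def severity_class(d):
    """Map a dNBR value to a burn-severity class using the widely used
    USGS-style thresholds (illustrative, not this study's calibration)."""
    bins = [-0.1, 0.1, 0.27, 0.44, 0.66]
    names = ["enhanced regrowth", "unburned", "low", "moderate-low",
             "moderate-high", "high"]
    return names[int(np.digitize(d, bins))]
```

For example, a pixel whose NIR/SWIR reflectance flips from (0.5, 0.1) before the fire to (0.2, 0.4) after it yields a dNBR of about 1.0, i.e. high severity.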
Procedia PDF Downloads 56
2726 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning
Authors: Akeel A. Shah, Tong Zhang
Abstract:
Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage, since experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine-learning-assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). 
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and a correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNET. The graph incorporates information regarding the numbers, types and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment and HOMO/LUMO.
Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning
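The Δ-ML idea of learning only the low-to-high correction can be sketched with plain ridge regression standing in for the paper's GCN. The descriptors, synthetic data and function names below are illustrative assumptions; only the structure (high ≈ low + learned correction) reflects the method described.

```python
import numpy as np

def fit_delta_model(X, y_low, y_high, lam=1e-6):
    """Delta-ML sketch: fit a ridge-regression map from descriptors X to the
    correction (y_high - y_low). Because only the correction is learned,
    far fewer expensive high-fidelity labels are needed.
    (The paper uses a graph convolutional network; ridge is a stand-in.)"""
    X = np.column_stack([X, np.ones(len(X))])  # append a bias column
    delta = np.asarray(y_high) - np.asarray(y_low)
    # Closed-form ridge solution: (X^T X + lam I)^-1 X^T delta
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ delta)
    return w

def predict_high(w, X, y_low):
    """High-fidelity estimate = cheap low-fidelity value + learned correction."""
    X = np.column_stack([X, np.ones(len(X))])
    return np.asarray(y_low) + X @ w
```

When the true high-fidelity value differs from the low-fidelity one by a relationship the correction model can represent, the combined predictor recovers it almost exactly from very few points.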
Procedia PDF Downloads 42
2725 Child Labour and Contemporary Slavery: A Nigerian Perspective
Authors: Obiageli Eze
Abstract:
Millions of Nigerian children are subjected daily to all forms of abuse, ranging from trafficking to slavery and forced labor. These underage children are taken from different parts of the country to be used as sex slaves and laborers in the big cities, killed for rituals or organ transplantation, used for money laundering, made to beg on the streets, or put to work in the fields. They are made to do inhumane jobs under degrading conditions and face all kinds of abuse at the hands of their owners, with no hope of escape. While many blame poverty or culture as a basis for human trafficking in Nigeria, the National Agency for the Prohibition of Trafficking in Persons and other Related Matters (NAPTIP) cites ignorance, desperation, and the promotion and commercialization of sex by the European Union (EU) as further causes of the outrageous rate of human trafficking in the country, as dozens of young Nigerian children and women are forced to work as prostitutes in European countries including the Netherlands, France, Italy, and Spain. In the course of searching for greener pastures, they are coerced into work they have not chosen and subjected to perpetual bondage. The Universal Declaration of Human Rights 1948 prohibits slave trade and slavery, yet despite the fact that Nigeria is a sovereign member of the United Nations and a signatory to this international instrument, child trafficking and slavery are still on the increase. This may be because the punishment for this crime in Nigeria is a maximum term of 10 years' imprisonment, with some of the worst offenders getting off with as little as 2 years' imprisonment or the option of a fine. It goes without saying that this punishment is not sufficient to deter these modern slave traders. Another major factor oiling the wheel of trafficking in the country is voodoo: victims are taken to the shrines of voodoo priests for oath-taking. 
There, underage girls and boys are made to swear that they will never reveal the identities of their traffickers to anyone if arrested, whether in the course of the journey or in the destination countries, and that they will pay off their debt. Nigeria needs tougher laws to combat human trafficking and the slave trade. There also need to be aggressive sensitization and awareness programs designed to educate and enlighten the public about the dangers faced by these victims and the need to report any suspicious activity to the authorities. This paper attempts to give an insight into the plight of underage Nigerian children trafficked and sold as slaves and to offer a more effective stand in the fight against it.
Keywords: child labor, slavery, slave trade, trafficking
Procedia PDF Downloads 505
2724 Artificial Neural Networks in Environmental Psychology: Application in Architectural Projects
Authors: Diego De Almeida Pereira, Diana Borchenko
Abstract:
Artificial neural networks are used for many applications as they are able to learn complex nonlinear relationships between input and output data. As the number of neurons and layers in a neural network increases, it is possible to represent more complex behaviors. The present study proposes that artificial neural networks are a valuable tool for architecture and engineering professionals concerned with understanding how buildings influence human and social well-being, based on theories of environmental psychology.
Keywords: environmental psychology, architecture, neural networks, human and social well-being
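The claim that added neurons and layers let a network represent more complex behaviors can be illustrated with the classic XOR function, which no single linear neuron can compute but one hidden layer can. The weights below are hand-set for the illustration; this is a didactic sketch, not part of the study.

```python
import numpy as np

def step(x):
    """Heaviside activation: 1 where the input is positive, else 0."""
    return (x > 0).astype(float)

def xor_net(x1, x2):
    """Two-layer network computing XOR with hand-set weights.
    The hidden layer builds OR and AND; the output computes OR minus AND."""
    x = np.array([x1, x2], dtype=float)
    W1 = np.array([[1.0, 1.0],    # unit 1 fires when x1 OR x2
                   [1.0, 1.0]])   # unit 2 fires when x1 AND x2
    b1 = np.array([-0.5, -1.5])
    h = step(W1 @ x + b1)         # h = [x1 OR x2, x1 AND x2]
    W2 = np.array([1.0, -1.0])    # XOR = OR and not AND
    b2 = -0.5
    return step(W2 @ h + b2)
```

A single linear unit cannot separate XOR's classes; adding the one hidden layer above is exactly the kind of capacity gain the abstract refers to.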
Procedia PDF Downloads 499
2723 Diversity in the Community - The Disability Perspective
Authors: Sarah Reker, Christiane H. Kellner
Abstract:
From the perspective of people with disabilities, inequalities can also emerge from spatial segregation, a lack of social contacts or limited economic resources. In order to reduce or even eliminate these disadvantages and increase general well-being, community-based participation as well as decentralisation efforts within exclusively residential homes are essential. The new research project “Index for participation development and quality of life for persons with disabilities” (TeLe-Index, 2014-2016), anchored at the Technische Universität München and at a large residential complex and service provider for persons with disabilities on the outskirts of Munich, therefore aims to assist the development of community-based living environments. People with disabilities should be able to participate in social life beyond the confines of the institution. Since a diverse society is one in which different individual needs and wishes can emerge and be catered to, the ultimate goal of the project is to create an environment for all citizens, regardless of disability, age or ethnic background, that accommodates their daily activities and requirements. The UN Convention on the Rights of Persons with Disabilities, which Germany has ratified, postulates the necessity of user-centered design, especially when it comes to evaluating the individual needs and wishes of all citizens; a multidimensional approach is therefore required. Based on this insight, the structure of the town-like center will be remodeled to open up the community to all people. This strategy should lead to more equal opportunities and open the way for a much more diverse community. Macro-level research questions were accordingly inspired by quality-of-life theory and formulated for different dimensions as follows: •The user dimension: what needs and necessities can we identify? Are needs person-related? Are there any options to choose from? What type of quality of life can we identify? •The economic dimension: what resources (both material and staff-related) are available in the region? (How) are they used? What costs (can) arise and what effects do they entail? •The environment dimension: what “environmental factors”, such as access (mobility and absence of barriers), prove beneficial or impedimental? In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities, and the professionalization of employees through person-centered thinking). Since we cannot present all the issues of the umbrella project within the conference framework, we will focus in more depth on one project, namely “Outpatient Housing Options for Children and Teenagers with Disabilities”. The insights obtained so far enable us to present the intermediary results of our evaluation. The most central questions pertaining to this part of the research were the following: •How have the existing network relations been designed? •What meaning (or significance) do the existing service offers and structures have for the everyday life of an external residential group? These issues underpinned the environmental analyses as well as the qualitative guided interviews and qualitative network analyses we carried out.
Keywords: decentralisation, environmental analyses, outpatient housing options for children and teenagers with disabilities, qualitative network analyses
Procedia PDF Downloads 366
2722 The Novelty of Mobile Money Solution to Ghana’s Cashless Future: Opportunities, Challenges and Way Forward
Authors: Julius Y Asamoah
Abstract:
Mobile money has seen rapid adoption in the past decade. Its emergence serves as an essential driver of financial inclusion and an innovative financial service delivery channel, especially for the unbanked population. The rising importance of mobile money services has caught the attention of policymakers and regulators seeking to understand the many issues emerging in this context, while unlocking the potential of this new technology. Regulatory responses and support are essential, requiring significant changes to current regulatory practices in Ghana. The article aims to answer the following research questions: “What risk does an unregulated mobile money service pose to consumers and the financial system?” and “What factors stimulate and hinder the introduction of mobile payments in developing countries?” The sample size used was 250 respondents selected from the study area. The study adopted an analytical approach comprising a combination of qualitative and quantitative data collection methods. Actor-network theory (ANT) is used as an interpretive lens to analyse this process; ANT helps analyse how actors form alliances and enrol other actors, including non-human actors (i.e. technology), to secure their interests. The study revealed that government regulatory policies are critical to mobile money services in developing countries. The regulatory environment should balance the need to advance access to finance with the stability of the financial system, and should draw extensively on Kenya's experience for the best strategies for the system's players. Thus, regulators need to address issues related to the enhancement of supportive regulatory frameworks. It is recommended that the government involve various stakeholders, such as mobile phone operators. 
Moreover, the national regulatory authority should create a regulatory environment that promotes fair practices and competition, raising revenues to support the key pillars of a business-enabling environment, such as infrastructure.
Keywords: actor-network theory (ANT), cashless future, developing countries, Ghana, mobile money
Procedia PDF Downloads 138
2721 Virtual Science Hub: An Open Source Platform to Enrich Science Teaching
Authors: Enrique Barra, Aldo Gordillo, Juan Quemada
Abstract:
This paper presents the Virtual Science Hub platform, an open-source platform that combines a social network, an e-learning authoring tool, a video conference service and a learning object repository for the enrichment of science teaching. These four main functionalities fit very well together. The platform was released in April 2012 and has not stopped growing since. Finally, we present the results of the surveys conducted and the statistics gathered to validate this approach.
Keywords: e-learning, platform, authoring tool, science teaching, educational sciences
Procedia PDF Downloads 397
2720 Phenotypic Characterization of Dental Pulp Stem Cells Isolated from Irreversible Pulpitis with Dental Pulp Stem Cells from Impacted Teeth
Authors: Soumya S., Manju Nidagodu Jayakumar, Vellore Kannan Gopinath
Abstract:
Dental pulp inflammation resulting from dental caries often leads to a pathologic condition known as irreversible pulpitis, which is currently managed by root canal treatment. During this procedure, the entire pulp tissue is extirpated and the canal space is filled with synthetic materials. Recent studies in stem cell biology indicate that some portion of the irreversibly inflamed pulp tissue may remain viable, containing progenitor cells with properties similar to those of mesenchymal stem cells (MSCs). Hence, we aimed to isolate dental pulp stem cells (DPSCs) from patients diagnosed with severe irreversible pulpitis and characterize the cells for MSC-specific markers. The pulp tissue was collected from the dental clinic and subjected to collagenase/dispase digestion. The isolated cells were expanded in culture, and phenotypic characterization was performed using flow cytometry. MSC-specific markers such as CD90, CD73, and CD105 were analysed along with the negative markers CD14 and CD45. The isolated cells showed positive expression of CD90 and CD105 (>95%) and CD73 (19%), and did not express the negative markers CD14 and CD45. Commercially available DPSCs from vital extracted teeth, preferably molar/wisdom teeth with a large pulp cavity or incomplete root growth in young patients (aged 15-30 years), showed more than 90% expression of all three markers CD90, CD73 and CD105, and were negative for CD14 and CD45. The DPSCs isolated from inflamed pulp tissue thus showed lower expression of CD73 than the commercially available DPSCs, whereas the other two markers showed similar percentages of positive expression. This could be attributed to the fact that the pulp cell population is very heterogeneous and pooled tissue from different patients was used. 
Hence, the phenotypic characterization and comparison with commercially available DPSCs showed that inflamed pulp tissue is a good source of MSC-like cells, which can be utilized further for regenerative applications.
Keywords: collagenase/dispase, dental pulp stem cells, flow cytometry, irreversible pulpitis
Procedia PDF Downloads 252
2719 The Behavior of Masonry Wall Constructed Using Biaxial Interlocking Concrete Block, Solid Concrete Block and Cement Sand Brick Subjected to the Compressive Load
Authors: Fauziah Aziz, Mohd.fadzil Arshad, Hazrina Mansor, Sedat Kömürcü
Abstract:
Masonry is an anisotropic and heterogeneous material due to the presence of different components within the assembly. The mortar normally plays a significant role in the compressive behavior of traditional masonry structures. The biaxial interlocking concrete block is a masonry unit based on an interlocking concept. This masonry unit can improve the quality of the construction process, reduce labor costs, reduce the need for highly skilled workmanship, and speed up construction. Typically, the interlocking concrete block masonry units on the market were designed to interlock along only one axis (x or y), are shorter in length, and have low compressive strength. In contrast, the biaxial interlocking concrete block introduced in this research follows a dry-stack concept and offers advantages over the normal interlocking concrete blocks available on the market due to its length and the geometry of its groove and tongue. This material can be used as a non-load-bearing or load-bearing wall, depending on the application of the masonry. However, little technical data has been produced before. This paper presents findings on the compressive resistance of the biaxial interlocking concrete block masonry wall compared to other traditional masonry walls. Two series of biaxial interlocking concrete block masonry walls, namely M1 and M2, and a series of solid concrete block and cement sand brick walls, M3 and M4, were tested for compressive resistance. M1 is a masonry wall of hollow biaxial interlocking concrete blocks; M2 is the grouted counterpart; M3 is a solid concrete block masonry wall; and M4 is a cement sand brick masonry wall. All the samples were tested under static compressive load. The results show that M2 has a higher compressive resistance than M1, M3, and M4. 
It shows that the compressive strength of the concrete masonry units plays a significant role in the capacity of the masonry wall.
Keywords: interlocking concrete block, compressive resistance, concrete masonry unit, masonry
Procedia PDF Downloads 167
2718 Academic Staff’s Perception and Willingness to Participate in Collaborative Research: Implication for Development in Sub-Saharan Africa
Authors: Ademola Ibukunolu Atanda
Abstract:
Research undertakings are meant to proffer solutions to issues and challenges in society, which justifies the need for research in the ivory towers. Multinational and non-governmental organisations, as well as foundations, commit financial resources to support research endeavours. In recent times, the direction of research has encouraged collaboration, whereby experts from different disciplines or specializations bring their expertise to bear on any identified problem, whether in the humanities or the sciences. However, the extent to which collaborative research undertakings are perceived and embraced by academic staff will determine the impact collaborative research has on society. To this end, this study investigated academic staff's perception of, and willingness to be involved in, collaborative research for the purpose of proffering solutions to societal problems. The study adopted a descriptive research design. The population comprised academic staff in southern Nigeria, sampled through a convenience sampling technique. The data were collected using a questionnaire titled “Perception and Willingness to Participate in Collaborative Research Questionnaire (PWPCRQ)” administered via Google Forms, and analyzed using descriptive statistics of simple percentages, means and charts. The findings showed that academic staff's readiness to participate in collaborative research is high (89%) and that they participate in collaborative research very often (51%). Academic staff were involved more in collaborative research with colleagues within their own universities (1.98) than in inter-disciplinary collaboration with colleagues outside Nigeria (1.47). Collaborative research was perceived to impact development (2.5). 
Collaborative research offers members the following benefits: aggregation of views, the building of an extensive network of contacts, enhanced sharing of skills, easier tackling of complex problems, increased visibility of the research network and of citations, and the promotion of funding opportunities. The study concluded that academic staff in universities in South-West Nigeria participate in collaborative research, but with colleagues within Nigeria rather than outside the country. Based on the findings, it was recommended that the management of universities in South-West Nigeria should encourage collaborative research with appropriate incentives.
Keywords: collaboration, research, development, participation
Procedia PDF Downloads 63
2717 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and different generations face it in entirely different ways; the involvement of narrative images provides more possibilities for narrative text. “Images”, manufactured through the processes of aperture, camera shutter and photosensitive development, are recorded and stamped on paper or displayed on a computer screen, concretely saved. They exist in different forms as files, data, or evidence, as the ultimate appearance of events. Through the interface of media and network platforms and the particular visual field of the viewer, a bodily space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet under real, full tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus; as at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes an immersive “topicality-similarity”, leading contemporary social practice communities, groups and network users into a kind of illusion without presence, i.e. a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time in digital movie editing and production (for example, slicing, rupture, setting, and resetting) are analyzed. The interactive eBook has a unique interaction of “waiting-greeting” and “expectation-response” that makes the operation of the image narrative structure open to more interpretations. The works of digital editing and interactive technology are combined, and the concepts and results are further analyzed. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media's handling of this relationship cannot be severed; this is examined through movies, interactive art, and practical case discussion and analysis. 
Audiences need to think more rationally about the authenticity of the images carried by the text.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 389
2716 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations
Authors: Zhao Gao, Eran Edirisinghe
Abstract:
The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of deep learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks, and Generative Adversarial Networks (GANs) have proven to be very effective in image generation. In this study, a trained GAN, conditioned on textual features such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map corresponding features onto text generated from verbal descriptions. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can hopefully identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus increasing response times and mitigating potentially malicious human intervention. With the publicly available CelebFaces Attributes Dataset (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images. 
Rather than a grid search, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation: witness reports describing criminals from Interpol or other law enforcement agencies are sampled, and using the descriptions provided, images are generated and compared with the ground-truth images of the criminals in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach, in the hope of proving that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
Keywords: RNN, GAN, NLP, facial composition, criminal investigation
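The two evaluation metrics named above can be sketched directly. Note the SSIM shown here is a single-window (global) variant for brevity; the standard metric averages over local windows, so this is an illustrative simplification, not the exact evaluation pipeline of the study.

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a generated composite and the
    ground-truth photo, in dB; higher means the images are closer."""
    a = np.asarray(img_a, dtype=float)
    b = np.asarray(img_b, dtype=float)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(img_a, img_b, max_val=255.0):
    """Single-window SSIM: luminance, contrast and structure comparison
    over the whole image (the standard SSIM averages local windows)."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizers
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

An image compared with itself gives infinite PSNR and SSIM of 1, while a maximally different 8-bit image pair gives a PSNR of 0 dB.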
Procedia PDF Downloads 164
2715 A Graph Theoretic Algorithm for Bandwidth Improvement in Computer Networks
Authors: Mehmet Karaata
Abstract:
Given two distinct vertices (nodes), a source s and a target t, of a graph G = (V, E), the two node-disjoint paths problem is to identify two node-disjoint paths between s ∈ V and t ∈ V. Two paths are node-disjoint if they have no common intermediate vertices. In this paper, we present an algorithm with O(m) time complexity for finding two node-disjoint paths between s and t in arbitrary graphs, where m is the number of edges. The proposed algorithm has a wide range of applications in ensuring the reliability and security of sensor, mobile, and fixed communication networks.
Keywords: disjoint paths, distributed systems, fault-tolerance, network routing, security
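The abstract does not describe the algorithm itself, so the sketch below shows one standard textbook way to solve the same problem: vertex splitting plus two max-flow augmenting paths. This is not the authors' O(m) method (the flow approach costs two BFS passes plus bookkeeping), and all function names are illustrative.

```python
from collections import deque

def two_disjoint_paths(edges, s, t):
    """Return two internally node-disjoint s-t paths in an undirected
    graph given as a list of (u, v) edges, or None if none exist."""
    cap = {}  # residual capacities between split vertices

    def add(u, v, c):
        cap.setdefault(u, {})[v] = cap.get(u, {}).get(v, 0) + c
        cap.setdefault(v, {}).setdefault(u, 0)  # residual reverse arc

    nodes = set()
    for u, v in edges:
        nodes.update((u, v))
        add((u, 'out'), (v, 'in'), 1)  # each undirected edge -> two arcs
        add((v, 'out'), (u, 'in'), 1)
    for v in nodes:
        # Vertex splitting enforces node-disjointness: capacity 1 through
        # every intermediate node, 2 through the endpoints s and t.
        add((v, 'in'), (v, 'out'), 2 if v in (s, t) else 1)

    src, snk = (s, 'in'), (t, 'out')
    flow = {}
    for _ in range(2):  # push two units of flow via BFS augmentation
        parent = {src: None}
        q = deque([src])
        while q and snk not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if snk not in parent:
            return None  # fewer than two node-disjoint paths exist
        v = snk
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= 1
            cap[v][u] += 1
            flow[(u, v)] = flow.get((u, v), 0) + 1
            flow[(v, u)] = flow.get((v, u), 0) - 1
            v = u

    paths = []
    for _ in range(2):  # decompose the flow into two vertex sequences
        path, u = [s], src
        while u != snk:
            v = next(w for w in cap[u] if flow.get((u, w), 0) > 0)
            flow[(u, v)] -= 1
            if v[1] == 'in':
                path.append(v[0])
            u = v
        paths.append(path)
    return paths
```

On a 4-cycle s-a-t-b-s the function returns the two paths through a and through b; on a simple path s-v-t it returns None, since both routes would share v.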
Procedia PDF Downloads 444
2714 Computational Team Dynamics in Student New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Engineering curricula in many schools, some in collaboration with business schools, have therefore brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions. A significant portion of the grade is placed on the semester-long teamwork so that students take it seriously. As the students work in teams and go through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team, go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user's needs. The team also needs to be very efficient in its teamwork as it works through the various stages of development, resulting in a proof-of-concept (POC) implementation or a prototype of the product. The simultaneous requirement that teams be creative and at the same time converge and work together imposes different kinds of tension on their interactions. These ideational tensions/conflicts, and sometimes relational tensions/conflicts, are inevitable. Effective teams have to manage these team dynamics, remaining resilient while still being creative.
This research paper provides a computational analysis of the teams' communication that is reflective of the team dynamics and, through a superimposition of latent semantic analysis with social network analysis, provides a computational methodology for arriving at visual interaction patterns. These team interaction patterns correlate clearly with the team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. 23 student NPD teams over 2 years of a course on Managing NPD, with a blend of engineering and business school students, are considered, and the results are presented. The analysis is also correlated with the teams' detailed and tailored individual and group feedback and self-reflection and evaluation questionnaires.
Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams
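As an illustration of the social-network-analysis half of such a methodology (not the authors' implementation), the minimal Python sketch below computes two common interaction indicators, network density and degree centrality, from a log of who-messaged-whom within a team. The message data and member names are invented for the example.

```python
from collections import defaultdict

def team_interaction_metrics(messages):
    """Compute simple social-network metrics from a team message log.

    `messages` is a list of (sender, receiver) pairs. Returns the
    network density (fraction of member pairs that ever communicated)
    and each member's degree centrality (fraction of teammates they
    communicated with) -- rough indicators of how evenly a team talks."""
    weight = defaultdict(int)  # undirected pair -> message count
    members = set()
    for a, b in messages:
        members.update((a, b))
        weight[frozenset((a, b))] += 1
    n = len(members)
    possible = n * (n - 1) / 2  # undirected pairs among n members
    density = len(weight) / possible if possible else 0.0
    degree = {m: 0 for m in members}
    for pair in weight:
        for m in pair:
            degree[m] += 1
    centrality = {m: d / (n - 1) for m, d in degree.items()} if n > 1 else {}
    return density, centrality
```

A fully connected three-person team (everyone has talked to everyone) yields density 1.0 and centrality 1.0 for each member; a hub-and-spoke pattern would instead show one high-centrality member and low density.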
Procedia PDF Downloads 117