Search results for: input randomization
1711 Analysis of DNA from Fired Cartridge Casings
Authors: S. Mawlood, L. Denanny, N. Watson, B. Pickard
Abstract:
DNA analysis has been widely accepted as providing valuable evidence concerning the identity of the source of biological traces. Our work has shown that DNA samples can survive on cartridges even after firing. The study also raised the possibility of determining other information such as the age of the donor. Such information may be invaluable in certain cases where spent cartridges from automatic weapons are left behind at the scene of a crime. In spite of the nature of touch evidence and exposure to high chamber temperatures during shooting, we were still able to retrieve enough DNA for profile typing. In order to estimate the age of the contributor, DNA methylation levels of the retrieved DNA were analyzed using the EpiTect system. However, the results were not conclusive, due to the low amount of input DNA. Keywords: DNA profile, DNA methylation, fired cartridge, touch sample
Procedia PDF Downloads 453
1710 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Peculiar Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done through Schroedinger equations with the help of powerful tools that manage them atom by atom, often requiring High Performance Computing. Here, a full portfolio of new tools is offered, conjugating statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of Machine Learning of understandable rules) with the more traditional modeling, simulation, and control theory of mixed logic-dynamic hybrid processes. The approach is quite general purpose, even though it is exemplified on a popular set of chemical physics problems. Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
Procedia PDF Downloads 32
1709 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance for many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of deep neural networks by modifying the inputs slightly. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters the image so that it moves over the decision boundary, causing the DNN to misclassify the image. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once, based on that gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also another type of attack called the targeted attack, which is designed to make the machine classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training. Specifically, instead of training the neural network with clean examples only, we can explicitly let the neural network learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively. If we utilize FGSM training as a defense method, the classification accuracy greatly improves from 39.50% to 92.31% for FGSM attacks and from 34.01% to 75.63% for PGD attacks. To further improve the classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves the accuracy by 2.7% under FGSM attacks and by 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than FGSM methods. Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
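As a rough illustration of the FGSM and PGD mechanics described in this abstract (and not the authors' MNIST/DNN setup), the Python sketch below attacks a toy logistic-regression classifier, where the loss gradient with respect to the input is available in closed form; the weights, inputs, and step sizes are assumed purely for demonstration.

```python
import numpy as np

# Toy binary logistic-regression stand-in for a DNN: p(y=1|x) = sigmoid(w.x + b)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, y):
    """Gradient of the cross-entropy loss with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w          # closed form for logistic regression

def fgsm(x, y, eps):
    """One signed-gradient step of size eps (FGSM)."""
    return x + eps * np.sign(loss_grad_wrt_x(x, y))

def pgd(x, y, eps, alpha, steps):
    """Many small signed steps, each projected back into the eps-ball (PGD)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad_wrt_x(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project onto the L-inf ball
    return x_adv

x = rng.normal(size=4)       # a clean input (stands in for an image)
y = 1                        # its true label
print("clean score:", sigmoid(w @ x + b))
print("FGSM score: ", sigmoid(w @ fgsm(x, y, eps=0.3) + b))
print("PGD score:  ", sigmoid(w @ pgd(x, y, eps=0.3, alpha=0.05, steps=20) + b))
```

For this linear toy model both attacks end up at the same corner of the epsilon-ball; on a real DNN, whose loss surface is non-linear, PGD's repeated small steps typically find stronger perturbations than the single FGSM step, which matches the accuracy drops reported in the abstract.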
Procedia PDF Downloads 196
1708 Sustainable Geographic Information System-Based Map for Suitable Landfill Sites in Aley and Chouf, Lebanon
Authors: Allaw Kamel, Bazzi Hasan
Abstract:
Municipal solid waste (MSW) generation is among the most significant sources threatening global environmental health. Solid waste management has been an important environmental problem in developing countries because of the difficulties in finding sustainable solutions for solid wastes. Therefore, more effort needs to be made to overcome this problem. Lebanon suffered a severe solid waste management crisis in 2015, and a new landfill site was proposed to solve the existing problem. The study aims to identify and locate the most suitable area to construct a landfill, taking sustainable development into consideration to overcome the present situation and protect future demands. Throughout the article, a landfill site selection methodology is discussed using Geographic Information System (GIS) and Multi Criteria Decision Analysis (MCDA). Several environmental, economic and social factors were taken as criteria for the selection of a landfill. Soil, geology, and LUC (Land Use and Land Cover) indices, together with the Sustainable Development Index, were the main inputs used to create the final map of Environmentally Sensitive Areas (ESA) for the landfill site. Different factors were determined to define each index. Input data for each factor were managed, visualized and analyzed using GIS. GIS was used as an important tool to identify suitable areas for a landfill. Spatial Analysis (SA), Analysis and Management GIS tools were implemented to produce input maps capable of identifying suitable areas related to each index. A weight was assigned to each factor within the same index, and main weights were assigned to each index used. The combination of the different index maps generates the final output map of ESA. The output map was reclassified into three suitability classes of low, moderate, and high suitability. Results showed different locations suitable for the construction of a landfill. Results also reflected the importance of GIS and MCDA in helping decision makers find a solution to solid wastes through a sanitary landfill. Keywords: sustainable development, landfill, municipal solid waste (MSW), geographic information system (GIS), multi criteria decision analysis (MCDA), environmentally sensitive area (ESA)
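A minimal sketch of the weighted-overlay step this abstract describes, using synthetic rasters and assumed weights (the study's actual factors, weights, and GIS processing are far richer): each criterion layer is combined by a weighted sum and the composite surface is reclassified into three suitability classes.

```python
import numpy as np

# Synthetic 0-1 suitability rasters for three criteria (values and weights assumed)
rng = np.random.default_rng(1)
soil    = rng.random((100, 100))
geology = rng.random((100, 100))
landuse = rng.random((100, 100))

weights = {"soil": 0.4, "geology": 0.35, "landuse": 0.25}   # must sum to 1

# Weighted linear combination -> composite environmentally-sensitive-area surface
esa = (weights["soil"] * soil
       + weights["geology"] * geology
       + weights["landuse"] * landuse)

# Reclassify into three suitability classes: 1 = low, 2 = moderate, 3 = high
classes = np.digitize(esa, bins=[1 / 3, 2 / 3]) + 1
print("cells per class:", dict(zip(*np.unique(classes, return_counts=True))))
```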
Procedia PDF Downloads 150
1707 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance
Authors: Rahul Bishnoi
Abstract:
Theatre, as Anne Ubersfeld calls it, is a paradox. At once, it is both a literary work and a physical representation. Theatre as a text is eternal, reproducible, and identical, while as a performance, theatre is momentary and never identical to the previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, or the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes' argument of discrete temporality, i.e. the author is the before and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed and then gets consumed by the reader. On the other hand, theatrical production is immediate; an actor performs and the audience witnesses it instantaneously. Time, so to speak, does not separate the author, the text, and the reader anymore. The question of authorship gets further complicated in Augusto Boal's "Theatre of the Oppressed" movement, where the audience is a direct participant like the actors in the performance. In this research, through an experimental performance, the duality of theatre is explored alongside the authorship discourse, and the conventional definition of authorship is subjected to additional complexity by erasing the distinction between an actor and the audience. The design/methodology of the experimental performance is as follows: The audience will be asked to produce a text under an anonymous virtual alias. The text, as it is being produced, will be read and performed by the actor. The audience, who are also collectively "authoring" the text, will watch this performance and write further until everyone has contributed one input each. The cycle of writing, reading, performing, witnessing, and writing will continue until the end. The intention is to create a dynamic system of writing/reading with the embodiment of the text through the actor. The actor is giving up the power to the audience to write the spoken word, stage instructions and direction while still keeping the agency of interpreting that input and performing in the chosen manner. This rapid conversation between the actor and the audience also creates a conversion of authorship. The main conclusion of this study is a perspective on the nature of dynamic authorship of theatre, containing a critical enquiry of the collaboratively produced text, an individually performed act, and a collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a 'scriptor' and breaks it further by involving the audience in the authorship as well. Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre
Procedia PDF Downloads 111
1706 MIMO PID Controller of a Power Plant Boiler–Turbine Unit
Authors: N. Ben-Mahmoud, M. Elfandi, A. Shallof
Abstract:
This paper presents a methodology to design multivariable PID controllers for multi-input and multi-output systems. The proposed control strategy, which is centralized, is a combination of PID controllers. The proportional gains in the P controllers act as tuning parameters of the resulting single-input single-output (SISO) loops in order to modify the behavior of the loops almost independently. The design procedure consists of three steps: first, an ideal decoupler including integral action is determined. Second, the decoupler is approximated with PID controllers. Third, the proportional gains are tuned to achieve the specified performance. The proposed method is applied to representative processes. Keywords: boiler turbine, MIMO, PID controller, control by decoupling, anti wind-up techniques
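To make the decoupling idea concrete, here is a tiny numeric sketch: a 2x2 steady-state gain matrix (values assumed, not the boiler-turbine model from the paper) is inverted to form a static decoupler, which stands in for the paper's ideal dynamic decoupler; after decoupling, each loop sees an essentially diagonal plant, so its PID gains can be tuned almost independently.

```python
import numpy as np

# Assumed 2x2 steady-state gain matrix of a boiler-turbine-like plant
G0 = np.array([[2.0, 0.5],
               [0.8, 1.5]])

# Static decoupler: inverse of the DC gain matrix (an ideal dynamic decoupler
# would also include integral action, as described in the abstract)
D = np.linalg.inv(G0)

# Apparent plant seen by the two SISO PID loops after decoupling
print(np.round(G0 @ D, 6))   # ~ identity: each loop can be tuned independently
```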
Procedia PDF Downloads 328
1705 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for the purpose of action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips, and we used the K-means algorithm for this segmentation; our goal is to find groups based on similarity in the video. Applying K-means clustering to all the frames is time-consuming; therefore, we started by identifying transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames. The Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as an input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map. The resulting visual map has similar pixels grouped. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal representing frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We marked the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint is an indication of the beginning of a new video segment. In the second part, for each segment from part 1, we randomly selected a 16-frame clip and then extracted spatiotemporal features for every 16 frames using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model, and we used a multi-classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%. Keywords: video segmentation, action detection, classification, Kmeans, C3D
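The first stage (filter responses fed to K-means, then a per-frame clustering score) can be sketched roughly as follows in Python; the synthetic video, the filter sigmas, and the entropy-style stand-in for the cluster score are assumptions for illustration and not the authors' exact score.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

# Synthetic grayscale video: 60 frames of 64x64 (stands in for decoded frames)
rng = np.random.default_rng(0)
video = rng.random((60, 64, 64))

def frame_features(frame):
    """Per-pixel filter-response vector: [Gaussian blur, Laplacian of Gaussian]."""
    g = gaussian_filter(frame, sigma=2.0)
    log = gaussian_laplace(frame, sigma=2.0)
    return np.stack([g.ravel(), log.ravel()], axis=1)

# Cluster the filter responses of one (transition) frame into visual groups
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(frame_features(video[0]))

def cluster_score(frame):
    """Map each pixel to its nearest cluster centre and score the frame
    (here: entropy of the cluster occupancy, a simple stand-in)."""
    labels = km.predict(frame_features(frame))
    counts = np.bincount(labels, minlength=4) / labels.size
    return -np.sum(counts * np.log(counts + 1e-12))

signal = np.array([cluster_score(f) for f in video])
# A large jump in the RMS level of `signal` would mark the start of a new segment
print(signal[:5])
```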
Procedia PDF Downloads 79
1704 Efficient Filtering of Graph Based Data Using Graph Partitioning
Authors: Nileshkumar Vaishnav, Aditya Tatu
Abstract:
An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem with a direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method for partitioning of graphs and use it to reduce the computational cost of the filtering problem. We use the example of denoising of temperature data to illustrate the efficacy of the approach. Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing
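A small sketch of the inverse-filtering problem on a toy graph (the graph, filter coefficients, and signal are assumed): the adjacency matrix acts as the shift operator, a polynomial filter H is built from known coefficients, and the unknown input is recovered from the output by a direct dense solve, i.e. the O(N³) step that graph partitioning is meant to cheapen by splitting the graph into weakly coupled blocks.

```python
import numpy as np

# Small undirected graph: the adjacency matrix A is the shift operator
N = 6
A = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]:
    A[i, j] = A[j, i] = 1.0

# Polynomial graph filter H = h0*I + h1*A + h2*A^2 with known coefficients
h = [1.0, 0.5, 0.25]
H = h[0] * np.eye(N) + h[1] * A + h[2] * (A @ A)

x = np.arange(N, dtype=float)   # unknown input graph signal (here: ground truth)
y = H @ x                       # known filtered output

# Direct inverse filtering: an O(N^3) dense solve; partitioning would instead
# solve smaller per-block systems on the partitioned graph
x_rec = np.linalg.solve(H, y)
print(np.allclose(x_rec, x))    # True
```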
Procedia PDF Downloads 313
1703 Effect of Different Phosphorus Levels on Vegetative Growth of Maize Variety
Authors: Tegene Nigussie
Abstract:
Introduction: Maize is the most domesticated of all the field crops. Wild maize has not been found to date, and there has been much speculation on its origin. Regardless of the validity of different theories, it is generally agreed that the center of origin of maize is Central America, primarily Mexico and the Caribbean. Maize in Africa is a recent introduction, although data suggest that it was present in Nigeria even before Columbus's voyages. After being taken to Europe in 1493, maize was introduced to Africa and distributed (spread) through the continent by different routes. Maize is an important cereal crop in Ethiopia; it is the primary staple food, and rural households show a strong preference for it. For human food, the important constituents of the grain are carbohydrates (starch and sugars), protein, fat or oil (in the embryo) and minerals. About 75 percent of the kernel is starch, within a range of 60-80 percent, but protein content is low (8-15%). In Ethiopia, the introduction of modern farming techniques appears to be a priority. However, the adoption of modern inputs by peasant farmers is found to be very slow; for example, the adoption rate of fertilizer, an input that is relatively widely adopted, is still very slow. Differences in socio-economic factors, including input prices and marketing, lie behind the low rate of technological adoption. Objective: The aim of the study is to determine the optimum application rate or level of different phosphorus fertilizers for the vegetative growth of maize and to identify the effect of different phosphorus rates on the growth and development of maize. Methods: The vegetative parameters (above ground) were measured on five plants randomly sampled from the middle rows of each plot. Results: The interaction of nitrogen and maize variety showed a significant (p<0.01) effect on plant height, with the application of 60 kg/ha and the BH140 maize variety in combination, and on root length with the application of 60 kg/ha of nitrogen and the BH140 variety of maize. The highest mean (12.33) of the number of leaves per plant and mean (7.1) of the number of nodes per plant can be used as an alternative for better vegetative growth of maize. Conclusion and Recommendation: Maize is one of the popular and widely cultivated crops in Ethiopia. This study was conducted to investigate the best dosage of phosphorus for vegetative growth, yield, and better quality of a maize variety and to recommend a phosphorus rate and the best variety adaptable to the specific soil condition or area. Keywords: leaf, carbohydrate protein, adoption, sugar
Procedia PDF Downloads 16
1702 Hydrotherapy with Dual Sensory Impairment (DSI) - Deaf and Blind
Authors: M. Warburton
Abstract:
Background: Case study examining hydrotherapy for a person with DSI: a 46-year-old lady, completely deaf and blind following congenital rubella syndrome. Touch becomes the primary information-gathering sense to optimise function in life. Communication is achieved via tactile finger spelling and signals onto her hand and skin. Hydrotherapy may provide a suitable mobility environment and somato-sensory input to people, and especially to DSI persons. Buoyancy, warmth, hydrostatic pressure, viscosity and turbulence are elements of hydrotherapy that may offer a DSI person somato-sensory input to stimulate the mechanoreceptors, thermoreceptors and proprioceptors and offer a unique hydro-therapeutic environment. Purpose: The purpose of this case study was to establish what measurable benefits could be achieved from hydrotherapy with a DSI person. Methods: Hydrotherapy was provided for 8 weeks, twice a week, with a 35-minute session duration. Pool temperature was 32.5 degrees centigrade and pool length 25 metres. Each session consisted of mobility encouragement and supervision, and activities to stimulate the somato-sensory system utilising the aquatic properties of buoyancy, turbulence, viscosity, warmth and hydrostatic pressure. Somato-sensory activities focused on stimulating touch and tactile exploration, including objects of various shape, size, weight, contour, texture, elasticity, pliability, softness and hardness. Outcomes were measured by the Goal Attainment Scale (GAS) and included mobility distance, attendance, and timed tactile responsiveness to varying objects. Results: Mobility distance and attendance exceeded baseline expectations. Timed tactile responsiveness to varying objects also changed positively from baseline. Average scale scores were 1.00, with an overall GAS t-score of 63.69. Conclusions: Hydrotherapy can be a quantifiable physio-therapeutic option for persons with DSI. It provides a relatively safe environment for mobility and allows the somato-sensory system to be fully engaged - important for the DSI population. Implications: Hydrotherapy can be a measurable therapeutic option for a DSI person. Physiotherapists should consider hydrotherapy for DSI people. Hydrotherapy can offer unique physical properties for the DSI population not available on land. Keywords: chronic, disability, disease, rehabilitation
Procedia PDF Downloads 359
1701 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
In engineering applications, a design has to be as close to perfect as possible for some defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is always a function called the "objective function" that is required to be maximized or minimized by choosing input parameters called "degrees of freedom" within an allowed domain called the "search space" and computing the values of the objective function for these input values. It becomes more complex when we have more than one objective for our design. As an example of a Multi-Objective Optimization Problem (MOP): a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best cases. At this point, a designer should make a decision to choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used with multi-objective optimization problems due to their robustness, simplicity, and suitability to be coupled and parallelized. Evolutionary algorithms are developed to guarantee convergence to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic optimization character. The optimization is initialized by picking random solutions from the search space, and then the solution progresses towards the optimal point by using operators such as selection, combination, cross-over and/or mutation. These operators are applied to the old solutions, the "parents", so that new sets of design variables called "children" appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbo-machinery, and automotive vehicles. Coupling of Computational Fluid Dynamics (CFD) and Multi-Objective Evolutionary Algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbo-machinery design. Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization
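A deliberately small sketch of the evolutionary loop described above (selection of non-dominated parents, crossover, mutation) on an assumed one-variable bi-objective toy problem rather than a CFD-coupled case; real MOEAs such as NSGA-II add crowding distances, constraint handling, and far more careful selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bi-objective problem on one design variable x in [0, 1]:
#   f1 = x^2          ("weight", to minimize)
#   f2 = (x - 1)^2    ("negative strength", to minimize)  -> a classic trade-off
def objectives(x):
    return np.stack([x**2, (x - 1.0)**2], axis=1)

def non_dominated(F):
    """Boolean mask of Pareto-optimal rows of the objective matrix F."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        mask[i] = not dominated
    return mask

# Simple evolutionary loop: select non-dominated parents, recombine, mutate
pop = rng.random(40)
for gen in range(50):
    parents = pop[non_dominated(objectives(pop))]
    a = rng.choice(parents, size=len(pop))
    b = rng.choice(parents, size=len(pop))
    children = 0.5 * (a + b)                               # crossover
    children += rng.normal(scale=0.05, size=len(pop))      # mutation
    pop = np.clip(children, 0.0, 1.0)

front = objectives(pop)[non_dominated(objectives(pop))]
print("approximate Pareto front points:", np.round(front[:5], 3))
```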
Procedia PDF Downloads 257
1700 Finite Element Simulation of Four Point Bending of Laminated Veneer Lumber (LVL) Arch
Authors: Eliska Smidova, Petr Kabele
Abstract:
This paper describes a non-linear finite element simulation of laminated veneer lumber (LVL) under tensile and shear loads that induce cracking along the fibers. For this purpose, we use a 2D homogeneous orthotropic constitutive model of tensile and shear fracture in timber that has been recently developed and implemented into the ATENA® finite element software by the authors. The model captures (i) material orthotropy for small deformations in both the linear and non-linear range, (ii) elastic behavior until an anisotropic failure criterion is fulfilled, (iii) inelastic behavior after the failure criterion is satisfied, (iv) different post-failure responses for cracks along and across the grain, and (v) unloading/reloading behavior. The post-cracking response is treated by a fixed smeared crack model where the Reinhardt-Hordijk function is used. The model requires in total 14 input parameters that can be obtained from standard tests, off-axis test results and iterative numerical simulation of compact tension (CT) or compact tension-shear (CTS) tests. New engineered timber composites, such as laminated veneer lumber (LVL), offer improved structural parameters compared to sawn timber. LVL is manufactured by laminating 3 mm thick wood veneers aligned in one direction using water-resistant adhesives (e.g. polyurethane). Thus, 3 main grain directions, namely longitudinal (L), tangential (T), and radial (R), are observed within the layered LVL product. The core of this work consists of 3 numerical simulations of experiments involving Radiata Pine LVL and Yellow Poplar LVL. The first analysis deals with calibration and validation of the proposed model through an off-axis tensile test (at load-grain angles of 0°, 10°, 45°, and 90°) and a CTS test (at load-grain angles of 30°, 60°, and 90°), both of which were conducted on Radiata Pine LVL. The second finite element simulation reproduces the load-CMOD curve of a compact tension (CT) test of Yellow Poplar, with the aim of obtaining cohesive law parameters to be used as input in the third finite element analysis. That is a four point bending test of a small-size arch of 780 mm span made of Yellow Poplar LVL. The arch is designed with a through crack between the two middle layers in the crown. Curved laminated beams are exposed to high radial tensile stress compared to the timber strength in radial tension in the crown area. Let us note that in this case the latter parameter stands for the tensile strength in the direction perpendicular to the grain. Standard tests deliver most of the relevant input data, whereas the traction-separation law for a crack along the grain can be obtained partly by inverse analysis of the compact tension (CT) or compact tension-shear (CTS) test. The initial crack was modeled as a narrow gap separating two layers in the middle of the arch crown. The calculated load-deflection curve is in good agreement with the experimental ones. Furthermore, the crack pattern given by the numerical simulation coincides with the most important observed crack paths. Keywords: compact tension (CT) test, compact tension shear (CTS) test, fixed smeared crack model, four point bending test, laminated arch, laminated veneer lumber LVL, off-axis test, orthotropic elasticity, orthotropic fracture criterion, Radiata Pine LVL, traction-separation law, yellow poplar LVL, 2D constitutive model
Procedia PDF Downloads 290
1699 Development of Coir Reinforced Composite for Automotive Parts Application
Authors: Okpala Charles Chikwendu, Ezeanyim Okechukwu Chiedu, Onukwuli Somto Kenneth
Abstract:
The demand for lightweight and fuel-efficient automobiles has led to the use of fiber-reinforced polymer composites in place of traditional metal parts. Coir, a natural fiber, offers qualities such as low cost, good tensile strength, and biodegradability, making it a potential filler material for automotive components. However, poor interfacial adhesion between coir and polymeric matrices has been a challenge. To address the poor interfacial adhesion with polymeric matrices, due to the fiber's moisture content and method of preparation, the extracted coir was chemically treated using NaOH. To develop a side view mirror encasement, the mechanical effects of fiber percentage composition, fiber length and percentage composition of epoxy in a coir fiber reinforced composite were investigated; polyester was adopted as the resin for the mold, while epoxy was used for the product. Coir served as the filler material for the product. Specimens with varied compositions of fiber loading (15, 30 and 45)%, length (10, 15, 20, 30 and 45) mm, and (55, 70, 85)% weight of epoxy resin were fabricated using the hand lay-up technique and were later subjected to mechanical tests (tensile, flexural and impact). The results of the mechanical tests showed that the optimal solution for the input factors is coir at 45%, epoxy at 54.543%, and a 45 mm coir length, which was used for the development of a vehicle's side view mirror encasement. The optimal solutions for the response parameters are 49.333 MPa for tensile strength, 57.118 MPa for flexural strength, 34.787 kJ/m² for impact strength, 4.788 GPa for Young's modulus, 4.534 kN for stress, and 20.483 mm for strain. The models that were developed using Design Expert software revealed that the input factors can achieve the response parameters in the system with 94% desirability. The study showed that coir is quite durable as a filler material in an epoxy composite for automobile applications and that fiber loading and length have a significant effect on the mechanical behavior of coir fiber-reinforced epoxy composites. The coir's low density, considerable tensile strength, and biodegradability contribute to its eco-friendliness and potential for reducing the environmental hazards of synthetic automotive components. Keywords: coir, composite, coir fiber, coconut husk, polymer, automobile, mechanical test
Procedia PDF Downloads 64
1698 Edge Detection in Low Contrast Images
Authors: Koushlendra Kumar Singh, Manish Kumar Bajpai, Rajesh K. Pandey
Abstract:
The edges of low contrast images are not clearly distinguishable to the human eye, and it is difficult to find the edges and boundaries in them. The present work encompasses a new approach for low contrast images. A Chebyshev polynomial based fractional order filter has been used for the filtering operation on an image. Preprocessing has been performed on the input image by this filter. The Laplacian of Gaussian method has then been applied to the preprocessed image for edge detection. The algorithm has been tested on two test images. Keywords: low contrast image, fractional order differentiator, Laplacian of Gaussian (LoG) method, Chebyshev polynomial
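A minimal sketch of the overall pipeline on a synthetic low-contrast image: a pre-filtering step followed by Laplacian of Gaussian and zero-crossing detection. Note that a plain Gaussian smoother is used here purely as a stand-in for the paper's Chebyshev-polynomial fractional-order filter, and the image, sigmas, and threshold are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

# Synthetic low-contrast image: a faint bright square on a dark background
img = np.full((128, 128), 0.40)
img[40:90, 40:90] = 0.45
img += np.random.default_rng(0).normal(scale=0.01, size=img.shape)

# Preprocessing stand-in (the paper uses a Chebyshev-polynomial fractional-order filter)
pre = gaussian_filter(img, sigma=1.0)

# Laplacian of Gaussian, followed by zero-crossing detection to mark edges
log = gaussian_laplace(pre, sigma=2.0)
zero_cross = (np.sign(log[:-1, :-1]) != np.sign(log[1:, :-1])) | \
             (np.sign(log[:-1, :-1]) != np.sign(log[:-1, 1:]))
edges = np.zeros_like(img, dtype=bool)
edges[:-1, :-1] = zero_cross & (np.abs(log[:-1, :-1]) > 1e-4)
print("edge pixels found:", int(edges.sum()))
```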
Procedia PDF Downloads 638
1697 Automatic Intelligent Analysis of Malware Behaviour
Authors: Hermann Dornhackl, Konstantin Kadletz, Robert Luh, Paul Tavolato
Abstract:
In this paper we describe the use of formal methods to model malware behaviour. The modelling of harmful behaviour rests upon syntactic structures that represent malicious procedures inside malware. The malicious activities are modelled by a formal grammar, where the components of API calls are the terminals and the sets of API calls used in combination to achieve a goal are designated as non-terminals. The combination of different non-terminals in various ways and tiers makes up the attack vectors that are used by harmful software. Based on these syntactic structures, a parser can be generated which takes execution traces as input for pattern recognition. Keywords: malware behaviour, modelling, parsing, search, pattern matching
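To give a feel for the idea, here is a much-simplified Python sketch in which named "non-terminals" are sequences of API-call "terminals" and an execution trace is scanned for them; the rule names and API-call sequences are invented for illustration and are not the grammar from the paper, and a real generated parser would handle nesting, interleaving, and tiers of non-terminals.

```python
# Toy "grammar": each non-terminal is a named sequence of API-call terminals
RULES = {
    "DROP_FILE":   ["CreateFile", "WriteFile", "CloseHandle"],
    "PERSISTENCE": ["RegOpenKey", "RegSetValue"],
}

def find_patterns(trace):
    """Scan an execution trace (list of API-call names) for rule matches."""
    hits = []
    for name, seq in RULES.items():
        for start in range(len(trace) - len(seq) + 1):
            if trace[start:start + len(seq)] == seq:
                hits.append((name, start))
    return hits

trace = ["LoadLibrary", "CreateFile", "WriteFile", "CloseHandle",
         "RegOpenKey", "RegSetValue", "ExitProcess"]
print(find_patterns(trace))   # [('DROP_FILE', 1), ('PERSISTENCE', 4)]
```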
Procedia PDF Downloads 334
1696 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. In addition, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NN) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil from the focal-plane image directly, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e. the Point Spread Function (PSF) of the optical system, as an input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows some promising results (about 2 nm RMS residual WFE, which is below lambda/50, with 40-100 nm RMS of input WFE) using a relatively fast computational time of less than 30 ms, which translates to a small computational burden. These results allow further study for higher aberrations and noise. Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
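A minimal sketch of the focal-plane-image-to-piston regression idea in Python/PyTorch; the layer sizes, image size, number of segments, and training data below are assumptions for illustration and do not reproduce the paper's VGG-net configuration or its simulated PSFs.

```python
import torch
import torch.nn as nn

N_SEGMENTS = 6   # assumed number of pupil segments

# Small VGG-style regressor: focal-plane PSF image in, per-segment piston errors out
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, N_SEGMENTS),          # one piston value per segment
)

psf = torch.randn(8, 1, 64, 64)          # batch of (here random) simulated PSFs
true_pistons = torch.randn(8, N_SEGMENTS)

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                        # a few illustrative training steps
    opt.zero_grad()
    loss = loss_fn(model(psf), true_pistons)
    loss.backward()
    opt.step()
print("training loss:", float(loss))
```

Once trained on physically simulated PSF/phase pairs, a forward pass of such a network is a single matrix-multiplication pipeline, which is consistent with the sub-30 ms inference time quoted in the abstract.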
Procedia PDF Downloads 106
1695 OCR/ICR Text Recognition Using ABBYY FineReader as an Example Text
Authors: A. R. Bagirzade, A. Sh. Najafova, S. M. Yessirkepova, E. S. Albert
Abstract:
This article describes a text recognition method based on Optical Character Recognition (OCR). The features of the OCR method were examined using the ABBYY FineReader program. It describes automatic text recognition in images. OCR is necessary because optical input devices can only transmit raster graphics as a result. Text recognition describes the task of recognizing letters shown as such, identifying them and assigning them a numerical value in accordance with the usual text encoding (ASCII, Unicode). The peculiarity of this study, conducted by the authors using the example of ABBYY FineReader, was confirmed and shown in practice: the improvement of digital text recognition platforms developed for electronic publication. Keywords: ABBYY FineReader system, algorithm symbol recognition, OCR/ICR techniques, recognition technologies
Procedia PDF Downloads 169
1694 Ecophysiological Features of Acanthosicyos horridus (!Nara) to Survive the Namib Desert
Authors: Jacques M. Berner, Monja Gerber, Gillian L. Maggs-Kolling, Stuart J. Piketh
Abstract:
The enigmatic melon species, Acanthosicyos horridus Welw. ex Hook. f., locally known as !nara, is endemic to the hyper-arid Namib Desert, where it thrives in sandy dune areas and dry river banks. The Namib Desert is characterized by extreme weather conditions which include high temperatures, very low rainfall, and extremely dry air. Plants and animals that have made the Namib Desert their home are dependent on non-rainfall water inputs, like fog, dew and water vapor, for survival. Fog is believed to be the most important non-rainfall water input for most of the coastal Namib Desert and is a lifeline to many Namib plants and animals. It is commonly assumed that the !nara plant is adapted to and dependent upon coastal fog events. The !nara plant shares many comparable adaptive features with other organisms that are known to exploit fog as a source of moisture. These include groove-like structures on the stems and the cone-like structures of thorns. These structures are believed to be the driving forces behind directional water flow that allows plants to take advantage of fog events. The !nara-fog interaction was investigated in this study to determine the dependence of !nara on these fog events, as this would illustrate strategies to benefit from non-rainfall water inputs. The direct water uptake capacity of !nara shoots was investigated through absorption tests. Furthermore, the movement and behavior of fluorescent water droplets on a !nara stem were investigated through time-lapse macrophotography. The shoot water potential was measured to investigate the effect of fog on the water status of !nara stems. These tests were used to determine whether the morphology of !nara has evolved to exploit fog as a non-rainfall water input and whether the !nara plant has adapted physiologically in response to fog. Chlorophyll a fluorescence was used to compare the photochemical efficiency of !nara plants on days with fog events to that on non-foggy days. The results indicate that !nara plants do have the ability to take advantage of fog events, as commonly believed. However, the !nara plant did not exhibit visible signs of drought stress, and this, together with the strong shoot water potential, indicates that these plants are reliant on permanent underground water sources. Chlorophyll a fluorescence data indicated that temperature stress and wind were some of the main abiotic factors influencing the plants' overall vitality. Keywords: Acanthosicyos horridus, chlorophyll a fluorescence, fog, foliar absorption, !nara
Procedia PDF Downloads 159
1693 Quantum Graph Approach for Energy and Information Transfer through Networks of Cables
Authors: Mubarack Ahmed, Gabriele Gradoni, Stephen C. Creagh, Gregor Tanner
Abstract:
High-frequency cables commonly connect modern devices and sensors. Interestingly, the proportion of electric components is rising fast in an attempt to achieve lighter and greener devices. Modelling the propagation of signals through these cable networks in the presence of parameter uncertainty is a daunting task. In this work, we study the response of high-frequency cable networks using both Transmission Line and Quantum Graph (QG) theories. We have successfully compared the two theories in terms of reflection spectra using measurements on real, lossy cables. We have derived a generalisation of the vertex scattering matrix to include non-uniform networks - networks of cables with different characteristic impedances and propagation constants. The QG model implicitly takes into account the pseudo-chaotic behavior, at the vertices, of the propagating electric signal. We have successfully compared the asymptotic growth of the eigenvalues of the Laplacian with the predictions of Weyl's law. We investigate the nearest-neighbour level-spacing distribution of the resonances and compare our results with the predictions of Random Matrix Theory (RMT). To achieve this, we compare our graphs with the generalisation of the Wigner distribution for open systems. The problem of scattering from networks of cables can also provide an analogue model for wireless communication in highly reverberant environments. In this context, we provide a preliminary analysis of the statistics of communication capacity for communication across cable networks, whose eventual aim is to enable detailed laboratory testing of information transfer rates using software defined radio. We specialise this analysis in particular to the case of MIMO (Multiple-Input Multiple-Output) protocols. We have successfully validated our QG model with both the TL model and laboratory measurements. The growth of the eigenvalues compares well with Weyl's law, and the level-spacing distribution agrees well with RMT predictions. The results we achieved in the MIMO application compare favourably with the predictions of parallel ongoing research (sponsored by NEMF21). Keywords: eigenvalues, multiple-input multiple-output, quantum graph, random matrix theory, transmission line
Procedia PDF Downloads 174
1692 The Relevance of Intellectual Capital: An Analysis of Spanish Universities
Authors: Yolanda Ramirez, Angel Tejada, Agustin Baidez
Abstract:
In recent years, intellectual capital reporting in higher education institutions has been acquiring increasing importance worldwide. Intellectual capital approaches become critical at universities, mainly due to the fact that knowledge is the main output as well as input in these institutions. Universities produce knowledge, either through scientific and technical research (the results of investigation, publications, etc.) or through teaching (students trained and productive relationships with their stakeholders). The purpose of the present paper is to identify the intangible elements about which university stakeholders demand most information. The results of a study done at Spanish universities are used to see which groups of universities have stakeholders who are more proactive towards the disclosure of intellectual capital. Keywords: intellectual capital, universities, Spain, cluster analysis
Procedia PDF Downloads 510
1691 Cardiac Rehabilitation Program and Health-Related Quality of Life; A Randomized Control Trial
Authors: Zia Ul Haq, Saleem Muhammad, Naeem Ullah, Abbas Shah, Abdullah Shah
Abstract:
Pakistan, being a developing country, is facing a double burden of communicable and non-communicable diseases. Secondary prevention of ischemic heart disease in developing countries is a dire need for public health specialists, clinicians and policy makers. There is some evidence that psychotherapeutic measures, including psychotherapy, recreation, exercise and stress management training, have a positive impact on the secondary prevention of cardiovascular diseases, but there are some contradictory findings as well. A cardiac rehabilitation program (CRP) has not yet been fully implemented in Pakistan. Psychological, physical and specific health-related quality of life (HRQoL) outcomes need assessment with respect to its practicality, effectiveness, and success. Objectives: To determine the effect of a cardiac rehabilitation program (CRP) on the health-related quality of life (HRQoL) measures of post-MI patients compared to usual care. Hypothesis: Post-MI patients who receive the intervention (CRP) will have better HRQoL compared to those who receive the usual care. Methods: The randomized control trial was conducted at the Cardiac Rehabilitation Unit of Lady Reading Hospital (LRH), Peshawar. LRH is the biggest hospital of the province of Khyber Pakhtunkhwa (KP). A total of 206 participants who had a recent first myocardial infarction were inducted into the study. Participants were randomly allocated into two groups, i.e. a usual care group (UCG) and a cardiac rehabilitation group (CRG), by the permuted-block randomization (PBR) method. CRP was conducted in the CRG in two phases. Three HRQoL outcomes, i.e. the general health questionnaire (GHQ), self-rated health (SRH) and the MacNew quality of life after myocardial infarction (MacNew QLMI), were assessed at baseline and at follow-up visits in both groups. Data were entered and analyzed with appropriate statistical tests in STATA version 12. Results: A total of 195 participants were assessed at the follow-up period due to loss to follow-up. The mean age of the participants was 53.66 ± 8.3 years. Males were dominant in both groups, i.e. 150 (76.92%). Regarding educational status, the majority of the participants were illiterate in both groups, i.e. 128 (65.64%). Surprisingly, 139 (71.28%) of all participants were non-smokers. The comorbid status was positive in 120 (61.54%) of all the patients. The SRH at follow-up in the UCG and CRG was 4.06 (95% CI: 3.93, 4.19) and 2.36 (95% CI: 2.2, 2.52), respectively (p<0.001). The GHQ at follow-up in the UCG and CRG was 20.91 (95% CI: 18.83, 21.97) and 7.43 (95% CI: 6.59, 8.27), respectively (p<0.001). The MacNew QLMI at follow-up in the UCG and CRG was 3.82 (95% CI: 3.7, 3.94) and 5.62 (95% CI: 5.5, 5.74), respectively (p<0.001). All the HRQoL measures showed strongly significant improvement in the CRG at the follow-up period. Conclusion: HRQoL improved in post-MI patients after comprehensive CRP. Education of the patients and their supervision are needed when they are involved in their rehabilitation activities. It is concluded that establishing CRP in cardiac units, recruiting post-discharge MI patients and offering them CRP does not impose high costs and can result in significant improvement in HRQoL measures. Trial registration no: ACTRN12617000832370 Keywords: cardiovascular diseases, cardiac rehabilitation, health-related quality of life, HRQoL, myocardial infarction, quality of life, QoL, rehabilitation, randomized control trial
Procedia PDF Downloads 228
1690 Application of Artificial Neural Network to Prediction of Future Academic Performance of Students
Authors: J. K. Alhassan, C. S. Actsu
Abstract:
This study is on the prediction of the future performance of undergraduate students with Artificial Neural Networks (ANN). With the growing decline in the quality of academic performance of undergraduate students, it has become essential to predict students' future academic performance early, in their first and second years of courses, and to take the necessary precautions using such prediction-based information. A feed-forward multilayer neural network model was used to train and develop the network, and the test was carried out with some of the input variables. An accuracy of 80% was obtained from the test, with an average error of 0.009781. Keywords: academic performance, artificial neural network, prediction, students
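A minimal Python sketch of a feed-forward multilayer network used this way; the synthetic student records, feature names, and network sizes are assumed for illustration and are not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic student records: [year-1 GPA, year-2 GPA, attendance rate] -> pass/fail later
rng = np.random.default_rng(0)
X = rng.random((300, 3))
y = (0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * X[:, 2] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Feed-forward multilayer perceptron, trained with back-propagation
clf = MLPClassifier(hidden_layer_sizes=(10, 5), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(clf.score(X_te, y_te), 3))
```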
Procedia PDF Downloads 470
1689 Fuzzy Logic in Detecting Children with Behavioral Disorders
Authors: David G. Maxinez, Andrés Ferreyra Ramírez, Liliana Castillo Sánchez, Nancy Adán Mendoza, Carlos Aviles Cruz
Abstract:
This research describes the use of fuzzy logic in the detection, assessment, analysis and evaluation of children with behavioral disorders. It shows how to acquire and analyze data that are ambiguous, vague and full of uncertainty, coming from the input variables, in order to get an accurate assessment result for each of the typologies presented by children with behavior problems. The behavior disorders analyzed in this paper are: hyperactivity (H), attention deficit with hyperactivity (DAH), conduct disorder (TD) and attention deficit (AD). Keywords: alteration, behavior, centroid, detection, disorders, economic, fuzzy logic, hyperactivity, impulsivity, social
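To illustrate the general fuzzy-logic machinery behind such an assessment (the membership functions, rules, and variable names below are invented for the sketch and are not the authors' system): crisp questionnaire scores are fuzzified, rules are fired with min as the AND operator, and the aggregated output set is defuzzified with the centroid method named in the keywords.

```python
import numpy as np

# Universe of discourse for a severity score in [0, 10]
x = np.linspace(0.0, 10.0, 1001)

def tri(u, a, b, c):
    """Triangular membership with feet a, c and peak b (feet may lie outside [0, 10])."""
    return np.clip(np.minimum((u - a) / (b - a), (c - u) / (c - b)), 0.0, 1.0)

medium = tri(x, 2.0, 5.0, 8.0)
high   = tri(x, 5.0, 10.0, 15.0)   # right foot past the universe, so it acts as a shoulder

# Crisp but uncertain questionnaire scores for two input variables
impulsivity, inattention = 6.8, 4.2

# Fire two illustrative rules with min as the AND operator
fire_high = min(np.interp(impulsivity, x, high),   np.interp(inattention, x, medium))
fire_med  = min(np.interp(impulsivity, x, medium), np.interp(inattention, x, medium))

# Aggregate the clipped output sets and defuzzify with the centroid method
out = np.maximum(np.minimum(high, fire_high), np.minimum(medium, fire_med))
centroid = np.sum(x * out) / np.sum(out)
print("crisp severity score:", round(float(centroid), 2))
```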
Procedia PDF Downloads 565
1688 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in noise assessment procedures where noise assessments by different laboratories were performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that, although good input georeferenced data for setting up an acoustic model exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods and results of predictive noise assessments for two planned industrial projects, both of which were done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the validation of the acoustic models was performed by noise measurements of surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically. The planned noise sources were described and digitized differently. Differences in noise assessment results between different laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty range of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing noise sources greatly increased the comparability of the noise modelling results. In both cases of collaborative noise modelling for the existing motorway and railway, the modelling results of the different laboratories were comparable. Differences in noise modelling results between different laboratories were below 5 dBA, which was the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict noise emissions of planned projects, since, due to the complexity of the procedure, they are not applied strictly; 2) Noise measurements are an important tool to minimize noise assessment errors for planned projects and should, in cases of predictive noise modelling, be performed at least for validation of the acoustic model; 3) National guidelines should be drawn up on the appropriate data, methods, noise source digitalization, validation of the acoustic model, etc., in order to unify the predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects. Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
Procedia PDF Downloads 234
1687 Design and Implementation of an Image Based System to Enhance the Security of ATM
Authors: Seyed Nima Tayarani Bathaie
Abstract:
In this paper, an image-receiving system was designed and implemented through the optimization of object detection algorithms using Haar features. This optimized algorithm served for face detection and eye detection separately; cascading them then led to a clear image of the user. Utilization of this feature brought about higher security by preventing fraud, since services will be given to the user only on condition that a clear image of his face has already been captured, which excludes inappropriate persons. In order to expedite processing and eliminate unnecessary operations, the input image was compressed, a motion detection function was included in the program, and the detection window size was confined. Keywords: face detection algorithm, Haar features, security of ATM
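A minimal sketch of the Haar-feature detection step using the stock OpenCV cascades (not the authors' optimized classifiers); the camera-frame file name and the acceptance rule requiring two visible eyes are assumptions for illustration.

```python
import cv2

# Standard Haar cascades shipped with opencv-python (stock models, not optimized ones)
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

frame = cv2.imread("atm_user.jpg")              # assumed camera frame on disk
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Confining the detection window size (minSize) speeds up the scan
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                      minNeighbors=5, minSize=(80, 80))
for (x, y, w, h) in faces:
    roi = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=5)
    # Accept the frame (i.e. allow the transaction) only if both eyes are visible
    if len(eyes) >= 2:
        print("clear face captured at", (x, y, w, h))
```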
Procedia PDF Downloads 420
1686 Low Power, Highly Linear, Wideband LNA in Wireless SOC
Authors: Amir Mahdavi
Abstract:
In this paper a highly linear CMOS low noise amplifier (LNA) for ultra-wideband (UWB) applications is proposed. The proposed LNA uses a linearization technique to improve the second- and third-order intercept points (IIP3). The linearity is improved by removing the common-mode section of all intermodulation components from the cascade topology current, with optimization of the biasing current using symmetrical and asymmetrical biasing circuits. Simulation results show that the maximum gain and noise figure are 6.9 dB and 3.03-4.1 dB over 3.1-10.6 GHz, respectively. The power consumption of the LNA core and the IIP3 are 2.64 mW and +4.9 dBm, respectively. The wideband input impedance matching of the LNA is obtained by employing a degenerating inductor (|S11| < -9.1 dB). The proposed UWB LNA circuit is implemented using 0.18 μm CMOS technology. Keywords: highly linear LNA, low-power LNA, optimal bias techniques
Procedia PDF Downloads 281
1685 Economic Analysis of Endogenous Growth Model with ICT Capital
Authors: Shoji Katagiri, Hugang Han
Abstract:
This paper clarifies the role of ICT capital in economic growth. Although ICT contributes remarkably to economic growth, there are few studies on ICT capital in the ICT sector from a theoretical point of view. In this paper, the production function of ICT, which is used as an input of the intermediate good in the final good and ICT sectors, is incorporated into our model. In this setting, we analyze the role of ICT on the balanced growth path and show the possibility of general equilibrium solutions for this model. Through simulation of the equilibrium solutions, we find that when ICT impacts the economy and economic growth increases, it is necessary that increases in efficiency in the ICT sector and in the accumulation of non-ICT and ICT capital occur simultaneously. Keywords: endogenous economic growth, ICT, intensity, capital accumulation
Procedia PDF Downloads 456
1684 Number of Necessary Parameters for Parametrization of Stabilizing Controllers for Two Times Two RHinf Systems
Authors: Kazuyoshi Mori
Abstract:
In this paper, we consider the number of parameters for the parametrization of stabilizing controllers for RHinf systems of size 2 × 2. Fortunately, any plant of this model admits a doubly coprime factorization. Thus we can use the Youla parametrization to parametrize the stabilizing controllers. However, the Youla parametrization does not by itself give the minimal number of parameters. This paper shows that the minimal number of parameters is four. As a result, we show that the Youla parametrization naturally gives the parametrization of stabilizing controllers with the minimal number of parameters. Keywords: RHinf, parametrization, number of parameters, multi-input, multi-output systems
Procedia PDF Downloads 410
1683 Geochemical Characteristics of Aromatic Hydrocarbons in the Crude Oils from the Chepaizi Area, Junggar Basin, China
Authors: Luofu Liu, Fei Xiao Jr., Fei Xiao
Abstract:
Through gas chromatography-mass spectrometry (GC-MS) analysis, the composition and distribution characteristics of aromatic hydrocarbons in the Chepaizi area of the Junggar Basin were analyzed in detail. Based on that, the biological input and maturity of the crude oils and the sedimentary environment of the corresponding source rocks were determined, and the crude oils were divided into origin types. The results show that there are three types of crude oils in the study area: Type I, Type II and Type III oils. The crude oils from the 1st member of the Neogene Shawan Formation are the Type I oils; the crude oils from the 2nd member of the Neogene Shawan Formation are the Type II oils; the crude oils from the Cretaceous Qingshuihe and Jurassic Badaowan Formations are the Type III oils. The Type I oils show a unimodal distribution in the late retention time of the total aromatic hydrocarbon chromatogram. The content of the triaromatic steroid series is high, and the content of dibenzofuran is low. Maturity parameters related to alkyl naphthalene, methylphenanthrene and alkyl dibenzothiophene all indicate low maturity for the Type I oils. The Type II oils also show a unimodal distribution, in the early retention time of the total aromatic hydrocarbon chromatogram. The content of the naphthalene and phenanthrene series is high, and the content of dibenzofuran is medium. The content of polycyclic aromatic hydrocarbons representing terrestrial organic matter is high. The aromatic maturity parameters indicate high maturity for the Type II oils. The Type III oils show a bimodal distribution in the total aromatic hydrocarbon chromatogram. The contents of the naphthalene, phenanthrene, and dibenzofuran series are high. The aromatic maturity parameters indicate medium maturity for the Type III oils. The correlation results of the triaromatic steroid series fingerprints show that the Type I and Type III oils have a similar source and are both from the Permian Wuerhe source rocks. Because of strong biodegradation and mixing from another source, the Type I oils are very different from the Type III oils in aromatic hydrocarbon characteristics. The Type II oils have the typical characteristics of terrestrial organic matter input under an oxidative environment and are coal-derived oils mainly generated by the mature Jurassic coal measure source rocks. However, the overprinting effect from the low-maturity Cretaceous source rocks changed the original distribution characteristics of the aromatic hydrocarbons to some degree. Keywords: oil source, geochemistry, aromatic hydrocarbons, crude oils, Chepaizi area, Junggar Basin
Procedia PDF Downloads 355
1682 Interactive Multiple Functions User Interface
Authors: Manjit Singh Sidhu, Waleed Maqableh, Jee Geak Ying
Abstract:
Tangible user interfaces (TUI) that employ markers in the augmented reality (AR) environment have hampered the interactivity between the user and the software application. This is because the user lacks focus on visualizing the contents due to the interaction mechanisms, whereby multiple markers may need to be used to perform a particular function. In this research, we have designed a novel TUI user interface where multiple functions can be triggered, similar to a natural keyboard, thus allowing the user to focus more on the digital contents such as 2D/3D, text input, animation and sound. Test results of the user interface with potential users and HCI experts revealed that the multiple functions user interface was new, preferred and appreciated more, as opposed to the marker-based user interface. Keywords: multimedia, augmented reality, engineering, user interface, visualization
Procedia PDF Downloads 450