Search results for: canopy characters classification
614 A Neuropsychological Investigation of the Relationship between Anxiety Levels and Loss of Inhibitory Cognitive Control in Ageing and Dementia
Authors: Nasreen Basoudan, Andrea Tales, Frederic Boy
Abstract:
Non-clinical anxiety may comprise state anxiety, temporarily experienced anxiety related to a specific situation, and trait anxiety, a longer-lasting response or a general disposition to anxiety. While temporary and occasional anxiety, whether as a mood state or a personality dimension, is normal, non-clinical anxiety may influence many more components of information processing than previously recognized. In ageing and dementia-related research, disease characterization now involves attempts to understand a much wider range of brain functions, such as loss of inhibitory control, as against the more common focus on memory and cognition. However, in many studies, the tendency has been to include individuals with clinical anxiety disorders while excluding persons with lower levels of state or trait anxiety. Loss of inhibitory cognitive control can lead to behaviors such as aggression, reduced sensitivity to others, and sociopathic thoughts and actions. Anxiety has also been linked to inhibitory control, with research suggesting that people with anxiety are less capable of inhibiting their emotions than the average person. This study investigates the relationship between anxiety and loss of inhibitory control in younger and older adults, using a variety of questionnaires and computer-based tests. Based on the premise that, irrespective of classification, anxiety is associated with a wide range of physical, affective, and cognitive responses, this study explores evidence indicative of the potential influence of anxiety per se on loss of inhibitory control, in order to contribute to discussion and appropriate consideration of anxiety-related factors in methodological practice.
Keywords: anxiety, ageing, dementia, inhibitory control
Procedia PDF Downloads 240
613 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over a huge amount of drug inventory in the pharmacy department of a hospital, the medicines are generally categorized first on the basis of their cost, 'ABC' (Always Better Control), and then on the basis of their criticality, 'VED' (Vital, Essential, Desirable), for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, the hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of each and every hospital is to provide better services to the patients under certain limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services we can achieve a more effective way of inventory management under limited resources and can provide better health care services. Secondary data have been collected from a hospital to give empirical evidence.
Keywords: ABC-VED inventory classification, multi-item inventory problem, nonlinear programming technique, optimization of EOQ
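The budget-constrained multi-item EOQ idea described above can be sketched numerically. The sketch below is illustrative only, not the paper's exact model: the item data are invented, the constraint caps average inventory investment rather than wastage, and the Lagrange-multiplier bisection is one standard way to handle such a constraint.

```python
import math

def constrained_eoq(demand, order_cost, hold_cost, unit_cost, budget):
    """Order quantities for several items that minimize ordering plus
    holding cost, subject to a cap on average inventory investment.
    Solved with a Lagrange multiplier found by bisection."""
    def q(lam):
        # Q_i = sqrt(2 D_i S / (H_i + 2 lam c_i)) from the first-order conditions
        return [math.sqrt(2 * d * order_cost / (h + 2 * lam * c))
                for d, h, c in zip(demand, hold_cost, unit_cost)]
    def invest(lam):
        # average capital tied up in stock: sum c_i * Q_i / 2
        return sum(c * qi / 2 for c, qi in zip(unit_cost, q(lam)))
    if invest(0.0) <= budget:          # unconstrained EOQ already feasible
        return q(0.0)
    lo, hi = 0.0, 1.0
    while invest(hi) > budget:         # grow the bracket until feasible
        hi *= 2
    for _ in range(100):               # bisect on the multiplier
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if invest(mid) > budget else (lo, mid)
    return q(hi)

# Hypothetical two-drug example: annual demands, shared order cost,
# per-unit holding costs, unit prices, and an investment budget.
qs = constrained_eoq([1200, 500], 50, [2, 4], [10, 25], 300)
```

The returned quantities satisfy the budget with near equality, since the unconstrained optimum is infeasible here.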
Procedia PDF Downloads 256
612 Phosphate Bonded Hemp (Cannabis sativa) Fibre Composites
Authors: Stephen O. Amiandamhen, Martina Meinken, Luvuyo Tyhoda
Abstract:
The properties of hemp (Cannabis sativa) in phosphate bonded composites were investigated in this research. Hemp hurds were collected from the Hemporium institute for research, South Africa. The hurds were air-dried and shredded using a hammer mill. The shives were screened into different particle sizes and were treated separately with 5% solutions of acetic anhydride and sodium hydroxide. The binding matrix was prepared using a reactive magnesia, phosphoric acid, class S fly ash and unslaked lime. The treated and untreated hemp fibers were mixed thoroughly in different ratios with the inorganic matrix. Boric acid and excess water were used to retard and control the rate of the reaction and the setting of the binder. The hemp composite was formed in a rectangular mold and compressed at room temperature at a pressure of 100 kPa. After de-molding, the composites were cured in a conditioning room for 96 h. Physical and mechanical tests were conducted to evaluate the properties of the composites. A central composite design (CCD) was used to determine the best conditions for optimizing the performance of the composites. Thereafter, these combinations were applied in the production of the composites, and the properties were evaluated. Scanning electron microscopy (SEM) was used to examine the behavior of the composites in detail, while X-ray diffractometry (XRD) was used to analyze the reaction pathway in the composites. The results revealed that all properties of phosphate bonded hemp composites exceeded the LD-1 grade classification of particle boards. The proposed product can be used for ceilings, partitioning, wall claddings and underlayment.
Keywords: CCD, fly ash, magnesia, phosphate bonded hemp composites, phosphoric acid, unslaked lime
Procedia PDF Downloads 435
611 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis
Authors: Catalina Sau Man Ng
Abstract:
Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone or negatively affecting someone's work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using the latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to demonstrate higher construct and predictive validity. In the present study, a representative sample of 2814 employees (Male: 54.7%, Female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study will also be discussed in this working paper.
Keywords: latent class analysis, prevalence, survey, workplace bullying
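The latent class idea underlying the abstract above can be illustrated with a toy posterior computation. The class prevalences and item-response probabilities below are invented for illustration, not estimates from the CWBS data; the sketch only shows how a respondent's pattern of yes/no answers is assigned probabilistically to classes via Bayes' rule under the conditional-independence assumption of LCA.

```python
def class_posteriors(pattern, prevalence, item_probs):
    """Posterior class-membership probabilities for one response pattern.
    pattern    : tuple of 0/1 answers to binary bullying items
    prevalence : P(class k) for each latent class
    item_probs : item_probs[k][j] = P(item j endorsed | class k),
                 items assumed conditionally independent given class
    """
    joint = []
    for pk, probs in zip(prevalence, item_probs):
        lik = pk
        for x, p in zip(pattern, probs):
            lik *= p if x == 1 else (1.0 - p)
        joint.append(lik)
    total = sum(joint)
    return [j / total for j in joint]

# Hypothetical 2-class example: 'non-victims' vs 'victims'
prev = [0.8, 0.2]
probs = [[0.05, 0.10, 0.05],   # non-victims rarely endorse items
         [0.70, 0.80, 0.60]]   # victims usually do
post = class_posteriors((1, 1, 0), prev, probs)
```

A respondent endorsing two of three items is assigned overwhelmingly to the 'victims' class despite its lower prevalence, which is the mechanism that lets LCA recover graded classes such as 'seldom bullied'.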
Procedia PDF Downloads 330
610 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion
Authors: Rui Liu, Klaus Greve
Abstract:
Navigation information is a kind of dynamic geographic information, and a navigation information system is a kind of special geographic information system. At present, there is much research on the centralized management and cross-integration application of basic geographic information. However, the idea of information integration and sharing has not been deeply applied to research on navigation information services. The imperfection of navigation target coordination and of the navigation information sharing mechanism under certain navigation tasks has greatly affected the reliability and scientific soundness of navigation services such as path planning. Considering this, the project intends to study multi-source information fusion and a multi-objective united navigation information interaction mechanism: first, investigate the actual needs of navigation users in different areas, and establish a preliminary navigation information classification and importance-level model; then, analyze the characteristics of remote sensing and GIS vector data, and design a fusion algorithm aimed at improving positioning accuracy and extracting navigation environment data. Finally, the project intends to analyze the features of the navigation information of land, sea and air navigation targets, design a united navigation data standard and a navigation information sharing model under certain navigation tasks, and establish a test navigation system for united navigation simulation experiments. The aim of this study is to explore the theory of united navigation services and optimize the navigation information service model, which will lay the theoretical and technological foundation for the united navigation of land, sea and air targets.
Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization
Procedia PDF Downloads 288
609 Determining G-γ Degradation Curve in Cohesive Soils by Dilatometer and in situ Seismic Tests
Authors: Ivandic Kreso, Spiranec Miljenko, Kavur Boris, Strelec Stjepan
Abstract:
This article discusses the possibility of using dilatometer tests (DMT) together with in situ seismic tests (MASW) in order to obtain the shape of the G-γ degradation curve in cohesive soils (clay, silty clay, silt, clayey silt and sandy silt). The MASW test provides the small-strain soil stiffness (G0 from vs) at very small strains, and the DMT provides the stiffness of the soil at 'work strains' (MDMT). At different test locations, the dilatometer shear stiffness of the soil has been determined by the theory of elasticity. The dilatometer shear stiffness has been compared with the theoretical G-γ degradation curve in order to determine the typical range of shear deformation for different types of cohesive soil. The analysis also includes factors that influence the shape of the degradation curve (G-γ) and the dilatometer modulus (MDMT), such as the overconsolidation ratio (OCR), the plasticity index (IP) and the vertical effective stress in the soil (σvo'). The parametric study in this article defines the range of shear strain γDMT and the GDMT/G0 relation depending on the classification of the cohesive soil (clay, silty clay, clayey silt, silt and sandy silt), its density (loose, medium dense and dense) and its stiffness (soft, medium hard and hard). The article illustrates the potential of using MASW and DMT to obtain the G-γ degradation curve in cohesive soils.
Keywords: dilatometer testing, MASW testing, shear wave, soil stiffness, stiffness reduction, shear strain
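The core step in the approach above, locating the working-strain DMT stiffness on a theoretical modulus reduction curve, can be sketched with a hyperbolic degradation law. The hyperbolic form G/G0 = 1/(1 + γ/γref) and the reference-strain value below are assumptions for illustration, not the particular curves used in the paper.

```python
def gamma_from_ratio(g_ratio, gamma_ref):
    """Invert the hyperbolic degradation curve
        G/G0 = 1 / (1 + gamma / gamma_ref)
    to find the shear strain at which a measured stiffness ratio occurs."""
    return gamma_ref * (1.0 / g_ratio - 1.0)

# Hypothetical example: MASW gives G0, DMT gives G_DMT = 0.25 * G0;
# with an assumed reference strain of 0.1 % the working strain is:
gamma_dmt = gamma_from_ratio(0.25, 0.1)   # strain in percent
```

With these assumed numbers the DMT modulus would correspond to a shear strain of about 0.3%, which is the kind of γDMT range the parametric study seeks to pin down per soil type.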
Procedia PDF Downloads 318
608 Applied of LAWA Classification for Assessment of the Water by Nutrients Elements: Case Oran Sebkha Basin
Authors: Boualla Nabila
Abstract:
The increasing demand for water, whether for drinking water supply or for agricultural and industrial use, requires a very thorough hydrochemical study to better protect and manage this resource. Oran is a city with relatively poor water quality, and its growing population may put stress on natural waters by further impairing that quality. A water sampling campaign covering 55 points capturing different levels of the aquifer system was carried out for chemical analysis of nutrient elements. The results allowed us to approach the problem of contamination: the largely uniform nationwide LAWA approach (Länderarbeitsgruppe Wasser), based on the EU CIS guidance, was applied for the identification of pressures and impacts, allowing easy comparison. Groundwater samples were also analyzed for physico-chemical parameters such as pH, sodium, potassium, calcium, magnesium, chloride, sulphate, carbonate and bicarbonate. The analytical results obtained in this hydrochemical study were interpreted using the Durov diagram. Based on these representations, the anomaly of high groundwater salinity observed in the Oran Sebkha basin was explained by the high chloride concentration and by the presence of an inverse cation exchange reaction. The Durov diagram plot revealed that the groundwater has evolved from Ca-HCO3 recharge water through mixing with pre-existing groundwater to give mixed water of Mg-SO4 and Mg-Cl types that eventually reached a final stage of evolution represented by a Na-Cl water type.
Keywords: contamination, water quality, nutrient elements, LAWA approach, Durov diagram
Procedia PDF Downloads 277
607 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon
Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn
Abstract:
The dynamic changes in Land Use and Land Cover (LULC) in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to establish exactly how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes by Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were used to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated area decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (by an average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
Keywords: land use and land cover change, change detection, image processing, support vector machines
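The change detection statistics step described above, cross-tabulating two classified images pixel by pixel, can be sketched as follows. The class names and the tiny six-pixel maps are invented; a real run would use the classified 1996 and 2015 rasters.

```python
from collections import Counter

def change_matrix(map_a, map_b):
    """Tabulate from->to class transitions between two classified images,
    each given as an equal-length flat list of class labels per pixel."""
    assert len(map_a) == len(map_b)
    return Counter(zip(map_a, map_b))

# Hypothetical 6-pixel maps: vegetation (V), cultivated (C), built-up (B)
m1996 = ["V", "V", "C", "C", "B", "B"]
m2015 = ["B", "V", "B", "C", "B", "B"]
changes = change_matrix(m1996, m2015)
unchanged = sum(n for (a, b), n in changes.items() if a == b)
```

The off-diagonal entries of the resulting table (e.g. vegetation to built-up) are exactly the conversions the abstract quantifies, and the diagonal entries feed the error-matrix validation.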
Procedia PDF Downloads 140
606 The Efficiency of AFLP and ISSR Markers in Genetic Diversity Estimation and Gene Pool Classification of Iranian Landrace Bread Wheat (Triticum Aestivum L.) Germplasm
Authors: Reza Talebi
Abstract:
Wheat (Triticum aestivum) is one of the most important food staples in Iran. Understanding genetic variability among landrace wheat germplasm is important for breeding. Landraces endemic to Iran are a genetic resource that is distinct from other wheat germplasm. In this study, 60 Iranian landrace wheat accessions were characterized with AFLP and ISSR markers. Twelve AFLP primer pairs detected 128 polymorphic bands among the sixty genotypes. The mean polymorphism rate based on AFLP data was 31%; however, a wide polymorphism range among primer pairs was observed (22-40%). The polymorphic information content (PIC value), calculated to assess the informativeness of each marker, ranged from 0.28 to 0.4, with a mean of 0.37. According to the AFLP molecular data, cluster analysis grouped the genotypes into five distinct clusters. ISSR markers generated 68 bands (an average of 6 bands per primer), of which 31 were polymorphic (45%) across the 60 wheat genotypes. The polymorphism information content (PIC) value for the ISSR markers was calculated in the range of 0.14 to 0.48, with an average of 0.33. Based on the data obtained by ISSR-PCR, cluster analysis grouped the genotypes into three distinct clusters. Both AFLP and ISSR markers showed a high level of genetic diversity in Iranian landrace wheat accessions, which have maintained a relatively constant level of genetic diversity over the years.
Keywords: wheat, genetic diversity, AFLP, ISSR
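The PIC values quoted above are computed from band (allele) frequencies. The abstract does not state which PIC variant was used; the biallelic dominant-marker form below, whose maximum is 0.5, is consistent with the reported 0.14-0.48 ISSR range, and the general multi-allelic form is included for comparison. The frequencies in the example are made up.

```python
def pic_biallelic(p):
    """PIC for a dominant (band present/absent) marker with band
    frequency p: 1 - p^2 - (1-p)^2 = 2p(1-p), maximal (0.5) at p = 0.5."""
    return 2.0 * p * (1.0 - p)

def pic_multiallelic(freqs):
    """General heterozygosity-style PIC = 1 - sum(p_i^2) over allele
    frequencies (here without the second-order correction term some
    authors add for codominant markers)."""
    return 1.0 - sum(f * f for f in freqs)

# Hypothetical band frequencies observed across the 60 accessions
pic_values = [pic_biallelic(p) for p in (0.08, 0.30, 0.50)]
```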
Procedia PDF Downloads 452
605 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words often incorrectly transcribed were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by a change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor (a form of natural language processing in which significant words in a document are selected) was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours' worth of data to 20. The data was further processed through light, manual observation: any summaries which proved to fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours' worth of processing.
The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 needed to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
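The splice-and-filter stage of the pipeline above can be sketched in a few lines. The 'Name:' speaker-label convention and the capitalized-word heuristic are simplifying assumptions; the 300-character cutoff comes from the description, and the real pipeline used a proper keyword extractor and a B.A.R.T. summarizer rather than this crude stand-in.

```python
import re

def splice_paragraphs(transcript, min_chars=300):
    """Split a transcript into speaker turns and drop short banter.
    Assumes each turn starts with a 'Name:' label at the start of a line."""
    turns = re.split(r"\n(?=[A-Z][\w .]*:)", transcript)
    return [t.strip() for t in turns if len(t.strip()) >= min_chars]

def proper_nouns(paragraph):
    """Crude stand-in for the keyword extractor: capitalized words that
    are not sentence-initial (so mid-sentence names are kept)."""
    nouns = set()
    for sent in re.split(r"(?<=[.!?])\s+", paragraph):
        for w in sent.split()[1:]:          # skip the sentence-initial word
            w = w.strip(",.;:()\"'")
            if len(w) > 1 and w[:1].isupper() and w[1:].islower():
                nouns.add(w)
    return nouns

# Hypothetical transcript: a short host turn (dropped) and a long guest turn
transcript = ("Host: Welcome to the show.\nGuest: "
              + "The work began at Xerox and moved into computing. " * 8)
paras = splice_paragraphs(transcript)
```

Each surviving paragraph's proper nouns would then gate which paragraphs get summarized, which is the marking-without-altering behavior the methodology argues for.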
Procedia PDF Downloads 31
604 Isotopes Used in Comparing Indigenous and International Walnut (Juglans regia L.) Varieties
Authors: Raluca Popescu, Diana Costinel, Elisabeta-Irina Geana, Oana-Romina Botoran, Roxana-Elena Ionete, Yazan Falah Jadee 'Alabedallat, Mihai Botu
Abstract:
Walnut production is high in Romania, with different varieties being cultivated depending on yield, disease resistance or quality of produce. Walnuts have a highly nutritious composition, the kernels containing essential fatty acids (with an unsaturated fraction higher than in other types of nuts), quinones, tannins and minerals. Walnut consumption can lower cholesterol, improve arterial function and reduce inflammation. The purpose of this study is to determine and compare the composition of walnuts of indigenous and international varieties, all grown in Romania, in order to identify high-quality indigenous varieties. Oil was extracted from the nuts of 34 varieties, and the fatty acid composition and IV (iodine value) were then measured by NMR. Furthermore, δ13C of the extracted oil was measured by IRMS to find specific isotopic fingerprints that can be used in authenticating the varieties. Chemometrics was applied to the data in order to identify similarities and differences between the varieties. The total saturated fatty acid (SFA) content varied between n.d. and 23% molar, oleic acid between 17 and 35%, linoleic acid between 38 and 59%, and linolenic acid between 8 and 14%, corresponding to iodine values (IV, the total amount of unsaturation) ranging from 100 to 135. The varieties separated into four groups according to fatty acid composition, each group containing an international variety, making classification of the indigenous ones possible. International varieties were found at both ends of the unsaturation spectrum.
Keywords: δ13C-IRMS, fatty acids composition, 1H-NMR, walnut varieties
Procedia PDF Downloads 315
603 Using Wearable Device with Neuron Network to Classify Severity of Sleep Disorder
Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu
Abstract:
Background: Sleep breathing disorder (SDB) is a condition characterized by recurrent episodes of airway obstruction, leading to intermittent hypoxia and sleep fragmentation during sleep time. However, the procedures for SDB severity examination remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability by electrocardiogram and the respiratory pattern by impedance. After polysomnography (PSG) was done, with SDB diagnosed by the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device record. Subjects were divided into training and testing groups. A neural network was used to establish a prediction model to classify the severity of SDB by the AI, episodes, and body profiles. The performance was evaluated by classification in the testing group compared with PSG. Results: In this study, we enrolled 66 subjects (Male/Female: 37/29; Age: 49.9±13.2) with a diagnosis of SDB in a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group by the neural network is 71.94%. Conclusion: Based on the models, we established a prediction model for SDB by means of the wearable sensor. With more cases coming in for training, this system may be used to rapidly and automatically screen the risk of SDB in the future.
Keywords: sleep breathing disorder, apnea and hypopnea index, body parameters, neural network
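The reported test-group accuracy is the standard figure read off a multi-class confusion matrix: correct classifications on the diagonal divided by all test cases. The sketch below uses an invented mild/moderate/severe matrix (the paper does not publish its matrix), sized to the study's 66 subjects purely for illustration.

```python
def accuracy(confusion):
    """Overall accuracy of a confusion matrix given as a list of rows,
    confusion[true_class][predicted_class] = count."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical mild / moderate / severe SDB test-group matrix
cm = [[20, 4, 1],
      [5, 15, 3],
      [2, 3, 13]]
acc = accuracy(cm)   # 48 correct out of 66 cases
```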
Procedia PDF Downloads 150
602 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
The development of unconventional ship construction and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques for ship structures using the FE method under complex boundary conditions which are difficult to analyze by existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static loads, dynamic loads and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has produced better techniques for the calculation of natural frequencies and different mode shapes of ship structures, helping to avoid resonance both globally and locally. There has been much progress towards the ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concepts of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis
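The static FE workflow sketched in the abstract (discretize, assemble element stiffness, apply boundary conditions, solve) can be illustrated with the simplest possible case, an axially loaded 1D bar. This is far from a real ship hull model, and all numbers are illustrative, but the assemble-and-solve skeleton is the same.

```python
def solve_bar(n_elems, length, EA, tip_load):
    """Axial bar fixed at x = 0 with a point load at the free end.
    Assembles the global stiffness matrix from 2-node bar elements
    and solves K u = f by Gaussian elimination; returns nodal
    displacements (exact tip answer: P * L / EA)."""
    n = n_elems + 1
    k = EA / (length / n_elems)           # element stiffness EA / h
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elems):              # assemble [k, -k; -k, k] blocks
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = tip_load
    # fixed support at node 0: eliminate its row and column
    A = [row[1:] for row in K[1:]]
    b = f[1:]
    m = len(b)
    for i in range(m):                    # forward elimination (no pivoting
        for j in range(i + 1, m):         # needed: K is SPD tridiagonal)
            r = A[j][i] / A[i][i]
            for c in range(i, m):
                A[j][c] -= r * A[i][c]
            b[j] -= r * b[i]
    u = [0.0] * m
    for i in range(m - 1, -1, -1):        # back substitution
        u[i] = (b[i] - sum(A[i][c] * u[c] for c in range(i + 1, m))) / A[i][i]
    return [0.0] + u

u = solve_bar(4, 2.0, 1000.0, 10.0)
```

A real ship analysis replaces the 2x2 bar stiffness with shell and beam element matrices and adds the load cases listed above, but the solve step is structurally identical.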
Procedia PDF Downloads 137
601 Satellite Derived Snow Cover Status and Trends in the Indus Basin Reservoir
Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar
Abstract:
Snow constitutes an important component of the cryosphere, characterized by high temporal and spatial variability. Because of the contribution of snowmelt to water availability, snow is an important focus for research on climate change and adaptation. MODIS satellite data have been used to identify spatial-temporal trends in snow cover in the upper Indus basin. For this research, MODIS 8-day composite data of medium resolution (250 m) were analysed from 2001 to 2005. Pixel-based supervised classification was performed and the extent of snow was calculated for all images. Results show large variation in snow cover between years, while an increasing trend from west to east is observed. Temperature data for the Upper Indus Basin (UIB) were analysed for seasonal and annual trends over the period 2001-2005 and compared with the results acquired in this research. From the analysis it is concluded that there are indications that regional warming is one of the factors affecting the hydrology of the upper Indus basin through accelerated glacial melting during the study period, and that stream flow in the upper Indus basin can be predicted with a high degree of accuracy. This conclusion is also supported by ICIMOD research, in which it was observed that the average annual precipitation over a five-year period is less than the observed stream flow, and by positive temperature trends in all seasons.
Keywords: Indus basin, MODIS, remote sensing, snow cover
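The trend part of the analysis above reduces to fitting a least-squares slope to yearly snow-cover extents. The sketch below fits a linear trend to a short series; the extent values are invented, not MODIS-derived numbers from the study.

```python
def linear_trend(years, values):
    """Ordinary least-squares slope and intercept of values vs. years."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    sxx = sum((x - mx) ** 2 for x in years)
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, values))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical annual snow-cover extent (1000 km^2) for 2001-2005
years = [2001, 2002, 2003, 2004, 2005]
extent = [120.0, 132.0, 118.0, 125.0, 122.0]
slope, intercept = linear_trend(years, extent)
```

The sign of the slope over per-pixel or per-region series is what distinguishes the increasing west-to-east gradient from the large year-to-year variation.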
Procedia PDF Downloads 388
600 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories. One is the maximum likelihood hypothesis testing method based on decision theory; the other is the statistical pattern recognition method based on feature extraction. The most commonly used today is the statistical pattern recognition method, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on improved Holder cloud features, and the extreme learning machine (ELM), chosen with the real-time demands of modern warfare in mind, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. The algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature performs poorly at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach in this paper still gives a good classification result at low SNR; even at an SNR of -15 dB, the recognition accuracy still reaches 76%.
Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model
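The baseline Holder coefficient feature that the abstract improves upon can be sketched as follows. The definition below, a normalized inner product derived from Holder's inequality, is one common form in the modulation-recognition literature and is an assumption here; the paper's improved cloud-model variant builds further statistics on top of such coefficients.

```python
def holder_coefficient(f, g, p=2.0):
    """Holder coefficient of two non-negative sequences:
        H = sum(f*g) / ((sum f^p)^(1/p) * (sum g^q)^(1/q)),
    with the conjugate exponent q satisfying 1/p + 1/q = 1.
    By Holder's inequality, H lies in [0, 1]."""
    q = p / (p - 1.0)
    num = sum(a * b for a, b in zip(f, g))
    den = (sum(a ** p for a in f) ** (1.0 / p)
           * sum(b ** q for b in g) ** (1.0 / q))
    return num / den

# For p = 2 this reduces to cosine similarity: identical sequences give 1.0,
# non-overlapping ones give 0.0. In practice f would be a signal's spectrum
# or envelope and g a reference (e.g. rectangular or triangular) sequence.
h_same = holder_coefficient([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
```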
Procedia PDF Downloads 157
599 Structure Domains Tuning Magnetic Anisotropy and Motivating Novel Electric Behaviors in LaCoO₃ Films
Authors: Dechao Meng, Yongqi Dong, Qiyuan Feng, Zhangzhang Cui, Xiang Hu, Haoliang Huang, Genhao Liang, Huanhua Wang, Hua Zhou, Hawoong Hong, Jinghua Guo, Qingyou Lu, Xiaofang Zhai, Yalin Lu
Abstract:
Great efforts have been taken to reveal the intrinsic origins of emerging ferromagnetism (FM) in strained LaCoO₃ (LCO) films. However, some macro magnetic performances of LCO are still not well understood and even controversial, such as magnetic anisotropy. Determining and understanding magnetic anisotropy might help to find the true causes of FM in turn. Perpendicular magnetic anisotropy (PMA) was the first time to be directly observed in high-quality LCO films with different thickness. The in-plane (IP) and out of plane (OOP) remnant magnetic moment ratio of 30 unit cell (u.c.) films is as large as 20. The easy axis lays in the OOP direction with an IP/OOP coercive field ratio of 10. What's more, the PMA could be simply tuned by changing the thickness. With the thickness increases, the IP/OOP magnetic moment ratio remarkably decrease with magnetic easy axis changing from OOP to IP. Such a huge and tunable PMA performance exhibit strong potentials in fundamental researches or applications. What causes PMA is the first concern. More OOP orbitals occupation may be one of the micro reasons of PMA. A cluster-like magnetic domain pattern was found in 30 u.c. with no obvious color contrasts, similar to that of LaAlO₃/SrTiO₃ films. And the nanosize domains could not be totally switched even at a large OOP magnetic field of 23 T. It indicates strong IP characters or none OOP magnetism of some clusters. The IP magnetic domains might influence the magnetic performance and help to form PMA. Meanwhile some possible nonmagnetic clusters might be the reason why the measured moments of LCO films are smaller than the calculated values 2 μB/Co, one of the biggest confusions in LCO films.What tunes PMA seems much more interesting. Totally different magnetic domain patterns were found in 180 u.c. films with cluster magnetic domains surrounded by < 110 > cross-hatch lines. These lines were regarded as structure domain walls (DWs) determined by 3D reciprocal space mapping (RSM). 
Two groups of in-plane features with fourfold symmetry were observed near the film diffraction peaks in the (002) 3D-RSM. One group, along the <110> directions and with larger intensity, matches well the lines on the film surface. The other, much weaker, lies along the <100> directions and originates from the normal lattice tilting of films deposited on cubic substrates. The <110> domain features obtained from the (103) and (113) 3D-RSMs exhibit a similar evolution of DW percentage and magnetic behavior. Structural domains and domain walls are believed to tune the PMA by transforming more IP magnetic moments to OOP. Last but not least, thick films with many structural domains exhibit different electrical transport behavior: a metal-to-insulator transition (MIT) and an angle-dependent negative magnetoresistance were observed near 150 K, higher than the FM transition temperature but similar to the temperature associated with the spin-orbit-coupling-related 1/4-order diffraction peaks. Keywords: structure domain, magnetic anisotropy, magnetic domain, domain wall, 3D-RSM, strain
Procedia PDF Downloads 153598 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is the most common cancer worldwide in men and women, and is one of the few cancers still on the rise. Liver disease is the fourth leading cause of death. According to new NHS (National Health Service) figures, deaths from liver diseases have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Ultrasound liver lesion images naturally contain considerable speckle noise, so developing a classifier for them is a challenging task. We propose a fully automatic machine learning system for this classifier. First, we segment the liver image by calculating textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually. Performance analysis on training and test datasets is carried out separately using the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the lesion is classified as normal or diseased. We hope the result will help physicians identify liver cancer non-invasively. Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
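The co-occurrence-matrix step described above can be sketched in a few lines. This is a minimal grey-level co-occurrence matrix (GLCM) illustration in plain NumPy, not the authors' implementation; the tiny 4-level test patch and the particular feature set are purely hypothetical:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix for one pixel displacement (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to joint probabilities

def texture_features(m):
    """Haralick-style texture features commonly fed to an SVM classifier."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float((m * (i - j) ** 2).sum()),
        "homogeneity": float((m / (1.0 + (i - j) ** 2)).sum()),
        "energy": float((m ** 2).sum()),
    }

# Hypothetical image patch quantised to 4 grey levels
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
feats = texture_features(glcm(patch))
```

A feature vector like this, computed per segmented region, would then serve as the per-lesion input to the SVM.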
Procedia PDF Downloads 413597 Heat Waves and Hospital Admissions for Mental Disorders in Hanoi Vietnam
Authors: Phan Minh Trang, Joacim Rocklöv, Kim Bao Giang, Gunnar Kullgren, Maria Nilsson
Abstract:
There are recent studies from high-income countries reporting an association between heat waves and hospital admissions for mental disorders. Whether such relations exist in sub-tropical and tropical low- and middle-income countries has not previously been studied. In this study from Vietnam, the assumption was that hospital admissions for mental disorders may be triggered, or exacerbated, by heat exposure and heat waves. A database from Hanoi Mental Hospital, with mental disorders diagnosed according to the International Classification of Diseases 10 and spanning five years, was used to estimate heatwave-related impacts on admissions for mental disorders. The relationship was analysed with a negative binomial regression model accounting for year, month, and day of week. The focus of the study was heat-wave events, defined as periods of three or seven consecutive days above a threshold of 35°C daily maximum temperature. The preliminary results indicated that heat waves of three and seven days increased the risk of hospital admission for mental disorders (F00-79), with relative risks (RRs) of 1.16 (1.01–1.33) and 1.42 (1.02–1.99) respectively, compared with non-heat-wave periods. Heatwave-related admissions for mental disorders increased statistically significantly among men, among residents of rural communities, and among the elderly. Moreover, risks for organic mental disorders, including symptomatic illnesses (F0-9) and mental retardation (F70-79), were high during heat waves. The findings are novel for a sub-tropical middle-income city facing rapid urbanisation and epidemiological and demographic transitions. Keywords: mental disorders, admissions for F0-9 or F70-79, maximum temperature, heat waves
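For intuition, the relative risks quoted above compare admission rates on heat-wave versus non-heat-wave days. A crude version of that rate ratio (not the study's negative binomial model, which also adjusts for year, month, and day of week) with an approximate Wald confidence interval can be sketched as follows, using made-up counts:

```python
import math

def rate_ratio(cases_exp, days_exp, cases_unexp, days_unexp, z=1.96):
    """Admissions-per-day rate ratio with a 95% Wald CI on the log scale."""
    rr = (cases_exp / days_exp) / (cases_unexp / days_unexp)
    se = math.sqrt(1 / cases_exp + 1 / cases_unexp)  # SE of log(RR) for Poisson counts
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: 58 admissions over 50 heat-wave days
# versus 1000 admissions over 1000 non-heat-wave days
rr, lo, hi = rate_ratio(58, 50, 1000, 1000)
```

An RR above 1 with a confidence interval excluding 1, as reported in the abstract, indicates a statistically significant excess of admissions during heat waves.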
Procedia PDF Downloads 245596 Forest Risk and Vulnerability Assessment: A Case Study from East Bokaro Coal Mining Area in India
Authors: Sujata Upgupta, Prasoon Kumar Singh
Abstract:
The expansion of large-scale coal mining into forest areas is a potential hazard for local biodiversity and wildlife. The objective of this study is to provide a picture of the threat that coal mining poses to the forests of the East Bokaro landscape. The vulnerable forest areas at risk have been assessed, and priority areas for conservation are presented. The forested areas at risk in the current scenario have been assessed and compared with past conditions using a classification and buffer-based overlay approach. Forest vulnerability has been assessed using an analytical framework based on systematic indicators and composite vulnerability index values. The results indicate that more than 4 km² of forest has been lost from 1973 to 2016. Large patches of forest have been diverted for coal mining projects. Forests in the northern part of the coalfield, within a 1-3 km radius around the coal mines, are at immediate risk. The originally contiguous forests have been converted into fragmented and degraded forest patches. Most of the collieries are located within or very close to the forests, threatening the biodiversity and hydrology of the surrounding regions. Based on the estimated vulnerability values, it was concluded that more than 90% of the forested grids in East Bokaro are highly vulnerable to mining. The forests in the sub-districts of Bermo and Chandrapura have been identified as the most vulnerable to coal mining activities. This case study would add to the capacity of forest managers and mine managers to address the risk and vulnerability of forests at a small landscape level in order to achieve sustainable development. Keywords: forest, coal mining, indicators, vulnerability
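As a rough sketch of how a composite vulnerability index of this kind can be computed per grid cell (the indicator names, weights, and grid values below are hypothetical, not the study's data):

```python
def composite_index(grids, weights):
    """Min-max normalise each indicator across grids, then take a weighted sum.

    grids:   {grid_id: {indicator: raw value}}
    weights: {indicator: weight}
    All indicators are assumed to increase vulnerability; higher index = more vulnerable.
    """
    indicators = list(weights)
    lo = {i: min(g[i] for g in grids.values()) for i in indicators}
    hi = {i: max(g[i] for g in grids.values()) for i in indicators}
    return {
        gid: sum(weights[i] * (g[i] - lo[i]) / (hi[i] - lo[i]) for i in indicators)
        for gid, g in grids.items()
    }

# Hypothetical grid cells: proximity to mines (0-1) and forest loss (%)
grids = {
    "north-1": {"mine_proximity": 0.9, "forest_loss": 40.0},
    "north-2": {"mine_proximity": 0.7, "forest_loss": 25.0},
    "south-1": {"mine_proximity": 0.1, "forest_loss": 5.0},
}
index = composite_index(grids, {"mine_proximity": 0.5, "forest_loss": 0.5})
```

Ranking grids by such an index is one way the "more than 90% highly vulnerable" type of statement can be derived, once thresholds for the vulnerability classes are chosen.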
Procedia PDF Downloads 390595 Philippine Film Industry and Cultural Policy: A Critical Analysis and Case Study
Authors: Michael Kho Lim
Abstract:
This paper examines the status of the film industry as an industry in the Philippines: where and how it is classified in the Philippine industrial classification system, how this positioning gives the film industry an identity (or not), and how it affects (film) policy development and impacts the larger national economy. It is important to look at how the national government officially recognises Philippine cinema, as this has direct and indirect impacts on the industry in terms of its representation, conduct of business, and international relations, and most especially implications for policy development and implementation. It is therefore imperative that the 'identity' of Philippine cinema be clearly established and defined in the overall industrial landscape. A clear understanding of Philippine cinema's industry status provides a better view of the bigger picture and helps us determine cinema's position in the national agenda in terms of priority setting and future direction, and how the state perceives, and thereby values, the film industry as an industry. This serves as the frame of reference anchoring the succeeding discussion. Once the Philippine film industry's status is identified, the paper clarifies how cultural policy is defined, understood, and applied in the Philippines in relation to Philippine cinema, by reviewing and analyzing existing policy documents and bills pending in the Philippine Congress and Senate. Lastly, the paper delves into the roles that (national) cultural institutions and industry organisations play as primary drivers or support mechanisms, and how they become platforms (or not) for the upliftment of the independent film sector and the sustainability of the film industry.
The paper concludes by arguing that the role of the government, and how government officials perceive and treat culture, is far more important than cultural policy itself, as these policies emanate from them. Keywords: cultural and creative industries, cultural policy, film industry, Philippine cinema
Procedia PDF Downloads 445594 Dairy Products on the Algerian Market: Proportion of Imitation and Degree of Processing
Authors: Bentayeb-Ait Lounis Saïda, Cheref Zahia, Cherifi Thizi, Ri Kahina Bahmed, Kahina Hallali Yasmine Abdellaoui, Kenza Adli
Abstract:
Algeria is the leading consumer of dairy products in North Africa; this is a fact. However, the nutritional quality of these products remains unknown. The aim of this study is to characterise the dairy products available on the Algerian market in order to assess whether they constitute a healthy and safe choice. To do this, we collected labelling data on 390 dairy products, including cheese, yoghurt, UHT milk and milk drinks, infant formula, and dairy creams. We assessed their degree of processing according to the NOVA classification, as well as the proportion of imitation products. The study was carried out between March 2020 and August 2023. The results show that 88% of the products are ultra-processed: 84% of 'cheese', 92% of dairy creams, 92% of 'yoghurt', 100% of infant formula, 92% of margarines, and 36% of UHT milk/dairy drinks. As for imitation/analogue dairy products, the study revealed the following proportions: 100% for infant formula, 78% for butter/margarine, 18% for UHT milk/milk-based drinks, 54% for cheese, 2% for camembert, and 75% for dairy cream. The harmful effects of consuming ultra-processed products on long-term health are increasingly documented in dozens of publications. The findings of this study sound the alarm about the health risks to which Algerian consumers are exposed. Various scientific, economic, and industrial bodies need to be involved in order to safeguard consumer health in both the short and long term, and food awareness and education campaigns should be organised. Keywords: dairy, UPF, NOVA, yoghurt, cheese
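Once each product label has been assigned a NOVA group (1-4, where group 4 is ultra-processed), the category shares reported above reduce to simple tallies. A minimal sketch, with entirely hypothetical records rather than the study's 390 products:

```python
# Hypothetical labelled records: (category, NOVA group 1-4); group 4 = ultra-processed
products = [
    ("cheese", 4), ("cheese", 4), ("cheese", 2),
    ("yoghurt", 4), ("yoghurt", 4),
    ("uht_milk", 1), ("uht_milk", 4),
]

def ultra_processed_share(products, category=None):
    """Percentage of NOVA group 4 products, overall or within one category."""
    groups = [g for c, g in products if category in (None, c)]
    return 100.0 * sum(g == 4 for g in groups) / len(groups)

overall = ultra_processed_share(products)            # across all categories
cheese = ultra_processed_share(products, "cheese")   # within one category
```

The imitation-product proportions would be computed the same way from a per-product imitation flag on the label data.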
Procedia PDF Downloads 39593 Proposal for a Monster Village in Namsan Mountain, Seoul: Significance from a Phenomenological Perspective
Authors: Hyuk-Jin Lee
Abstract:
Korea is a country with thousands of years of history, like its neighbors China and Japan. However, compared to China, which is famous for the ancient fantasy novel "Journey to the West", and Japan, which is famous for its monsters, Korea's "monster culture" is not actively used for tourism. The reason is that the culture closest to the present, from the 17th to the 20th century, was that of the Joseon Dynasty, when Neo-Confucianism, which suppressed monster culture, was at its strongest; this trend grew stronger after Neo-Confucianism became dogmatic in the mid-17th century. However, Korea, which has a history of Taoism going back thousands of years, clearly has a rich literature on monsters that could be used as a tourism resource. The problem is that these materials are buried in texts and are unfamiliar even to Koreans. This study examines the possibility of developing them into attractive tourism resources, based on the literary records, from texts of the 16th to early 17th centuries, of the so-called 'monsters densely located in Namsan Mountain', in the center of Seoul. In particular, we introduce the surprisingly consistent description, in a contemporary Korean newspaper, of the area north of Namsan Mountain in terms of 'feng shui geography', an oriental philosophy. Finally, building on a theoretical foundation provided by a phenomenological classification table of cultural heritage, we examine phenomenologically how important this 'visualization of imaginary or text-based entities' is to changes in a society's perception of specific cultural resources. We also analyze related cases in depth, including Japan's ninja culture. Keywords: monster culture, Namsan mountain, neo-confucianism, phenomenology, tourism
Procedia PDF Downloads 35592 Innovation and Employment in Sub-Saharan Africa: Evidence from Uganda Microdata
Authors: Milton Ayoki, Edward Bbaale
Abstract:
This paper analyses the relationship between innovation and employment at the firm level, with the objective of understanding the contribution of different innovation strategies to fostering employment growth in Uganda. We use the National Innovation Survey (micro-data on 705 Ugandan firms) for the period 2011-2014, follow closely the structured approach of Harrison et al. (2014), and relate employment growth to process innovations and to the growth of sales due, separately, to innovative and unchanged products. We find positive effects of product innovation on employment at the firm level, while process innovation has no discernible impact on employment. Although there is evidence of labour displacement in some cases where firms only introduce new processes, this effect is compensated by employment growth from new products, which most firms introduce simultaneously with new processes. The results suggest that the source of innovation, as well as the size of innovating firms or end users of innovation, matters for job growth. Innovation that develops within the firm itself (the user) and involves larger firms has a greater impact on employment than innovation developed outside the firm or within smaller firms. In addition, innovative firms are one and a half times more likely to survive in an innovation-driven economy than those that do not innovate. These results have important implications for policymakers and stakeholders in the innovation ecosystem. Supporting policies need to be correctly tailored, since the impacts depend on the innovation strategy (type) and on the characteristics and sector of the innovative firms (small, large, industry, etc.). Policies to spur investment, particularly in innovative sectors and firms with high growth potential, would have long-lasting effects on job creation. JEL Classification: D24, J0, J20, L20, O30. Keywords: employment, process innovation, product innovation, Sub-Saharan Africa
Procedia PDF Downloads 173591 Artificial Intelligence Based Abnormality Detection System and Real Valuᵀᴹ Product Design
Authors: Junbeom Lee, Jaehyuck Cho, Wookyeong Jeong, Jonghan Won, Jungmin Hwang, Youngseok Song, Taikyeong Jeong
Abstract:
This paper investigates and analyzes meta-learning technologies that use multiple cameras to monitor and detect abnormal behavior in people in real time in the healthcare field. Advances in artificial intelligence and computer vision have confirmed that cameras can be useful for individual health monitoring and abnormal behavior detection. This makes it possible to establish a system that responds early by automatically detecting abnormal behavior in vulnerable groups such as patients and the elderly. In this paper, we use a technique called meta-learning to analyze image data collected from cameras and develop a commercial product that determines abnormal behavior. Meta-learning applies machine learning algorithms to help systems learn and adapt quickly to new real-world data, which improves the accuracy and reliability of the abnormal behavior discrimination system. In addition, this study proposes a meta-learning-based abnormal behavior detection system comprising data collection and preprocessing, feature extraction and selection, and classification model development. Experiments over various healthcare scenarios analyze the performance of the proposed system and demonstrate its superiority over existing methods. Through this study, we show that camera-based meta-learning technology can be useful for monitoring and detecting abnormal behavior in the healthcare area. Keywords: artificial intelligence, abnormal behavior, early detection, health monitoring
Procedia PDF Downloads 89590 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating possible capacities of visual functions, where adapted mechanisms can enhance the capability of sports trainees, is a promising area of research, not only from the cognitive viewpoint but also in terms of its virtually unlimited applications in sports training. In this pilot study, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed. Two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals with algorithms such as the Gabor filter, the continuous wavelet transform, and multi-stage wavelet decomposition, that can be indicative of a subject being amateur or trained. The linear discriminant classifier achieves accuracy, sensitivity, and specificity of 100% when the average of the signal repetitions corresponding to the task is used. The main purpose of this study is to investigate the feasibility of fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed. Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
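As a toy illustration of the time-frequency feature idea, the sketch below computes a hand-rolled short-time Fourier band power (not the Gabor/wavelet pipeline the authors use), with a synthetic 10 Hz sinusoid standing in for an evoked-potential trace:

```python
import numpy as np

def stft_band_power(signal, fs, win=64, hop=32):
    """Average power spectrum over Hann-windowed frames (a simple STFT feature)."""
    frames = np.array([signal[i:i + win] * np.hanning(win)
                       for i in range(0, len(signal) - win + 1, hop)])
    power = (np.abs(np.fft.rfft(frames, axis=1)) ** 2).mean(axis=0)
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, power

# Synthetic 2-second trace dominated by a 10 Hz component
fs = 128
t = np.arange(0, 2, 1.0 / fs)
freqs, power = stft_band_power(np.sin(2 * np.pi * 10 * t), fs)
```

Feature vectors like `power` (per channel and per frequency band) are the kind of input a linear discriminant classifier would then separate into amateur versus trained classes.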
Procedia PDF Downloads 138589 From Colonial Outpost to Cultural India: Folk Epics of India
Authors: Jyoti Brahma
Abstract:
Folk epics of India are found in various Indian languages. The study of folk epics and its importance for folkloristic research in India came into prominence only during the nineteenth century, when British administrators and missionaries collected and documented folk epics from various parts of the country. The paper attempts to investigate how the colonial outpost came to penetrate the interior of Indian land and society and triggered the Indian Renaissance. It takes into account the compositions of the epics of India and the attention they received during the nineteenth century, which in turn gave rise to the national consciousness shaping the culture of India. Composed as oral traditions, these folk epics are now seen as repositories of historical consciousness, whereas in earlier times societies without literacy were said to be without history. There is therefore an urgent need to re-examine the British impact on Indian literary traditions. The Bhakti poets, through their nuanced responses in their efforts to change the behavior of Indian society, give us a perfect example of the deferral of any clear-cut distinction between the folk and the classical in the context of India: their work evades pure categorization and classification as classical and constitutes part of the folk traditions of India's cultural heritage. Therefore, the ethical question of what is ontologically known as ordinary discourse, in the case of "folk" forms, metaphors, and folk language, gains importance once more. The paper thus also seeks to outline the significant factors shaping the destiny of folklore in South India, particularly in the four political states of the Indian Union (Andhra Pradesh, Karnataka, Kerala and Tamil Nadu), what could be termed the South Indian "cultural zones". Keywords: colonial, folk, folklore, tradition
Procedia PDF Downloads 312588 Efficient Human Motion Detection Feature Set by Using Local Phase Quantization Method
Authors: Arwa Alzughaibi
Abstract:
Human motion detection is a challenging task due to a number of factors, including variable appearance and posture and a wide range of illumination conditions and backgrounds. The first need of such a model is therefore a reliable feature set that can discriminate between a human and a non-human form with a fair amount of confidence, even under difficult conditions. With richer representations, the classification task becomes easier and improved results can be achieved. The aim of this paper is to investigate reliable and accurate human motion detection models that can detect human motion under varying illumination levels and backgrounds. Different feature sets are tried and tested, including Histogram of Oriented Gradients (HOG), the Deformable Parts Model (DPM), Locally Decorrelated Channel Features (LDCF), and Aggregate Channel Features (ACF). We propose an efficient and reliable human motion detection approach that combines HOG and local phase quantization (LPQ) as the feature set, and implements a search-pruning algorithm based on optical flow to reduce the number of false positives. Experimental results show that combining the local phase quantization descriptor with the histogram of oriented gradients performs well over a larger range of illumination conditions and backgrounds than state-of-the-art human detectors. The area under the ROC curve (AUC) of the proposed method reached 0.781 for the UCF dataset and 0.826 for the CDW dataset, indicating that it performs comparably better than the HOG, DPM, LDCF, and ACF methods. Keywords: human motion detection, histograms of oriented gradients, local phase quantization
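The HOG part of the proposed feature set can be sketched for a single cell as follows; this is a minimal unsigned-gradient version in plain NumPy, not the full block-normalised descriptor used in the paper, and the edge test image is purely illustrative:

```python
import numpy as np

def hog_cell(cell, bins=9):
    """Magnitude-weighted orientation histogram (unsigned, 0-180 deg) for one cell."""
    gy, gx = np.gradient(cell.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(angle, bins=bins, range=(0, 180), weights=magnitude)
    return hist / (np.linalg.norm(hist) + 1e-9)  # L2-normalised

# A vertical edge: gradients point horizontally, so the 0-20 degree bin dominates
edge = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
hist = hog_cell(edge)
```

In the proposed detector, histograms like this would be concatenated across cells and combined with the LPQ descriptor before classification.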
Procedia PDF Downloads 258587 Internet of Things Networks: Denial of Service Detection in Constrained Application Protocol Using Machine Learning Algorithm
Authors: Adamu Abdullahi, On Francisca, Saidu Isah Rambo, G. N. Obunadike, D. T. Chinyio
Abstract:
The paper discusses the potential threat of Denial of Service (DoS) attacks on the Constrained Application Protocol (CoAP) in Internet of Things (IoT) networks. As billions of IoT devices are expected to be connected to the internet in the coming years, these devices are vulnerable to attacks that disrupt their functioning. This research tackles the issue by applying mixed qualitative and quantitative methods for feature selection and extraction, together with clustering algorithms, to detect DoS attacks on CoAP using a machine learning algorithm (MLA). The main objective of the research is to enhance the security scheme for CoAP in the IoT environment by analyzing the nature of DoS attacks and identifying a new set of features for detecting them in the IoT network environment. The aim is to demonstrate the effectiveness of the MLA in detecting DoS attacks and to compare it with conventional intrusion detection systems for securing CoAP in the IoT environment. Findings: the research identifies the appropriate node for detecting DoS attacks in the IoT network environment and demonstrates how to detect the attacks through the MLA. Detection accuracy in both the classification and network simulation environments shows that the k-means algorithm scored the highest percentage in both the training and testing phases of the evaluation; the network simulation platform achieved the highest overall accuracy, at 99.93%. This work also reviews conventional intrusion detection systems for securing CoAP in the IoT environment and discusses the DoS security issues associated with CoAP. Keywords: algorithm, CoAP, DoS, IoT, machine learning
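A stripped-down version of the k-means idea applied to DoS detection might look like this: two clusters with deterministic farthest-point initialisation, over entirely hypothetical per-node traffic features (packet rate and mean inter-arrival time). The study's actual feature set and simulation setup are not reproduced here:

```python
import numpy as np

def two_means(X, iters=20):
    """Two-cluster k-means with farthest-point initialisation (deterministic)."""
    centroids = np.stack([X[0], X[np.argmax(((X - X[0]) ** 2).sum(axis=1))]])
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids
        labels = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
        centroids = np.stack([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels

# Hypothetical (packets/s, mean inter-arrival s) per node: 4 normal nodes, 3 flooding
traffic = np.array([[10.0, 0.10], [12.0, 0.09], [11.0, 0.11], [9.0, 0.12],
                    [900.0, 0.001], [950.0, 0.002], [1000.0, 0.001]])
labels = two_means(traffic)
```

Nodes falling in the high-rate cluster would be flagged as DoS suspects; in practice the features would be scaled before clustering so that no single feature dominates the distance.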
Procedia PDF Downloads 81586 Analysis of the Unmanned Aerial Vehicles’ Incidents and Accidents: The Role of Human Factors
Authors: Jacob J. Shila, Xiaoyu O. Wu
Abstract:
As the applications of unmanned aerial vehicles (UAVs) continue to increase across the world, it is critical to understand the factors that contribute to incidents and accidents associated with these systems. Given the variety of daily applications that utilize UAV operations (e.g., medical and security operations, construction activities, landscape activities), the main discussion has been how to safely incorporate the UAV into the national airspace system. The types of UAV incidents being reported range from near-sightings by other pilots to actual collisions with aircraft or other UAVs. These incidents have the potential to impact the rest of aviation operations in a variety of ways, including human lives, liability costs, and delay costs. The most frequently cited cause of these incidents is the human factor; other cited causes include maintenance, the aircraft, and others. This work investigates the key human factors associated with UAV incidents. To that end, data on UAV incidents that have occurred in the United States, gathered from the Federal Aviation Administration (FAA) drone database, are reviewed and analyzed to identify key human factors related to UAV incidents. The study adopts the Human Factors Analysis and Classification System (HFACS) to identify key human factors that have contributed to some of the UAV failures to date. The uniqueness of this work is its incorporation of UAV incident data from a variety of applications, not just military data. In addition, identifying the specific human factors is crucial for developing safety operational models and human factor guidelines for UAVs. The common human factors found are also compared to similar studies in other countries to determine whether these factors are common internationally. Keywords: human factors, incidents and accidents, safety, UAS, UAV
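Once each incident has been coded against the HFACS framework, identifying the dominant human factors reduces to a tally. A trivial sketch, with entirely hypothetical records against the four standard HFACS levels (the FAA database fields and the paper's actual coding are not reproduced):

```python
from collections import Counter

# Hypothetical coded incidents: (description, assigned HFACS level)
incidents = [
    ("near-miss reported by airline pilot", "unsafe acts"),
    ("flight over crowd after link loss", "preconditions for unsafe acts"),
    ("operation inside restricted airspace", "unsafe acts"),
    ("pre-flight checklist skipped", "unsafe supervision"),
    ("no recurrent UAV training programme", "organizational influences"),
]

by_level = Counter(level for _, level in incidents)
top_level, top_count = by_level.most_common(1)[0]
```

Cross-country comparisons like the one described above would then compare these per-level frequencies between datasets.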
Procedia PDF Downloads 245585 Image Recognition Performance Benchmarking for Edge Computing Using Small Visual Processing Unit
Authors: Kasidis Chomrat, Nopasit Chakpitak, Anukul Tamprasirt, Annop Thananchana
Abstract:
Internet of Things (IoT) devices and edge computing have become some of the biggest developments in innovation, and among the most discussed for their potential to improve and disrupt traditional business and industry alike. New challenges such as the COVID-19 pandemic have posed dangers to the workforce and to business processes, and the drastically changed business landscape left in the pandemic's aftermath faces the looming threats of a global energy crisis, global warming, and increasingly heated global politics that risk becoming a new Cold War. Emerging technologies such as edge computing, and the use of specially designed visual processing units, present great opportunities for business. The literature reviewed covers how the Internet of Things and this disruptive wave will affect business: how all these events affect current business, and how businesses need to adapt to changes in the market and the world. As an example, benchmark tests of consumer-marketed devices, such as IoT devices equipped with new edge computing hardware, illustrate how efficiency can be increased, reducing the risk posed by current and looming crises. Throughout the paper, we explain the technologies leading today's developments and why they will be innovations that change traditional practice, through brief introductions to technologies such as cloud computing, edge computing, and the Internet of Things, and how they will lead into the future. Keywords: internet of things, edge computing, machine learning, pattern recognition, image classification
Procedia PDF Downloads 156