Search results for: digital image correlation technology
13532 A Deluge of Disaster, Destruction, Death and Deception: Negative News and Empathy Fatigue in the Digital Age
Authors: B. N. Emenyeonu
Abstract:
Initially identified as sensationalism in the eras of yellow journalism and tabloidization, the inclusion of news which shocks or provokes strong emotional responses among readers, viewers, and browsers has not only remained a persistent feature of journalism but has also seemingly escalated in the current climate of digital and social media. Whether in the relentless revelation of scandals in high places, profiles of people displaced by sporadic wars or natural disasters, gruesome accounts of trucks plowing into pedestrians in a city centre, or the coverage of mourners paying tributes to victims of a mass shooting, mainstream and digital media are often awash with tragedy, tears, and trauma. While it may aim at inspiring sympathy, outrage, or even remedial reactions, the deluge of grief and misery in the news appears merely to generate in the audience a feeling that borders on hearing or seeing too much to care or act. This feeling also appears to be accentuated by the dizzying diffusion of social media news and views, much of whose authenticity is not easily verifiable. Through a survey of 400 regular consumers of news and in-depth interviews with 10 news managers in selected media organizations across the Middle East, this study therefore investigates public attitudes to the profusion of bad news in mainstream and digital media. Among other objectives, it examines whether the profusion of bad news generates empathy fatigue among the audience and, if so, whether there is any association between biographic variables (profession, age, and gender) and an inclination to empathy fatigue. It also seeks to identify which categories of bad news and media are most likely to drag the audience into indifference.
In conclusion, the study discusses the implications of the findings for mass-mediated advocacies such as campaigns against conflicts, corruption, nuclear threats, terrorism, gun violence, sexual crimes, and human trafficking, among other threats to humanity.
Keywords: digital media, empathy fatigue, media campaigns, news selection
Procedia PDF Downloads 581
13531 HPTLC Metabolite Fingerprinting of Artocarpus champeden Stembark from Several Different Locations in Indonesia and Correlation with Antimalarial Activity
Authors: Imam Taufik, Hilkatul Ilmi, Puryani, Mochammad Yuwono, Aty Widyawaruyanti
Abstract:
Artocarpus champeden Spreng stembark (Moraceae), well known in Indonesia as ‘cempedak’, has traditionally been used as a malarial remedy. Differences in growth location can lead to differences in metabolite profile and, consequently, to differences in antimalarial activity even within the same species. The aim of this research was to obtain the profile of metabolites contained in A. champeden stembark from different locations in Indonesia for authentication and quality control purposes. The profiling was performed by the HPTLC-densitometry technique, and antimalarial activity was determined by the HRP2-ELISA technique. The correlation between metabolite fingerprinting and antimalarial activity was analyzed by Principal Component Analysis, Hierarchical Clustering Analysis and Partial Least Squares. As a result, a correlation was found between metabolite fingerprint and antimalarial activity across the different growth locations.
Keywords: antimalarial, artocarpus champeden spreng, metabolite fingerprinting, multivariate analysis
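As a hedged illustration of the hierarchical clustering step, the sketch below implements single-linkage agglomerative clustering over Euclidean distance in plain Python; the two-dimensional "fingerprint" vectors and the cluster count are hypothetical, not values from the study:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, n_clusters):
    # Start with every sample in its own cluster, then repeatedly merge
    # the two clusters whose closest members are nearest to each other,
    # until the requested number of clusters remains.
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclidean(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy densitometric fingerprints: two samples per "location".
fingerprints = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
groups = single_linkage(fingerprints, 2)
```

In practice the clustering would run on full HPTLC band-intensity profiles rather than 2D points, and production code would use an optimized library implementation.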
Procedia PDF Downloads 311
13530 Mean Shift-Based Preprocessing Methodology for Improved 3D Buildings Reconstruction
Authors: Nikolaos Vassilas, Theocharis Tsenoglou, Djamchid Ghazanfarpour
Abstract:
In this work we explore the capability of the mean shift algorithm as a powerful preprocessing tool for improving the quality of spatial data acquired by airborne scanners over densely built urban areas. On one hand, high resolution image data corrupted by noise from lossy compression techniques are appropriately smoothed while preserving the optical edges; on the other, low resolution LiDAR data in the form of a normalized Digital Surface Model (nDSM) are upsampled through the joint mean shift algorithm. Experiments on both the edge-preserving smoothing and upsampling capabilities using synthetic RGB-z data show that the mean shift algorithm is superior to bilateral filtering as well as to other classical smoothing and upsampling algorithms. Application of the proposed methodology to the 3D reconstruction of buildings in a pilot region of Athens, Greece results in a significant visual improvement of the 3D building block model.
Keywords: 3D buildings reconstruction, data fusion, data upsampling, mean shift
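A minimal one-dimensional sketch of the edge-preserving idea behind mean shift filtering: each sample moves toward the mean of neighbours that are close both spatially and in value, so noise is smoothed while large jumps (edges) survive. This is a flat-kernel 1D toy, not the full joint-domain image algorithm used in the paper; the radii and signal are illustrative:

```python
def mean_shift_smooth(values, spatial_radius, range_radius, n_iter=5):
    # Replace each sample by the mean of its neighbours that are close
    # both spatially (index distance) and in value (range distance).
    # Samples across a strong edge never mix, so the edge is preserved.
    smoothed = list(values)
    for _ in range(n_iter):
        updated = []
        for i, v in enumerate(smoothed):
            neighbours = [w for j, w in enumerate(smoothed)
                          if abs(i - j) <= spatial_radius
                          and abs(v - w) <= range_radius]
            updated.append(sum(neighbours) / len(neighbours))
        smoothed = updated
    return smoothed

# Noisy step signal: low plateau, sharp edge, high plateau.
signal = [0.0, 0.0, 0.1, 0.0, 10.0, 10.0, 9.9, 10.0]
out = mean_shift_smooth(signal, spatial_radius=2, range_radius=1.0)
```

For images, the same idea runs jointly over pixel coordinates and color/height values, typically with Gaussian kernels.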
Procedia PDF Downloads 315
13529 The Algerian Experience in Developing Higher Education in the Country in Light of Modern Technology: Challenges and Prospects
Authors: Mohammed Messaoudi
Abstract:
The higher education sector in Algeria has witnessed a remarkable transformation in recent years, integrating its institutions into the modern technological environment and harnessing every appropriate mechanism to raise the level of education and training. Observers and stakeholders agree that the Algerian university must enter this field, especially given the efforts to employ modern technology in the sector and to encourage investment in it, in addition to the state’s keenness to build a path that benefits from modern technology and to mobilize energies in the face of a reality that carries many aspirations and challenges, by achieving openness to the new digital environment and keeping pace with international universities. Higher education is one of the engines of development of societies, as it is a vital field for the transfer of knowledge and scientific expertise, and the university sits at the top of the comprehensive educational system across disciplines. A sound educational process rests on the integration of three basic axes (teaching, research, and relevant, efficient outputs), guided by a clear strategy that monitors the advancement of academic work and develops its future directions. The Algerian university is a service institution seeking the optimal mechanisms for keeping pace with the changes of the times: it has become necessary for the university to enter the technological space, thereby ensuring the quality of its education and achieving the required empowerment by building a structure that matches the challenges facing the sector, amid unremitting efforts to develop its capabilities.
The sector has sought to harness communication and information technology and to achieve transformation at the level of higher education, in what is called higher education technology. The conceptual framework of information and communication technology in Algerian higher education institutions is determined by organizational factors, factors of the higher education institutions, characteristics of the professor, characteristics of the students, and the outcomes of the educational process; there is a relentless pursuit of positive interaction among these axes, as they are the basic components on which the success of higher education and the achievement of its goals depend.
Keywords: information and communication technology, Algerian university, scientific and cognitive development, challenges
Procedia PDF Downloads 85
13528 Enhancement of Visual Comfort Using Parametric Double Skin Façade
Authors: Ahmed A. Khamis, Sherif A. Ibrahim, Mahmoud El Khatieb, Mohamed A. Barakat
Abstract:
Parametric design is a hallmark of modern architecture that facilitates complex design decisions by altering various design parameters, and double skin facades are one of its applications. This paper aims to enhance daylight parameters of a selected case study office building in Cairo using a parametric double skin facade. First, the design and optimization process was executed using Grasshopper, a parametric design plugin for Rhino. The daylighting performance of the base case building model was compared with that of the model using the double facade, showing an enhancement in daylighting performance indicators such as glare and task illuminance in the modified model. Execution drawings were then made for the optimized design in Revit, followed by computerized digital fabrication of the designed model at various scales, using Simplify3D for mock-up digital fabrication, to reach the final design decisions.
Keywords: parametric design, double skin facades, digital fabrication, grasshopper, simplify 3D
Procedia PDF Downloads 118
13527 Toward Subtle Change Detection and Quantification in Magnetic Resonance Neuroimaging
Authors: Mohammad Esmaeilpour
Abstract:
One of the important open problems in the field of medical image processing is the detection and quantification of small changes. In this poster, we investigate how algebraic decomposition techniques can be used to semiautomatically detect and quantify subtle changes in Magnetic Resonance (MR) neuroimaging volumes. We mostly focus on the low-rank components of the matrices obtained by decomposing MR image pairs acquired over a period of time. In addition, a skilled neuroradiologist helps the algorithm distinguish between noise and genuine small changes.
Keywords: magnetic resonance neuroimaging, subtle change detection and quantification, algebraic decomposition, basis functions
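A toy sketch of the low-rank idea: the dominant rank-1 component of a matrix is extracted by power iteration, and the residual after subtracting it is where departures from the common structure (i.e. changes) would show up. Real MR volumes call for robust decompositions such as RPCA; the matrix and function names here are our own illustration, not the poster's method:

```python
def rank1_approximation(M, n_iter=100):
    # Power iteration for the dominant singular triplet (sigma, u, v):
    # the best rank-1 approximation of M is sigma * u * v^T.
    rows, cols = len(M), len(M[0])
    v = [1.0] * cols
    for _ in range(n_iter):
        u = [sum(M[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        norm_u = sum(x * x for x in u) ** 0.5
        u = [x / norm_u for x in u]
        w = [sum(M[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        sigma = sum(x * x for x in w) ** 0.5  # |M^T u| = singular value
        v = [x / sigma for x in w]
    return [[sigma * u[i] * v[j] for j in range(cols)] for i in range(rows)]

# A purely rank-1 "image" matrix: the residual M - A should vanish;
# any entry with a large residual would flag a localized change.
M = [[1.0, 2.0], [2.0, 4.0]]
A = rank1_approximation(M)
```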
Procedia PDF Downloads 474
13526 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The task for the first dataset is protein regression; for the second, variety classification. Deep convolutional neural networks (CNNs) have the potential to exploit spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and these results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
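Of the chemometric preprocessing steps listed above, standard normal variate (SNV) is easy to sketch: each spectrum is centred on its own mean and scaled by its own standard deviation, which removes per-sample multiplicative scatter effects. The spectrum values below are illustrative:

```python
def snv(spectrum):
    # Standard normal variate: centre the spectrum on its own mean and
    # scale by its own (sample) standard deviation, so every spectrum
    # ends up with mean 0 and standard deviation 1.
    n = len(spectrum)
    mean = sum(spectrum) / n
    std = (sum((x - mean) ** 2 for x in spectrum) / (n - 1)) ** 0.5
    return [(x - mean) / std for x in spectrum]

# Toy reflectance values at three wavelengths.
corrected = snv([1.0, 2.0, 3.0])
```

Applied per pixel spectrum of a hyperspectral cube, this is typically followed by SG filtering or detrending before the model.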
Procedia PDF Downloads 99
13525 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on years of experience and is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts and beaches from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed suspended from a crane, so it would be time-consuming and costly for inexperienced workers to train on-site. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the total volume of the ideal final structure. Using this evaluation of porosity, the simulator can determine how well the user is able to install the blocks. A voxelization technique is used to calculate the porosity of the structure, simplifying the calculations. Other techniques, such as raycasting and box overlapping, are employed for accurate simulation.
In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization, and compare the user's block installation with the appropriate installation computed by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks
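The voxel-based porosity evaluation can be sketched with voxel sets: both the placed blocks and the ideal design envelope are voxelised, and the score follows the abstract's definition (block volume inside the structure over the ideal structure's volume). The coordinates and box sizes below are hypothetical, and simple boxes stand in for real block meshes:

```python
def voxelise_box(min_corner, max_corner):
    # Integer voxel coordinates filling an axis-aligned box (half-open ranges).
    (x0, y0, z0), (x1, y1, z1) = min_corner, max_corner
    return {(x, y, z)
            for x in range(x0, x1)
            for y in range(y0, y1)
            for z in range(z0, z1)}

def porosity(block_voxels, structure_voxels):
    # Following the abstract's definition: ratio of block volume that falls
    # inside the ideal structure to the ideal structure's total volume.
    return len(block_voxels & structure_voxels) / len(structure_voxels)

# Ideal structure: a 2x2x2 voxel envelope. The user placed one block layer
# inside it and one layer outside it (a misplacement), so only half counts.
structure = voxelise_box((0, 0, 0), (2, 2, 2))
blocks = voxelise_box((0, 0, 0), (2, 2, 1)) | voxelise_box((0, 0, 2), (2, 2, 3))
score = porosity(blocks, structure)
```

In the simulator itself, block meshes would be voxelised by sampling their geometry at the chosen voxel resolution.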
Procedia PDF Downloads 103
13524 Scar Removal Strategy for Fingerprint Using Diffusion
Authors: Mohammad A. U. Khan, Tariq M. Khan, Yinan Kong
Abstract:
Fingerprint image enhancement is one of the most important steps in an automatic fingerprint identification system (AFIS) and directly affects the overall efficiency of the AFIS. Conventional fingerprint enhancement methods, such as Gabor and anisotropic filters, fill the gaps in ridge lines but fail to handle scar lines. To deal with this problem, we propose a method for enhancing ridges and valleys containing scars so that true minutiae points can be extracted accurately. Our results show improved performance in terms of enhancement.
Keywords: fingerprint image enhancement, removing noise, coherence, enhanced diffusion
Procedia PDF Downloads 516
13523 Small Text Extraction from Documents and Chart Images
Authors: Rominkumar Busa, Shahira K. C., Lijiya A.
Abstract:
Text recognition is an important area of computer vision that deals with detecting and recognising text in an image. Optical Character Recognition (OCR) is a mature area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text in chart images, the recognition rate drops below 30%. This work aims to extract small text from images using a deep learning model, CRNN with CTC loss. The text recognition accuracy is found to improve by applying image enhancement by super-resolution prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% when applying the proposed method, which involves super-resolution and character segmentation followed by CRNN with CTC loss. The efficiency of the proposed method suggests that further pre-processing of chart image text and other small text images will improve the accuracy further, thereby helping text extraction from chart images.
Keywords: small text extraction, OCR, scene text recognition, CRNN
Procedia PDF Downloads 125
13522 Intellectual Property Rights (IPR) in the Relations among Nations: Towards a Renewed Hegemony or Not
Authors: Raju K. Thadikkaran
Abstract:
Introduction: IPR have come to the centre stage of development discourse today for a variety of reasons, ranging from arbitrariness in enforcement, overlap and mismatch with various international agreements and conventions, and divergence in definition, nature, content and duration, to severe adverse consequences for technologically weak developing countries. In turn, IPR have acquired prominence in foreign policy making as well as in the relations among nations. Quite naturally, there is ample scope for an examination of the correlation between technology, IPR and international relations in the contemporary world. Nature and Scope: A cursory examination of the realm of IPR and its protection reveals the acute divergence that exists in perspectives on all matters related to its very definition, nature, content, scope and duration. Proponents of stronger protection, mostly technologically advanced countries, insist on a stringent IP regime, whereas technologically weak developing countries advocate flexibilities. From the perspective of developing countries like India, one of the most crucial concerns is the patenting of life forms and the protection of TK and BD. There have been several instances of bio-piracy and bio-prospecting of resources related to BD and TK from the bio-rich Global South. It is widely argued that many provisions in the TRIPS Agreement are capable of offsetting welcome provisions in the CBD, such as Access and Benefit Sharing and Prior Informed Consent. The point being argued is how the mismatch between the provisions of the TRIPS Agreement and the CBD could be addressed in a healthy manner, so that the essential minimum legitimate interests of all stakeholders are secured, thereby introducing a new direction to international relations.
The findings of this study reveal that the challenges raised by the TRIPS regime outweigh the opportunities. The mismatch in provisions has generated crucial issues such as bio-piracy and bio-prospecting. However, there is ample scope for managing and protecting IP through institutional innovation and legislative, executive and administrative initiative at the global, national and regional levels. The Indian experience is quite reflective of this, and efforts are being made through the new national IPR policy. This paper, employing the historical-analytical method, has three sections. The first section traces the correlation between technology, IPR and international relations. The second section reviews the issues and potential concerns in the protection and management of IP related to BD and TK in developing countries in the wake of TRIPS and the CBD. The final section analyzes the Indian experience in this regard, and the experience of bio-rich Kerala in particular.
Keywords: IPR, technology and international relations, bio-diversity, traditional knowledge
Procedia PDF Downloads 375
13521 Examination of the Satisfaction Levels of Pre-Service Teachers Concerning E-Learning Process in Terms of Different Variables
Authors: Agah Tugrul Korucu
Abstract:
Technological change in the 21st century has brought significant improvements both to the body of information and to the use of technology in the field of education. It is mainly the job of teachers and pre-service teachers to integrate information and communication technologies into education by conveying the use of technology to individuals. While pre-service teachers are conducting lessons using technology, the methods they have developed are important factors for the requirements of the lesson and for the satisfaction levels of the students. The aim of this study is to examine the satisfaction levels of pre-service teachers regarding e-learning, in a technological environment in which lesson activities are conducted through an online learning environment, in terms of various variables. The study group of the research is composed of 156 pre-service teachers who were students in the departments of Computer and Teaching Technologies, Art Teaching and Pre-school Teaching in the 2014-2015 academic year. The quantitative research method was adopted for this study; the survey model was employed in collecting the data. "The Satisfaction Scale regarding the E-learning Process", developed by Gülbahar, and a personal information form developed by the researcher were used as data collection instruments. The Cronbach α reliability coefficient, the internal consistency coefficient of the scale, is 0.91. The SPSS statistical package and the techniques of mean, standard deviation, percentage, correlation, t-test and variance analysis were used in the analysis of the data.
Keywords: online learning environment, integration of information technologies, e-learning, e-learning satisfaction, pre-service teachers
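The reported Cronbach α internal consistency coefficient can be sketched as a minimal computation: the sum of the item score variances versus the variance of the respondents' total scores. The toy item scores below are illustrative, not the study's data:

```python
def cronbach_alpha(item_scores):
    # item_scores: one list per scale item, each holding one score per
    # respondent. alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[r] for item in item_scores) for r in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly consistent items over three respondents -> alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])
```

Real scale analyses (as with SPSS here) also report item-total correlations and alpha-if-item-deleted alongside the overall coefficient.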
Procedia PDF Downloads 352
13520 Technology Assessment: Exploring Possibilities to Encounter Problems Faced by Intellectual Property through Blockchain
Authors: M. Ismail, E. Grifell-Tatjé, A. Paz
Abstract:
The significant discussion of blockchain as a solution to the issues of intellectual property highlights the relevance this topic holds. Some experts label the technology disruptive, since it holds immense potential to change the course of traditional practices. The extent and areas in which this technology can be of use are still being researched. This paper provides an in-depth review of intellectual property and blockchain technology. Further, it explores what makes blockchain suitable for intellectual property, the practical solutions available, and the support different governments are offering. The paper also studies the framework of universities in the context of their outputs and how these can be streamlined using blockchain technology. It concludes by discussing some limitations and future research questions.
Keywords: blockchain, decentralization, open innovation, intellectual property, patents, university-industry relationship
Procedia PDF Downloads 185
13519 Global Indicators of Successful Remote Monitoring Adoption Applying Diffusion of Innovation Theory
Authors: Danika Tynes
Abstract:
Innovations in technology have implications for sustainable development in health and wellness. Remote monitoring is one innovation for which the evidence base has grown to support its viability as a quality healthcare delivery adjunct. This research reviews global data on telehealth adoption, in particular remote monitoring, and the conditions under which its success becomes more likely. System-level indicators were selected to represent four constructs of DoI theory (relative advantage, compatibility, complexity, and observability) and assessed against five types of telehealth (teleradiology, teledermatology, telepathology, telepsychology, and remote monitoring) using ordinal logistic regression. Analyses include data from 84 countries, as extracted from the World Health Organization, World Bank, ICT (Information and Communications Technology) Index, and HDI (Human Development Index) datasets. Analyses supported relative advantage and compatibility as the strongest influencers of remote monitoring adoption. Findings from this research may help focus the allocation of resources, as a sustainability concern, through consideration of system-level factors that may influence the success of remote monitoring adoption.
Keywords: remote monitoring, diffusion of innovation, telehealth, digital health
Procedia PDF Downloads 133
13518 Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain Computer Interface Methods
Authors: Bayar Shahab
Abstract:
The fast development of technology has advanced neuroscience and human interaction with computers, enabling solutions to problems of this new era at a pace unlike any other time in history. The brain-computer interface, so-called BCI, has opened the door to several new research areas and has provided solutions to critical and important issues: supporting a paralyzed patient in interacting with the outside world, controlling a robot arm, playing games in VR with the brain, driving a wheelchair or even a car, and, through neurotechnology, rehabilitating lost memory, among others. This review presents state-of-the-art methods and improvements of canonical correlation analysis (CCA), an SSVEP-based BCI method. These are the methods used to extract EEG signal features, in other words, the features of interest in EEG analysis. Each method, from oldest to newest, is discussed, comparing its advantages and disadvantages. This creates context and helps researchers understand the most state-of-the-art methods available in this field, with their pros and cons, along with their mathematical representations and usage. This work makes a vital contribution to the existing field of study. It differs from other similar recently published works by providing the following: (1) stating most of the prominent methods used in this field in a hierarchical way, (2) explaining the pros, cons, and performance of each method, and (3) presenting the gaps that remain at the end of each method, which can open doors to new research and improvements.
Keywords: BCI, CCA, SSVEP, EEG
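A deliberately simplified sketch of SSVEP frequency detection: full CCA finds the multichannel projection that maximizes correlation with a set of sine/cosine references per candidate frequency, but the single-channel toy below just scores each candidate by its strongest Pearson correlation with a sine or cosine reference. The sampling rate and frequencies are illustrative, and this is not a substitute for the CCA variants the review covers:

```python
import math

def pearson(x, y):
    # Pearson product-moment correlation of two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def detect_ssvep_frequency(signal, fs, candidate_freqs):
    # Score each candidate stimulus frequency by the strongest absolute
    # correlation between the signal and sine/cosine references at that
    # frequency (phase-insensitive thanks to the two references).
    t = [i / fs for i in range(len(signal))]
    best_freq, best_score = None, -1.0
    for f in candidate_freqs:
        sin_ref = [math.sin(2 * math.pi * f * ti) for ti in t]
        cos_ref = [math.cos(2 * math.pi * f * ti) for ti in t]
        score = max(abs(pearson(signal, sin_ref)),
                    abs(pearson(signal, cos_ref)))
        if score > best_score:
            best_freq, best_score = f, score
    return best_freq

# One second of a phase-shifted 10 Hz "SSVEP response" at 250 Hz.
fs = 250
eeg = [math.sin(2 * math.pi * 10.0 * i / fs + 0.4) for i in range(fs)]
detected = detect_ssvep_frequency(eeg, fs, [8.0, 10.0, 12.0])
```

Full CCA replaces the per-reference correlation with the canonical correlation between the multichannel EEG block and the whole reference matrix, usually including harmonics.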
Procedia PDF Downloads 145
13517 Process Flows and Risk Analysis for the Global E-SMC
Authors: Taeho Park, Ming Zhou, Sangryul Shim
Abstract:
With the emergence of the global economy, today’s business environment is more competitive than ever before, and many supply chain (SC) strategies and operations have been significantly altered over the past decade to overcome the greater complexity and risk imposed on global business. First, offshoring and outsourcing are increasingly adopted as operational strategies, and manufacturing continues to move to better locations to enhance competitiveness. Second, international operations are a challenge to a company’s SC system. Third, the products traded in the SC system are not just physical goods but also digital goods (e.g., software, e-books, music, video materials). Three main flows are involved in fulfilling the activities in the SC system: the physical flow, the information flow, and the financial flow. Advances in the Internet and electronic communication technologies have enabled companies to perform SC activities in electronic formats, resulting in the advent of the electronic supply chain management (e-SCM) system. A SC system for digital goods is somewhat different from one for physical goods, yet it involves many similar or identical SC activities and flows. For example, as in the production of physical goods, many third parties are involved in producing digital goods, supplying components and even final products. This research aims at identifying the process flows of both physical and digital goods in a SC system, and then investigating all the risk elements involved in the physical, information, and financial flows during the fulfilment of SC activities. There are many risks inherent in the e-SCM system. Some risks may have a severe impact on a company’s business, and some occur frequently but are not detrimental enough to jeopardize a company.
Thus, companies should assess the impact and frequency of those risks and prioritize them in terms of severity, frequency, budget, and time so that they can be carefully managed. We grouped the risks involved in the global trading of physical and digital goods into four categories: environmental risk, strategic risk, technological risk, and operational risk. The significance of those risks was then investigated through a survey, which asked companies about the frequency and severity of the identified risks and whether they had faced those risks in the past. Since the characteristics and supply chain flows of digital goods vary by industry and country, it is more meaningful and useful to analyze risks by industry and country. To this end, more data should be collected in each industry sector and country, which could be accomplished in future research.
Keywords: digital goods, e-SCM, risk analysis, supply chain flows
Procedia PDF Downloads 422
13516 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models
Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel
Abstract:
In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction with image alignment methods was also used to obtain 3D images of whole large sponge and lymph node samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of cell spatial distribution in spheroid models makes it possible to detect interactions between cells and to identify invasion hierarchy and guidance patterns. Global measurements such as the volume, length, and density of lymphatic vessels are taken in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structure complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions, and the comparison of these conditions makes it possible to identify lymphangiogenic agents and to better understand their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids
Procedia PDF Downloads 378
13515 A Multi Cordic Architecture on FPGA Platform
Authors: Ahmed Madian, Muaz Aljarhi
Abstract:
The Coordinate Rotation Digital Computer (CORDIC) is a unique digital computing unit intended for the computation of mathematical operations and functions. This paper presents a multi-CORDIC processor that integrates different CORDIC architectures on a single FPGA chip and allows the user to select the CORDIC architecture to use based on what he or she wants to calculate. Synthesis results show that the radix-2 CORDIC has the lowest clock delay, the radix-8 CORDIC has the highest LUT usage and the lowest register usage, while the hybrid radix-4 CORDIC has the highest clock delay.
Keywords: multi, CORDIC, FPGA, processor
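The rotation-mode CORDIC iteration that such architectures implement in hardware can be sketched in software: shift-and-add micro-rotations by the angles atan(2⁻ⁱ), with the accumulated gain compensated at the end. The iteration count is illustrative; a hardware radix-2 implementation would use fixed-point arithmetic and actual bit shifts rather than floats:

```python
import math

def cordic_sin_cos(theta, iterations=32):
    # Rotation-mode radix-2 CORDIC. Valid for |theta| <= ~1.74 rad
    # (the sum of all atan(2^-i) micro-rotation angles).
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Accumulated gain of the micro-rotations; compensated at the end.
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        # Rotate toward zero residual angle; each step only needs
        # shifts (multiply by 2^-i) and adds in hardware.
        d = 1.0 if z >= 0 else -1.0
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * angles[i])
    return x * K, y * K  # (cos(theta), sin(theta))

c, s = cordic_sin_cos(0.5)
```

The vectoring mode of the same iteration yields magnitude and arctangent, which is why one CORDIC core can serve many functions.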
Procedia PDF Downloads 470
13514 A Cross-Sectional Study on the Correlation between Body Mass Index and Self-Esteem among Children Ages 9-12 Years Old in a Public Elementary School in Makati, Philippines
Authors: Jerickson Abbie Flores, Jana Fragante, Jan Paolo Dipasupil, Jan Jorge Francisco
Abstract:
Malnutrition is one of the most rapidly growing health problems affecting the world at present. Affected children are not only at risk of significant health problems but also face psychological and social consequences, including low self-esteem. School-age children are particularly vulnerable to developing poor self-esteem, especially when their peers find them physically unattractive. Thus, malnutrition, whether obesity or undernourishment, plays a significant role in a developing child's health and behavior. This research aims to determine whether there is a significant difference in the level of self-esteem between Filipino children ages 9-12 years old with an abnormal body mass index (BMI) and those with a desirable BMI. Using a cross-sectional study design, the correlation between BMI and self-esteem was observed among children ages 9-12 years old. Participants took the Hare self-esteem questionnaire, which is specifically designed to measure self-esteem in school-age children. The lowest possible score is 15 and the highest possible score is 45. A total of 1140 students ages 9-12 years old from Cembo Elementary School (a public school) participated in the study. Among the participants, 239 of the 1140 had a desirable body mass index, 878 were underweight, and 23 were overweight. Using the test questionnaire, the computed mean scores were 36.599, 36.045, and 36.583 for the normal, underweight, and overweight categories, respectively. Using Pearson's correlation test and Spearman's correlation coefficient test, the study showed a positive correlation (p-values of 0.047 and 0.004, respectively) between BMI and self-esteem scores, which indicates that the higher the BMI, the higher the self-esteem of the participants.
Keywords: body mass index, malnutrition, school-age children, self-esteem
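As an illustration of the statistic behind the reported result, Pearson's product-moment correlation can be sketched in a few lines of Python (the study itself presumably used a statistics package; this is only the textbook formula):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples:
    covariance of (x, y) divided by the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Spearman's coefficient, the second test used above, is simply Pearson's r applied to the ranks of the data rather than the raw values, which makes it robust to monotone but non-linear relationships.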
Procedia PDF Downloads 280
13513 Household Size and Poverty Rate: Evidence from Nepal
Authors: Basan Shrestha
Abstract:
The relationship between household size and poverty is not well understood. Followers of Malthus argue that a growing population adds pressure to a dwindling resource base through increasing demand, which leads to poverty. Others claim that bigger households are richer due to the availability of household labour for income-generating activities. Facts from Nepal were analyzed to examine the relationship between household size and poverty rate. The analysis of data from 3,968 Village Development Committees (VDCs)/municipalities located in the 75 districts of all five development regions revealed that average household size had a moderate positive correlation with the poverty rate (Karl Pearson's correlation coefficient = 0.44). In a regression analysis, household size explained 20% of the variation in the poverty rate. A higher positive correlation was observed in eastern Nepal (Karl Pearson's correlation coefficient = 0.66), where the regression analysis showed that household size explained 43% of the variation in the poverty rate. The relationship was weak in the far west, possibly because the incidence of poverty there was high irrespective of household size. Overall, the facts revealed that bigger households were relatively poorer. With the increasing level of awareness and interventions for family planning, it is anticipated that household size will decrease, leading to a decreased poverty rate. In addition, the government needs to devise a mechanism to create employment opportunities for the household labour force to reduce poverty.
Keywords: household size, poverty rate, Nepal, regional development
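The link between the reported correlation coefficients and the "variation explained" figures is the identity R² = r² for simple linear regression: 0.44² ≈ 0.19 matches the ~20% figure, and 0.66² ≈ 0.44 matches the 43% figure for the east. A minimal least-squares sketch in Python illustrates the identity on synthetic data (illustrative only; the study's data are not reproduced here):

```python
def ols_r_squared(xs, ys):
    """Fit y = a + b*x by ordinary least squares and return (b, R^2),
    where R^2 = 1 - SS_residual / SS_total is the fraction of the
    variation in y explained by x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return b, 1.0 - ss_res / ss_tot
```

For the toy data below, Pearson's r is 0.8 and the fitted R² comes out to exactly 0.8² = 0.64, mirroring how 0.44 and 20% fit together in the abstract.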
Procedia PDF Downloads 361
13512 Single Chip Controller Design for Piezoelectric Actuators with Mixed Signal FPGA
Authors: Han-Bin Park, Taesam Kang, SunKi Hong, Jeong Hoi Gu
Abstract:
Piezoelectric materials are widely used for actuators due to their large power density and simple structure. A piezoelectric actuator can generate a larger force than conventional actuators of the same size. Furthermore, its response time is very short, so it can be used in very fast, compact system applications. To control a piezoelectric actuator, we need analog signal conditioning circuits as well as digital microcontrollers. Conventional microcontrollers are not equipped with analog parts, and thus the control system becomes bulky compared with the small size of the piezoelectric devices. To overcome these weaknesses, we are developing a single-chip microcontroller that can handle analog and digital signals simultaneously using mixed-signal FPGA technology. We used the SmartFusion™ FPGA device, which integrates an ARM® Cortex-M3, an analog interface, and FPGA fabric in a single chip and offers full customization. It gives more flexibility than traditional fixed-function microcontrollers without the excessive cost of soft processor cores on traditional FPGAs. In this paper, we introduce the design of a single-chip controller using the mixed-signal FPGA SmartFusion™ [1] device. To demonstrate its performance, we implemented a PI controller for the power driving circuit and a 5th-order H-infinity controller for the system with the piezoelectric actuator in the FPGA fabric. We also demonstrate the regulation of the power output and the operation speed of the 5th-order H-infinity controller.
Keywords: mixed signal FPGA, PI control, piezoelectric actuator, SmartFusion™
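The PI controller mentioned above has a very small computational core, which is what makes it attractive for an FPGA-fabric implementation. A minimal discrete-time sketch in Python (gains, sample time, and the first-order plant below are illustrative assumptions, not the paper's actual power driving circuit):

```python
def make_pi(kp, ki, dt):
    """Discrete PI controller: u[k] = kp*e[k] + ki * sum(e)*dt.
    Returns a stateful step function error -> control output."""
    state = {"integral": 0.0}
    def step(error):
        state["integral"] += error * dt   # accumulate the integral term
        return kp * error + ki * state["integral"]
    return step

# Closed-loop demo against an assumed first-order plant
# dy/dt = (u - y)/tau, integrated with forward Euler.
pi = make_pi(kp=2.0, ki=5.0, dt=0.01)
y, tau, dt = 0.0, 0.1, 0.01
for _ in range(2000):                     # 20 s of simulated time
    u = pi(1.0 - y)                       # track a unit setpoint
    y += (dt / tau) * (u - y)
```

The integral term is what drives the steady-state error to zero; a hardware version would typically use fixed-point accumulators in the fabric instead of floats.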
Procedia PDF Downloads 520
13511 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning
Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho
Abstract:
Image super-resolution is a common computer vision problem with many important applications. Generative adversarial networks (GANs) have driven remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to deploy on resource-constrained devices. To address this problem, this paper introduces an optimized and highly efficient architecture for the SR-GAN (generator) model using model compression techniques, namely knowledge distillation and pruning, which work together to reduce the storage requirements of the model while also improving its performance. Our method begins by distilling knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning
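The pruning step removes the smallest-magnitude weights. A one-shot version of that criterion can be sketched in plain Python (real implementations operate on framework tensors, often layer by layer and iteratively with fine-tuning between rounds, as the abstract describes):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value (one-shot global magnitude pruning)."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)          # number of weights to remove
    threshold = flat[k - 1] if k > 0 else -1.0
    # keep a weight only if its magnitude exceeds the cut-off
    return [w if abs(w) > threshold else 0.0 for w in weights]
```

Storing only the surviving non-zero weights (plus a sparse index) is where the additional size reduction reported above comes from.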
Procedia PDF Downloads 96
13510 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported in exchange for valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and of the disease types cured by these plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types they cure before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process, yet few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for the efficient identification of aromatic medicinal plant parts together with their corresponding disease types. The objective of the proposed study is to classify aromatic medicinal plant parts and their disease types using computer vision technology; accordingly, this research developed a model for that classification task. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used plant parts, besides roots, flowers, fruits, and latex.
For this study, the researchers used RGB leaf images with a size of 128x128x3. Five state-of-the-art models were trained: a convolutional neural network, Inception V3, Residual Neural Network, MobileNet, and Visual Geometry Group (VGG). These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively.
Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
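The evaluation protocol above (80/20 split plus classification metrics) is framework-agnostic. A minimal sketch of the split-and-score step in plain Python, with `predict` standing in for any trained model (the helper name and seed are illustrative, not from the study):

```python
import random

def split_and_score(samples, labels, predict, test_frac=0.2, seed=0):
    """Shuffle the data, hold out `test_frac` of it, and report the
    accuracy of `predict` (any callable sample -> label) on the
    held-out portion."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)       # deterministic shuffle
    n_test = int(len(idx) * test_frac)
    test_idx = idx[:n_test]                # the 20% evaluation set
    correct = sum(1 for i in test_idx if predict(samples[i]) == labels[i])
    return correct / n_test
```

In practice the remaining 80% is what the five networks above are trained on; accuracy is then complemented with per-class metrics such as precision and recall when classes are imbalanced.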
Procedia PDF Downloads 186
13509 The Use of Modern Technology to Enhance English Language Teaching and Learning: An Analysis
Authors: Fazilet Alachaher (Benzerdjeb)
Abstract:
From the chalkboard to the abacus and beyond, technology has always played an important role in education. Educational technology refers to any teaching tool that helps support learning, and given the rapid advancements in information technology and multimedia applications, the potential to support the teaching of foreign languages in our universities is ever greater. In language teaching and learning, we have a lot to choose from in the world of technology: TV, CDs, DVDs, computers, the Internet, email, and blogs. The use of modern technologies can enrich the experience of learning a foreign language because they provide features that are not present in traditional technology. They can offer a wide range of multimedia resources, opportunities for intensive one-to-one learning in language labs, and access to authentic materials, which can be motivating to both students and teachers. The advent of Information and Communication Technology (ICT) and online interaction can also open up a new range of self-access and distance learning opportunities. The last two decades have witnessed a revolution due to the onset of technology, which has changed the dynamics of various industries and influenced the way people live and work in society. That is why using multimedia to create a certain context in which to teach English has its unique advantages. This paper therefore analyses the value of multimedia technology in language teaching and brings out the problems faced in using these technologies. It also aims at making English teachers aware of strategies for using it in an effective manner.
Keywords: English teaching strategies, multimedia technology, advantages, disadvantages, English learning
Procedia PDF Downloads 463
13508 System Identification and Controller Design for a DC Electrical Motor
Authors: Armel Asongu Nkembi, Ahmad Fawad
Abstract:
The aim of this paper is to determine, in a concise way, the transfer function that characterizes a DC electrical motor with a helix. In practice, it can be obtained by applying a particular input to the system and then, based on the observation of its output, determining an approximation to the transfer function of the system. In our case, we use a step input and find the transfer function parameters that reproduce the simulated first-order time response. The simulation of the system is done using MATLAB/Simulink. In order to determine the parameters, we assume a first-order system and use the Broida approximation to determine the parameters and then its mean square error (MSE). Furthermore, we design a PID controller for the control process, first in the continuous time domain, and tune it using the Ziegler-Nichols open-loop method. We then digitize the controller to obtain a digital controller, since most systems are implemented using computers, which are digital in nature.
Keywords: transfer function, step input, MATLAB, Simulink, DC electrical motor, PID controller, open-loop process, mean square error, digital controller, Ziegler-Nichols
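The Broida approximation fits a first-order-plus-dead-time model K·e^(-Ls)/(Ts+1) by reading two points off the recorded step response, commonly the times at which the output reaches 28% and 40% of its final value. A sketch of that identification step in Python (the 28%/40% formulas are the commonly cited Broida relations; the paper's MATLAB workflow is not reproduced here):

```python
def broida_identify(t, y, u_step=1.0):
    """Broida first-order-plus-dead-time fit from a recorded step response.
    Reads the times t1, t2 at which y reaches 28% and 40% of its final
    value, then applies T = 5.5*(t2 - t1) and L = 2.8*t1 - 1.8*t2."""
    y_final = y[-1]
    gain = y_final / u_step
    t1 = next(ti for ti, yi in zip(t, y) if yi >= 0.28 * y_final)
    t2 = next(ti for ti, yi in zip(t, y) if yi >= 0.40 * y_final)
    time_constant = 5.5 * (t2 - t1)
    dead_time = 2.8 * t1 - 1.8 * t2
    return gain, time_constant, dead_time
```

On a pure first-order response (no dead time), the fit recovers the true time constant and a dead time near zero; the recovered (K, T, L) then feed directly into the Ziegler-Nichols open-loop tuning rules mentioned above.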
Procedia PDF Downloads 55
13507 Spatial Correlation of Channel State Information in Real Long Range Measurement
Authors: Ahmed Abdelghany, Bernard Uguen, Christophe Moy, Dominique Lemur
Abstract:
The Internet of Things (IoT) is being developed to ensure monitoring and connectivity within different applications. Thus, it is critical to study channel propagation characteristics in Low Power Wide Area Networks (LPWAN), especially Long Range Wide Area Network (LoRaWAN). In this paper, an in-depth investigation of the reciprocity between the uplink and downlink Channel State Information (CSI) is carried out by performing an outdoor measurement campaign in the area of Campus Beaulieu in Rennes. At each location, the CSI reciprocity is quantified using the Pearson Correlation Coefficient (PCC), which shows a very high linear correlation between the uplink and downlink CSI. This reciprocity feature could be utilized for physical layer security between the node and the gateway. On the other hand, most of the CSI shapes from different locations are highly uncorrelated with each other. Hence, it can be anticipated that significant localization gains could be achieved by utilizing frequency hopping in LoRa systems to get access to a wider band.
Keywords: IoT, LPWAN, LoRa, effective signal power, onsite measurement
Procedia PDF Downloads 162
13506 Digital Transformation: Actionable Insights to Optimize the Building Performance
Authors: Jovian Cheung, Thomas Kwok, Victor Wong
Abstract:
Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By bringing the advancement of technology as well as energy- and operation-efficient initiatives into buildings, people are enabled to raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that allows the city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system is developed for transforming data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building along with actionable insights, which enable building performance to be optimized by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking building performance, and prioritizing assets and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, the Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure occurs. The actionable insights collected can also be used for system design enhancement in the future.
This paper will illustrate how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, together with actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city to be admired.
Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance
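One simple way to "identify abnormal system performance and alert users", as described above, is a rolling z-score over a stream of meter readings. This sketch in Python is an illustrative stand-in for whatever big-data pipeline the system actually uses (window size and threshold are assumptions):

```python
import math

def rolling_zscore_alerts(readings, window=24, threshold=3.0):
    """Flag indices whose reading deviates from the trailing-window mean
    by more than `threshold` standard deviations, e.g. an hourly energy
    meter checked against its previous 24 samples."""
    alerts = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mean = sum(hist) / window
        var = sum((x - mean) ** 2 for x in hist) / window
        std = math.sqrt(var)
        if std > 0 and abs(readings[i] - mean) > threshold * std:
            alerts.append(i)
    return alerts
```

Flagged indices would feed the alerting and prioritization steps; predicting failure trends would layer a forecast model on top of the same stream.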
Procedia PDF Downloads 157
13505 Systematic Review of Technology-Based Mental Health Solutions for Modelling in Low and Middle Income Countries
Authors: Mukondi Esther Nethavhakone
Abstract:
In 2020, the World Health Organization declared the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) outbreak, also known as the coronavirus disease 2019 (COVID-19) pandemic. To curb or contain the spread of the novel coronavirus, governments around the world implemented social distancing and lockdown regulations. Subsequently, it was no longer business as usual; life as we knew it had changed, and many aspects of people's lives were negatively affected, including financial and employment stability. This was mainly because companies and businesses had to put their operations on hold, and some had to shut down completely, resulting in the loss of income for many people globally. Financial and employment insecurities are among the issues that exacerbated many social problems the world was already facing, such as school drop-outs, teenage pregnancies, sexual assaults, gender-based violence, crime, child abuse, and elderly abuse, to name a few. Expectedly, the mental health of the majority of the population was threatened, which resulted in an increased number of people seeking mental healthcare services. The increasing need for mental healthcare services in low- and middle-income countries proves to be a challenge because, due to financial constraints and less well-established healthcare systems, mental healthcare provision is not prioritised as highly as primary healthcare in these countries. It is against this backdrop that the researcher seeks to find viable, cost-effective, and accessible mental health solutions for low- and middle-income countries amid the pressures of any pandemic. The researcher will undertake a systematic review of the technology-based mental health solutions that have been implemented or adopted by developed countries during the COVID-19 lockdown and social distancing periods.
This systematic review study aims to determine whether low- and middle-income countries can adopt cost-effective versions of digital mental health solutions so that their healthcare systems can adequately provide mental healthcare services during critical times such as pandemics, when there is an overwhelming decline in mental health globally. The researcher will undertake a systematic review using mixed methods, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The mixed-methods approach combines findings from both qualitative and quantitative studies in one review. Conducting this kind of study using mixed methods is beneficial because the topic is a public health one that involves social interventions and is not purely based on medical interventions. Therefore, meta-ethnographic (qualitative) analysis will be crucial in understanding why and which digital methods work, and for whom, rather than only meta-analysis (quantitative) establishing which digital mental health methods work. The data collection process will be extensive, involving the development of a database, a table summarising the evidence/findings, and a quality assessment process. Lastly, the researcher will ensure that ethical procedures are followed and adhered to, ensuring that sensitive data is protected and the study does not pose any harm to the participants.
Keywords: digital, mental health, COVID, low and middle-income countries
Procedia PDF Downloads 95
13504 A Study of Common Carotid Artery Behavior from B-Mode Ultrasound Image for Different Gender and BMI Categories
Authors: Nabilah Ibrahim, Khaliza Musa
Abstract:
An increase in intima-media thickness (IMT), which involves changes in the diameter of the carotid artery, is one of the early signs of an atherosclerotic lesion. Manual measurement of arterial diameter is time-consuming and lacks reproducibility. Thus, this study reports an automatic approach to characterizing arterial diameter behavior for different gender and body mass index (BMI) categories, focusing on a tracked region. The BMI categories are underweight, normal, and overweight. Canny edge detection is applied to the B-mode image to extract the important information treated as the carotid wall boundary. The results show a significant difference in arterial diameter between the male and female groups, amounting to 2.5%. In addition, across the BMI categories, the significant result is a decrease in arterial diameter in proportion to BMI.
Keywords: B-mode ultrasound image, carotid artery diameter, Canny edge detection, body mass index
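Canny edge detection, used here to locate the carotid wall boundary, is built on a gradient-magnitude stage (followed by non-maximum suppression and hysteresis thresholding, omitted here for brevity). As a self-contained illustration, that gradient stage can be sketched with 3x3 Sobel kernels in plain Python; a real pipeline would use a library implementation such as OpenCV's:

```python
def sobel_edge_strength(img):
    """Gradient magnitude via 3x3 Sobel kernels, the gradient stage that
    Canny edge detection builds on. `img` is a 2D list of grey levels;
    border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = (img[r-1][c+1] + 2*img[r][c+1] + img[r+1][c+1]
                  - img[r-1][c-1] - 2*img[r][c-1] - img[r+1][c-1])
            gy = (img[r+1][c-1] + 2*img[r+1][c] + img[r+1][c+1]
                  - img[r-1][c-1] - 2*img[r-1][c] - img[r-1][c+1])
            out[r][c] = (gx * gx + gy * gy) ** 0.5
    return out
```

On a B-mode image, strong responses along the near and far vessel walls are what the diameter measurement tracks; the gap between the two detected boundaries gives the arterial diameter.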
Procedia PDF Downloads 444
13503 Adversarial Attacks and Defenses on Deep Neural Networks
Authors: Jonathan Sohn
Abstract:
Deep neural networks (DNNs) have shown state-of-the-art performance in many applications, including computer vision, natural language processing, and speech recognition. Recently, adversarial attacks have been studied in the context of deep neural networks; these attacks aim to alter the results of a DNN by slightly modifying its inputs. For example, an adversarial attack on a DNN used for object detection can cause the DNN to miss certain objects. As a result, the reliability of DNNs is undermined by their lack of robustness against adversarial attacks, raising concerns about their use in safety-critical applications such as autonomous driving. In this paper, we focus on studying adversarial attacks and defenses on DNNs for image classification. Two types of adversarial attacks are studied: the fast gradient sign method (FGSM) attack and the projected gradient descent (PGD) attack. A DNN forms decision boundaries that separate the input images into different categories. An adversarial attack slightly alters an image to move it over a decision boundary, causing the DNN to misclassify it. The FGSM attack obtains the gradient of the loss with respect to the image and updates the image once, based on that gradient, to cross the decision boundary. The PGD attack, instead of taking one big step, repeatedly modifies the input image with multiple small steps. There is also a further variant called a targeted attack, which is designed to make the model classify an image into a class chosen by the attacker. We can defend against adversarial attacks by incorporating adversarial examples in training: instead of training the neural network with clean examples only, we explicitly let it learn from adversarial examples. In our experiments, the digit recognition accuracy on the MNIST dataset drops from 97.81% to 39.50% and 34.01% when the DNN is attacked by FGSM and PGD attacks, respectively.
If we use FGSM training as a defense, the classification accuracy improves greatly, from 39.50% to 92.31% under FGSM attacks and from 34.01% to 75.63% under PGD attacks. To further improve classification accuracy under adversarial attacks, we can also use the stronger PGD training method. PGD training improves accuracy by 2.7% under FGSM attacks and 18.4% under PGD attacks over FGSM training. It is worth mentioning that neither FGSM nor PGD training affects the accuracy on clean images. In summary, we find that PGD attacks can greatly degrade the performance of DNNs, and PGD training is a very effective way to defend against such attacks. PGD attacks and defenses are overall significantly more effective than their FGSM counterparts.
Keywords: deep neural network, adversarial attack, adversarial defense, adversarial machine learning
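FGSM's single-step update, x_adv = x + eps * sign(dLoss/dx), can be illustrated end to end on the smallest possible "network": a logistic regression unit, where the input gradient of the cross-entropy loss has the closed form (p - y) * w. This is a toy stand-in for the MNIST DNN in the experiments, not the paper's setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM on a logistic-regression unit: perturb each component of x
    by eps in the direction of the sign of the loss gradient w.r.t. the
    input. For cross-entropy loss, d(loss)/dx = (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [xi + eps * (1 if (p - y) * wi >= 0 else -1)
            for wi, xi in zip(w, x)]
```

With w = (1, 1), b = 0, a clean input (0.3, 0.3) with label 1 is classified correctly (p > 0.5), but one FGSM step of eps = 0.5 pushes it across the decision boundary. PGD would instead repeat this update with a small eps, projecting back into an allowed perturbation ball after each step.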
Procedia PDF Downloads 195