Search results for: artificial neural network modeling
918 Overall Assessment of Human Research and Ethics Committees in the United Arab Emirates
Authors: Mahera Abdulrahman, Satish Chandrasekhar Nair
Abstract:
Growing demand for human health research in the United Arab Emirates (UAE) has prompted the need to develop robust research ethics oversight, particularly given the large unskilled-worker immigrant population and the elderly citizens utilizing health services. An examination of the structure, function, practices, and outcomes of the human research ethics committees (HRECs) was conducted using two reliable and validated survey instruments. Results indicate that, in the absence of a national ethics regulatory body, the individual emirates governed the 21 HRECs covering health facilities and academic institutions in the UAE. Among the HRECs, 86% followed International Council for Harmonization-Good Clinical Practice guidelines, 57% had been in operation for more than five years, 81% reviewed proposals within eight weeks, 48% reviewed for clinical and scientific merit apart from ethics, and 43% handled more than 50 research proposals per year. However, researcher recognition, funding transparency, and adverse event reporting systems were in place in fewer than one-third of all HRECs. Surprisingly, intellectual property rights were not included as a research output. Research was incorporated into the vision and mission statements of many (62%) organizations, and mechanisms such as research publications, collaborations, and recognitions were employed as key performance indicators to measure research output. Despite this, resources to generate research output, such as a dedicated budget (19%), support staff (19%), and continuous training and mentoring programs for medical residents and HREC members, were lacking. Although HREC structure and operations in the UAE are similar to those in other regions of the world, resource allocation for efficient quality monitoring, continuous training, and the creation of a clinical research network are needed to strengthen the clinical research enterprise so it can scale up for the future.
It is anticipated that the results of this study will benefit investigators, regulators, pharmaceutical sponsors, and policy makers in the region.
Keywords: institutional review board, ethics committee, human research ethics, United Arab Emirates (UAE)
Procedia PDF Downloads 224
917 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Rather than presenting plain information, classifying the different aspects of browsing, such as Bookmarks, History, and the Download Manager, into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification is also more useful for personalization, reduces dependency on cloud connectivity, and offers better privacy and security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which the content provider customizes based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual, and secure, and cannot be replicated. The proposal extracts different features of the webpage and runs them through an algorithm that classifies the page into multiple categories. A Naive Bayes based engine is chosen for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines or Neural Networks: Naive Bayes classification requires a small memory footprint and little computation, which suits the smartphone environment.
The solution can also partition the model into multiple chunks, which reduces memory usage compared to loading a complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy efficient than standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware, integrated into the Tizen Browser, which uses the Chromium rendering engine. An extensive dataset was sourced from dmoztools.net and cleaned; the cleaned dataset has 227.5K webpages divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with a standalone solution. 70% of the dataset was used for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification for different use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
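The core of the engine described in the abstract above, multinomial Naive Bayes with Laplace smoothing, can be sketched in a few lines. The categories echo the abstract, but the tiny documents and class names below are invented stand-ins, not the authors' dataset or code:

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing,
    illustrating the kind of lightweight on-device classifier the
    abstract describes (a sketch, not the authors' implementation)."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        # Log prior: fraction of training docs per class.
        self.prior = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.totals = {c: 0 for c in self.classes}
        self.vocab = set()
        for doc, c in zip(docs, labels):
            for w in doc.split():
                self.word_counts[c][w] += 1
                self.totals[c] += 1
                self.vocab.add(w)

    def predict(self, doc):
        v = len(self.vocab)
        best, best_score = None, float("-inf")
        for c in self.classes:
            score = self.prior[c]
            for w in doc.split():
                # Add-one smoothing keeps unseen words from zeroing the class.
                score += math.log((self.word_counts[c][w] + 1) / (self.totals[c] + v))
            if score > best_score:
                best, best_score = c, score
        return best

docs = ["goal match score league", "vaccine doctor clinic", "flight hotel beach"]
labels = ["sports", "health", "travel"]
clf = TinyNaiveBayes()
clf.fit(docs, labels)
print(clf.predict("doctor clinic appointment"))  # -> health
```

The log-domain scoring and tiny per-class count tables are what give Naive Bayes its small memory footprint relative to SVMs or neural networks, which is the trade-off the abstract leans on.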
Procedia PDF Downloads 163
916 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study
Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh
Abstract:
Younger and younger children now use a smartphone, a device which has become ‘a must have’ and without which children’s lives would be almost ‘unthinkable’. Devices are becoming lighter and lighter while offering an array of options and applications, as well as the unavoidable access to the Internet, without which they would be almost unusable. Taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chatting and messaging services are only some of the features offered by ‘smart’ devices. They have replaced the alarm clock, home phone, camera, tablet, and other devices, and their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has some downsides. For instance, free time was once usually spent in nature, playing, doing sports, or in other activities that enable children adequate psychophysiological growth and development. Greater usage of smartphones during classes, to check statuses on social networks, message friends, or play online games, is among the possible negative aspects of their use. Considering that the age of the population using smartphones is decreasing, and that smartphones are no longer ‘foreign’ to children of pre-school age (smartphones are used at home, or in coffee shops and shopping centers while waiting for parents, often to play video games inappropriate to their age), particular attention must be paid to a very sensitive group: teenagers, who almost never part from their ‘pets’. This paper is divided into two sections, theoretical and empirical.
The theoretical section gives an overview of the pros and cons of smartphone usage, while the empirical section presents the results of research conducted in three elementary schools regarding the usage of smartphones and, specifically, their usage during classes and breaks, to search for information on the Internet, and to check status updates and ‘likes’ on the Facebook social network.
Keywords: education, smartphone, social networks, teenagers
Procedia PDF Downloads 453
915 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study
Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu
Abstract:
Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized in order to obtain convincing calculated results that reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. Patient-specific physiological data from 5 volunteers were non-invasively collected as personalization objectives for the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted against the collected data and waveforms to determine the sensitive parameters with an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: In this study, the sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between collected and simulated data.
The average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of an LPM of the blood circulatory system. An LPM with individually optimized parameters can output individual physiological indicators, which are applicable to the numerical simulation of patient-specific hemodynamics.
Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm
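The optimization loop the abstract above describes, simulated annealing minimizing the root mean square error between simulated and collected data, can be sketched as follows. The two-parameter toy model standing in for the closed-loop LPM solver, the step size, and the cooling schedule are illustrative assumptions, not the study's values:

```python
import math
import random

def rmse(simulated, target):
    """Root mean square error, the objective function named in the abstract."""
    return math.sqrt(sum((s - t) ** 2 for s, t in zip(simulated, target)) / len(target))

def simulate(params):
    # Stand-in for the LPM solver: a toy linear model y = a*x + b
    # evaluated at fixed points (illustrative only, not hemodynamics).
    a, b = params
    return [a * x + b for x in range(10)]

def anneal(target, init, steps=500, t0=1.0, cooling=0.99):
    """Perturb the sensitive parameters each step; accept worse candidates
    with a temperature-dependent probability so the search can escape
    local minima, then cool the temperature."""
    random.seed(0)
    params, cost = list(init), rmse(simulate(init), target)
    best, best_cost = list(params), cost
    temp = t0
    for _ in range(steps):
        candidate = [p + random.gauss(0, 0.1) for p in params]
        c_cost = rmse(simulate(candidate), target)
        if c_cost < cost or random.random() < math.exp((cost - c_cost) / temp):
            params, cost = candidate, c_cost
            if cost < best_cost:
                best, best_cost = list(params), cost
        temp *= cooling
    return best, best_cost

# "Collected" objective data generated from known parameters a=2, b=1.
target = [2.0 * x + 1.0 for x in range(10)]
best, err = anneal(target, init=[0.0, 0.0])
print(len(best), round(err, 3))
```

In the study's setting, `simulate` would be the closed-loop LPM evaluation and `params` the sensitive circuit parameters identified by the sensitivity analysis.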
Procedia PDF Downloads 137
914 Cancer Burden and Policy Needs in the Democratic Republic of the Congo: A Descriptive Study
Authors: Jean Paul Muambangu Milambo, Peter Nyasulu, John Akudugu, Leonidas Ndayisaba, Joyce Tsoka-Gwegweni, Lebwaze Massamba Bienvenu, Mitshindo Mwambangu Chiro
Abstract:
In 2018, non-communicable diseases (NCDs) were responsible for 48% of deaths in the Democratic Republic of Congo (DRC), with cancer contributing to 5% of these deaths. There is a notable absence of cancer registries, capacity-building activities, budgets, and treatment roadmaps in the DRC. Current cancer estimates are primarily based on mathematical modeling with limited data from neighboring countries. This study aimed to assess cancer subtype prevalence in Kinshasa hospitals and compare these findings with WHO model estimates. Methods: A retrospective observational study was conducted from 2018 to 2020 at HJ Hospitals in Kinshasa. Data were collected using American Cancer Society (ACS) questionnaires and physician logs. Descriptive analysis was performed using STATA version 16 to estimate cancer burden and provide evidence-based recommendations. Results: The results from the chart review at HJ Hospitals in Kinshasa (2018-2020) indicate that out of 6,852 samples, approximately 11.16% were diagnosed with cancer. The distribution of cancer subtypes in this cohort was as follows: breast cancer (33.6%), prostate cancer (21.8%), colorectal cancer (9.6%), lymphoma (4.6%), and cervical cancer (4.4%). These figures are based on histopathological confirmation at the facility and may not fully represent the broader population due to potential selection biases related to geographic and financial accessibility to the hospital. In contrast, the World Health Organization (WHO) model estimates for cancer prevalence in the DRC show different proportions. According to WHO data, the distribution of cancer types is as follows: cervical cancer (15.9%), prostate cancer (15.3%), breast cancer (14.9%), liver cancer (6.8%), colorectal cancer (5.9%), and other cancers (41.2%) (WHO, 2020). Conclusion: The data indicate a rising cancer prevalence in DRC but highlight significant gaps in clinical, biomedical, and genetic cancer data. 
The establishment of a population-based cancer registry (PBCR) and a defined cancer management pathway is crucial. The current estimates are limited due to data scarcity and inconsistencies in clinical practices. There is an urgent need for multidisciplinary cancer management, integration of palliative care, and improvement in care quality based on evidence-based measures.
Keywords: cancer, risk factors, DRC, gene-environment interactions, survivors
Procedia PDF Downloads 21
913 GIS Data Governance: GIS Data Submission Process for Build-in Project, Replacement Project at Oman Electricity Transmission Company
Authors: Rahma Saleh Hussein Al Balushi
Abstract:
Oman Electricity Transmission Company's (OETC) vision is to be a renowned world-class transmission grid by 2025, and one of the indications of achieving this vision is obtaining Asset Management ISO 55001 certification, which requires setting out documented Standard Operating Procedures (SOPs). Hence, a documented SOP for the geographical information system (GIS) data process has been established. Also, to effectively manage and improve OETC power transmission, asset data and information need to be governed by the Asset Information & GIS department. This paper describes in detail the current GIS data submission process and the journey of developing it. The methodology used to develop the process rests on three main pillars: system and end-user requirements, risk evaluation, and data availability and accuracy. The outcome shows a dramatic change in the process used, which subsequently results in more efficient, accurate, and up-to-date data. Furthermore, due to this process, GIS is ready to be integrated with other systems and to act as the source of data for all OETC users. Some decisions related to issuing no-objection certificates (NOCs) for excavation permits and to scheduling asset maintenance plans in the Computerized Maintenance Management System (CMMS) are now made based on GIS data availability. Moreover, defining agreed and documented procedures for data collection, data system updates, data release/reporting, and data alterations has contributed to reducing missing attributes and enhancing the data quality index of GIS transmission data. A considerable difference in geodatabase (GDB) completeness percentage was observed between 2017 and 2022. Overall, through governance, the Asset Information & GIS department can control the GIS data process and collect, properly record, and manage asset data and information within the OETC network.
This control extends to other applications and systems integrated with or related to GIS systems.
Keywords: asset management ISO55001, standard procedures process, governance, CMMS
Procedia PDF Downloads 125
912 Effect of Geometric Imperfections on the Vibration Response of Hexagonal Lattices
Authors: P. Caimmi, E. Bele, A. Abolfathi
Abstract:
Lattice materials are cellular structures composed of a periodic network of beams. They offer high weight-specific mechanical properties and lend themselves to numerous weight-sensitive applications. The periodic internal structure responds to external vibrations through characteristic frequency bandgaps, making these materials suitable for the reduction of noise and vibration. However, the deviation from architectural homogeneity, due to, e.g., manufacturing imperfections, has a strong influence on the mechanical properties and vibration response of these materials. In this work, we present results on the influence of geometric imperfections on the vibration response of hexagonal lattices. Three classes of geometrical variables are used: the characteristics of the architecture (relative density, ligament length/cell size ratio), imperfection type (degree of non-periodicity, cracks, hard inclusions) and defect morphology (size, distribution). Test specimens with controlled size and distribution of imperfections are manufactured through selective laser sintering. The Frequency Response Functions (FRFs) in the form of accelerance are measured, and the modal shapes are captured through a high-speed camera. The finite element method is used to provide insights on the extension of these results to semi-infinite lattices. An updating procedure is conducted to increase the reliability of numerical simulation results compared to experimental measurements. This is achieved by updating the boundary conditions and material stiffness. Variations in FRFs of periodic structures due to changes in the relative density of the constituent unit cell are analysed. The effects of geometric imperfections on the dynamic response of periodic structures are investigated. 
The findings open up the opportunity to tailor these lattice materials for optimal amplitude attenuation in specific frequency ranges.
Keywords: lattice architectures, geometric imperfections, vibration attenuation, experimental modal analysis
Procedia PDF Downloads 122
911 Virtual Metrology for Copper Clad Laminate Manufacturing
Authors: Misuk Kim, Seokho Kang, Jehyuk Lee, Hyunchang Cho, Sungzoon Cho
Abstract:
In semiconductor manufacturing, virtual metrology (VM) refers to methods that predict the properties of a wafer based on machine parameters and sensor data from the production equipment, without performing the (costly) physical measurement of the wafer properties (Wikipedia). Additional benefits include the avoidance of human bias and the identification of important factors affecting process quality, which allow the process to be improved in the future. It is, however, rare to find VM applied to other areas of manufacturing. In this work, we propose applying VM to copper clad laminate (CCL) manufacturing. CCL is a core element of a printed circuit board (PCB), which is used in smartphones, tablets, digital cameras, and laptop computers. The manufacturing of CCL consists of three processes: treating, lay-up, and pressing. Treating, the most important of the three, applies resin to glass cloth and heats it in a drying oven, producing prepreg for the lay-up process. In this process, three important quality factors are inspected: treated weight (T/W), minimum viscosity (M/V), and gel time (G/T). They are inspected manually, incurring heavy costs in time and money, which makes the process a good candidate for VM. We developed prediction models for the three quality factors T/W, M/V, and G/T from process variables, raw material variables, and environment variables. The actual process data were obtained from a CCL manufacturer. A variety of variable selection methods and learning algorithms were employed to find the best prediction model. We obtained prediction models for M/V and G/T with sufficiently high accuracy. They also provided information on “important” predictor variables, some of which the process engineers were already aware of, and the rest of which they were not. The engineers were excited to find the new insights the models revealed and set out to analyze them further for process control implications.
T/W, however, could not be predicted with reasonable accuracy from the given factors. This indicates that the factors currently monitored may not affect T/W; an effort must be made to find other, currently unmonitored factors in order to understand the process better and improve its quality. In conclusion, the VM application to CCL’s treating process was quite successful. The newly built quality prediction models reduced the cost associated with actual metrology and revealed insights into the factors affecting the important quality factors, as well as into the limits of our understanding of the treating process.
Keywords: copper clad laminate, predictive modeling, quality control, virtual metrology
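A minimal sketch of the filter-style variable selection step such a VM pipeline might use: rank candidate process variables by absolute correlation with a quality factor, then fit a one-variable least-squares line on the top-ranked predictor. The variable names and all data values below are invented for illustration and are not the manufacturer's process data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical treating-process records (made-up values): candidate
# predictors against a measured quality factor such as gel time (G/T).
features = {
    "oven_temp":  [170, 175, 180, 185, 190, 195],
    "line_speed": [3.1, 2.9, 3.3, 3.0, 3.2, 2.8],
    "resin_pct":  [41, 48, 43, 50, 44, 46],
}
gel_time = [250, 244, 238, 231, 225, 219]

# Filter-style variable selection: rank predictors by |correlation|.
ranked = sorted(features, key=lambda f: abs(pearson(features[f], gel_time)),
                reverse=True)

# Fit a one-variable least-squares line on the top-ranked predictor.
xs = features[ranked[0]]
n = len(xs)
mx, my = sum(xs) / n, sum(gel_time) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, gel_time))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(ranked[0], round(slope, 3), round(intercept, 1))
```

A real VM model would use richer learners and wrapper or embedded selection methods, but the correlation filter above is the simplest way to surface the "important predictor variables" the abstract mentions.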
Procedia PDF Downloads 350
910 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge: delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data.
It illustrates HPC's indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC, which provides sufficient capability for evaluating or solving more limited but meaningful instances. The article also indicates solutions to optimization problems, the benefits of Big Data to computational biology, and the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
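The all-to-all comparison pattern mentioned above can be made concrete with a toy pairwise-distance computation. With n sequences the pair count grows as n(n-1)/2, which is exactly what turns a polynomial-time algorithm into an HPC problem at genomic scale; the four-character "reads" below are illustrative stand-ins for real sequencing data:

```python
from itertools import combinations

def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

# Toy reads: real genomic collections hold millions of sequences,
# so the quadratic pair count below is what forces parallel HPC.
reads = ["ACGT", "ACGA", "TCGT", "ACGT", "AGGT"]
pairs = list(combinations(range(len(reads)), 2))
distances = {(i, j): hamming(reads[i], reads[j]) for i, j in pairs}

print(len(pairs))        # n*(n-1)/2 comparisons for n = 5
print(distances[(0, 3)])  # identical reads -> distance 0
```

Each pair is independent, so in an HPC setting the `distances` loop parallelizes trivially across cores or nodes; the difficulty is data volume, not algorithmic cleverness.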
Procedia PDF Downloads 363
909 Ecosystem Services and Excess Water Management: Analysis of Ecosystem Services in Areas Exposed to Excess Water Inundation
Authors: Dalma Varga, Nora Hubayne H.
Abstract:
Nowadays, among the measures taken to offset the consequences of climate change, water resources management is one of the key tools, and it can include excess water management. As a result of climate change and frequent inappropriate land use, more and more areas are affected by excess water inundation. Hungary is located in the deepest part of the Pannonian Basin, which is exposed to water damage, especially in lowland areas endangered by floods or excess water. The periodic presence of excess water creates specific habitats in a given area, which have ecological, functional, and aesthetic value. Excess water inundation affects approximately 74% of Hungary's lowland areas, of which about 46% is also under nature protection (such as national parks, protected landscape areas, nature conservation areas, Natura 2000 sites, etc.). These data show that areas exposed to excess water inundation, which are predominantly under agricultural land uses, have an important ecological role. Other research has confirmed the presence of numerous rare and endangered plant species in drainage canals, in grasslands exposed to excess water, and in special agricultural fields with mud vegetation. The goal of this research is to define and analyze the ecosystem services of areas exposed to excess water inundation. It is also important to determine quantified indicators of these areas' natural and landscape values, alongside the presence of protected species and the naturalness of habitats; in short, to analyze the various forms of nature protection related to excess water. As a result, a practice-orientated assessment method has been developed that accounts for ecological water demand, accommodates ecological and habitat aspects, contributes to adaptive excess water management, and, last but not least, increases or maintains the share of the green infrastructure network.
In this way, it also contributes to reducing and mitigating the negative effects of climate change.
Keywords: ecosystem services, landscape architecture, excess water management, green infrastructure planning
Procedia PDF Downloads 313
908 Causal Inference Engine between Continuous Emission Monitoring System Combined with Air Pollution Forecast Modeling
Authors: Yu-Wen Chen, Szu-Wei Huang, Chung-Hsiang Mu, Kelvin Cheng
Abstract:
This paper develops a data-driven model of the causality between the Continuous Emission Monitoring System (CEMS, operated by the Environmental Protection Administration, Taiwan) in industrial factories and the air quality of the surrounding environment. Compared to the heavy computational burden of traditional numerical models for regional weather and air pollution simulation, the lightweight proposed model can produce hourly forecasts from current observations of weather, air pollution, and factory emissions. The observational data include wind speed, wind direction, relative humidity, temperature, and others, and can be collected in real time from the open APIs of Civil IoT Taiwan, which are sourced from 439 weather stations, 10,193 qualitative air stations, 77 national quantitative stations, and 140 CEMS quantitative industrial factories. This study completed a causal inference engine that gives an air pollution forecast for the next 12 hours related to local industrial factories. The pollution forecasts are produced hourly with a grid resolution of 1 km x 1 km on the IIoTC (Industrial Internet of Things Cloud) and saved in netCDF4 format. The procedures to generate forecasts comprise data recalibration, outlier elimination, Kriging interpolation, and particle tracking with random walk techniques for the mechanisms of diffusion and advection. The solution of these equations reveals the causality between factory emissions and the associated air pollution. Further, with the aid of installed real-time flue emission (Total Suspended Particulates, TSP) sensors and the forecasted air pollution map, this study also discloses the conversion mechanism between TSP and PM2.5/PM10 for different regional and industrial characteristics, according to long-term data observation and calibration.
These time-series qualitative and quantitative data enabled a practicable cloud-based causal inference engine for factory management control. Once the forecasted air quality for a region is marked as harmful, the correlated factories are notified and asked to curtail their operations and reduce emissions in advance.
Keywords: continuous emission monitoring system, total suspended particulates, causal inference, air pollution forecast, IoT
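The particle tracking and random walk mechanism for advection and diffusion named in the abstract above can be sketched as below. The uniform wind field, diffusion coefficient, particle count, and units are illustrative assumptions, not values from the study, which would use gridded, time-varying observations:

```python
import math
import random

def advect_diffuse(particles, wind_u, wind_v, diff_sigma, dt, steps):
    """Move each particle by the wind field (advection) plus a Gaussian
    random walk (diffusion). A 2-D sketch: a real engine would use
    gridded, time-varying wind plus deposition and source terms."""
    random.seed(42)
    for _ in range(steps):
        particles = [
            (x + wind_u * dt + random.gauss(0, diff_sigma * math.sqrt(dt)),
             y + wind_v * dt + random.gauss(0, diff_sigma * math.sqrt(dt)))
            for x, y in particles
        ]
    return particles

# Release 1000 particles at a stack located at the origin (units: km, h).
source = [(0.0, 0.0)] * 1000
cloud = advect_diffuse(source, wind_u=2.0, wind_v=0.5, diff_sigma=0.3,
                       dt=1.0, steps=10)
mean_x = sum(x for x, _ in cloud) / len(cloud)
mean_y = sum(y for _, y in cloud) / len(cloud)
print(round(mean_x, 1), round(mean_y, 1))  # plume centre drifts downwind, near (20, 5)
```

Binning the final particle positions onto a 1 km x 1 km grid and counting particles per cell gives the concentration field that such a forecast map would report.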
Procedia PDF Downloads 87
907 Management of Small-Scale Companies in Nigeria: Case Study of Problems Faced by Entrepreneurs
Authors: Aderemi, Moses Aderibigbe
Abstract:
The supply chain of a manufacturing company can be classified into three categories: 1) the supplier chain, a network of suppliers of raw materials, machinery, and other requirements for the company's daily operations; 2) the internal chain, the departmental or functional relationships within the organization, such as the production, finance, marketing, logistics, and quality control departments, all interacting to achieve the goals and objectives of the company; and 3) the customer chain, the networks used to distribute products to the final consumer, including product distributors and retailers in the marketplace as applicable. In a developing country like Nigeria, where government infrastructure is poor or, in some cases, non-existent, the survival of a small-scale manufacturing company often depends on how effectively its supply chain is managed. In Nigeria, suppliers of machinery and raw materials to most manufacturing companies are from low-cost but high-tech countries like China or India. The problems with supply chains from these countries, apart from the language barrier, are product quality and after-sales support services. The internal chain also requires funding to employ an experienced and trained workforce to deliver the company's goals and objectives effectively and efficiently, which is always a challenge for small-scale manufacturers, as is product marketing. In Nigeria, the management of the supply chain by small-scale manufacturers is further complicated by unfavourable government policies. This empirical research reviews and analyzes the supply chain management of a small-scale manufacturing company located in Lagos, Nigeria. The company's performance over the past five years has been declining, and company management believes a review of its supply chain management is needed for business survival.
In this research, the company's supply chain is analyzed and compared with global best practices, and recommendations are made to company management. The research outcome justifies the company's need for a strategic change in its supply chain management for business sustainability, and provides a learning point for small-scale manufacturing companies in developing countries in Africa.
Keywords: management, small scale, supply chain, companies, leaders
Procedia PDF Downloads 23
906 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System
Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu
Abstract:
Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), and state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not physics-based, it can never serve as a prognostic model that predicts battery state-of-health and averts safety risks before they occur. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single-particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles, where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
Keywords: battery management system, physics-based Li-ion cell model, reduced-order model, single-particle and multi-particle model
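As a toy illustration of the multi-particle idea (a sketch, not the authors' implementation; the symmetric Butler-Volmer form and all constants below are assumptions), a total applied current can be split among N representative particles that share a single electrode overpotential: the shared overpotential is found by a scalar root solve, after which each particle's current follows directly.

```python
import math

# Hypothetical sketch: split a total applied current among N representative
# particles that share one electrode overpotential eta. With symmetric
# Butler-Volmer kinetics i_k = 2 * i0_k * sinh(alpha*F*eta / (R*T)), the
# common eta is found by bisection, then per-particle currents follow.
F, R, T, ALPHA = 96485.0, 8.314, 298.15, 0.5  # assumed constants

def particle_currents(i_total, i0_list):
    def net(eta):  # total current at a trial overpotential minus the target
        return sum(2.0 * i0 * math.sinh(ALPHA * F * eta / (R * T))
                   for i0 in i0_list) - i_total
    lo, hi = -1.0, 1.0                    # bracket for eta in volts
    for _ in range(200):                  # bisection on the shared eta
        mid = 0.5 * (lo + hi)
        if net(lo) * net(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    eta = 0.5 * (lo + hi)
    return [2.0 * i0 * math.sinh(ALPHA * F * eta / (R * T)) for i0 in i0_list]
```

Particles with larger exchange currents carry proportionally more of the load; this current-distribution effect is exactly what a single averaged particle cannot represent.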
Procedia PDF Downloads 111
905 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or other information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts among consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. A system that can adequately detect subjectivity in text will significantly boost research in the above-mentioned areas. It can also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as an upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor and an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various biased instances from sentences, and compare our model to existing approaches. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing
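The attention layer on top of a Bi-LSTM can be sketched as a weighted pooling over the hidden states (pure Python with made-up dimensions; the real model learns the context vector and operates on BERT-sized tensors):

```python
import math

# Attention pooling sketch: score each timestep's hidden state h_t against
# a learned context vector u, softmax-normalize the scores, and return the
# weighted sum as the sentence representation fed to the classifier head.
def attention_pool(hidden_states, context):
    scores = [sum(h_i * u_i for h_i, u_i in zip(h, context))
              for h in hidden_states]
    m = max(scores)                              # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights
```

Timesteps whose hidden states align with the context vector receive higher weights, which is how the model focuses on the words that signal subjective bias.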
Procedia PDF Downloads 130
904 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector coupling and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition to meet international decarbonization objectives, also including it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future, setting ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require a careful investigation of their feasibility in terms of economic expenditure and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible hydrogen development has been studied on both the supply and demand sides of the energy system, also including storage options and distribution chains. The assessment comprises alternative hydrogen production technologies competing in a market, reflecting the several possible investments set out in the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles, for which both road and non-road transport categories are considered, to steel and chemical industry uses and cogeneration for residential and commercial buildings. 
The model includes both high- and low-TRL technologies in order to provide a consistent outcome for the coming decades as well as for the present day, and since it is developed with an open-source code instance and database, transparency and accessibility are fully granted.
Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
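The technology competition the model resolves can be caricatured as a merit-order dispatch (invented names and numbers; TEMOA itself solves a full linear optimization over the whole energy system rather than this greedy rule):

```python
# Toy sketch: meet a hydrogen demand from candidate production options at
# minimum cost by filling capacity in ascending cost order. Technology
# names, capacities, and unit costs below are purely illustrative.
def dispatch(demand, techs):
    # techs: iterable of (name, capacity, unit_cost)
    plan, remaining = {}, demand
    for name, cap, cost in sorted(techs, key=lambda t: t[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    return plan, remaining  # remaining > 0 means unmet demand
```

A full optimization additionally couples such choices across years, sectors, and infrastructure constraints, which is what makes the feasibility question non-trivial.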
Procedia PDF Downloads 101
903 Wind Generator Control in Isolated Site
Authors: Glaoui Hachemi
Abstract:
Wind has been proven to be a cost-effective and reliable energy source. Technological advancements over the last years have placed wind energy in a firm position to compete with conventional power generation technologies. Algeria has a vast uninhabited land area, of which the south (desert) represents the greatest part, with a considerable wind regime. In this paper, an analysis of wind energy utilization as a viable energy substitute in six selected sites widely distributed over the south of Algeria is presented. Wind speed frequency distribution data obtained from the Algerian Meteorological Office are used to calculate the average wind speed and the available wind power. The annual energy produced by the Fuhrlander FL 30 wind machine is obtained using two methods. The analysis shows that in southern Algeria, at 10 m height, the available wind power varies between 160 and 280 W/m², except for Tamanrasset. The highest potential wind power was found at Adrar, where the wind speed is above 3 m/s 88% of the time. It is also found that the annual wind energy generated by that machine lies between 33 and 61 MWh, except for Tamanrasset, with only 17 MWh. Since wind turbines are usually installed at a height greater than 10 m, an increased output of wind energy can be expected. The wind resource thus appears to be suitable for power production in the south, and it could provide a viable substitute for diesel oil for irrigation pumps and electricity generation. In this paper, a model of the wind turbine (WT) with a permanent magnet synchronous generator (PMSG) and its associated controllers is presented. The increase of wind power penetration in power systems has meant that conventional power plants are gradually being replaced by wind farms. In fact, today wind farms are required to actively participate in power system operation in the same way as conventional power plants. 
Power system operators have revised the grid connection requirements for wind turbines and wind farms, and now demand that these installations be able to carry out more or less the same control tasks as conventional power plants. For dynamic power system simulations, the PMSG wind turbine model includes an aerodynamic rotor model, a lumped-mass representation of the drive train system, and a generator model. We propose a model, implemented in MATLAB/Simulink, of each system component of an off-grid small wind turbine.
Keywords: wind generator systems, permanent magnet synchronous generator (PMSG), wind turbine (WT) modeling, MATLAB/Simulink environment
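The available wind power figures quoted above follow the standard power-density relation P/A = 0.5·ρ·⟨v³⟩ applied to a measured speed frequency distribution; a minimal sketch (assuming standard air density, not the paper's exact procedure):

```python
RHO = 1.225  # kg/m^3, standard-atmosphere air density (assumption)

def power_density(speeds, frequencies):
    # speeds in m/s; frequencies are bin probabilities summing to 1.
    # P/A = 0.5 * rho * <v^3>, the frequency-weighted mean cubed speed.
    mean_v3 = sum(f * v ** 3 for v, f in zip(speeds, frequencies))
    return 0.5 * RHO * mean_v3  # W/m^2

def fraction_above(speeds, frequencies, cut_in=3.0):
    # share of time the wind is at or above a turbine's cut-in speed,
    # the statistic reported for Adrar (88% above 3 m/s)
    return sum(f for v, f in zip(speeds, frequencies) if v >= cut_in)
```

Because power grows with the cube of speed, modest differences in the site's speed distribution explain the wide 160-280 W/m² spread between sites.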
Procedia PDF Downloads 337
902 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones
Authors: Mohamed Abdelkareem
Abstract:
Remote sensing data contribute to predicting prospective areas of water resources. Integration of microwave and multispectral data along with climatic, hydrologic, and geological data has been used here. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L‐band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic, and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. The image transformation of Sentinel-2 and Landsat-8 data allowed characterizing the different varieties of rock units. Integration of microwave remotely-sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones, which play a crucial role in mapping groundwater prospective zones. Fusing Landsat-8 OLI and ALOS/PALSAR data improved the delineation of structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by a NE-SW graben cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps and offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), as well as revealing infiltration characteristics. Several layers including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics were combined after assigning a weight to each using a GIS-based knowledge-driven approach. 
The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups, depending on their probability for groundwater, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
Keywords: GIS, remote sensing, groundwater, Egypt
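The knowledge-driven overlay can be sketched as a weighted sum of per-cell layer ratings binned into the six classes named above (layer names, the 0-10 rating scale, and equal bin edges are assumptions for illustration):

```python
# Hypothetical weighted-overlay sketch: rate each thematic layer per cell,
# combine with expert weights, and bin the index into six prospectivity
# classes. Ratings are assumed to lie on a 0-10 scale.
CLASSES = ["very low", "low", "moderate", "high", "very high", "excellent"]

def groundwater_index(ratings, weights):
    # ratings/weights are dicts keyed by layer name (geology, slope, ...)
    total_w = sum(weights.values())
    return sum(weights[k] * ratings[k] for k in weights) / total_w

def classify(index, lo=0.0, hi=10.0):
    step = (hi - lo) / len(CLASSES)
    i = min(int((index - lo) // step), len(CLASSES) - 1)
    return CLASSES[i]
```

In a real GIS workflow this arithmetic runs per raster cell over the reclassified thematic layers, producing the prospectivity map that the field and well data then validate.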
Procedia PDF Downloads 98
901 Impact of Social Media in Shaping Perceptions on Filipino Muslim Identity
Authors: Anna Rhodora A. Solar, Jan Emil N. Langomez
Abstract:
Social media plays a crucial role in influencing Philippine public opinion with regard to a variety of socio-political issues. This became evident in the massacre of 44 members of the Special Action Force (SAF 44) tasked by the Philippine government to capture one of the US Federal Bureau of Investigation's most wanted terrorists. The incident was said to be perpetrated by members of the Moro Islamic Liberation Front and the Bangsamoro Islamic Freedom Fighters. Part of the online discourse within Philippine cyberspace sparked intense debates on Filipino Muslim identity, with several viral Facebook posts linking Islam as a factor in the tragic event. Facebook is considered to be the most popular social media platform in the Philippines. As such, this raises the question of the extent to which social media, specifically Facebook, shapes the perceptions of Filipinos on Filipino Muslims. This study utilizes Habermas' theory of communicative action, as it offers an explanation of how a public sphere such as social media can be a network for communicating information and points of view through free and open dialogue among equal citizens to come to an understanding or common perception. However, the paper argues that communicative action, which is aimed at reaching understanding free from force, and strategic action, which is aimed at convincing someone through argumentation, may not necessarily be mutually exclusive, since reaching an understanding can also be the result of convincing someone through argumentation. Moreover, actors may clash with one another in their ideas before reaching a common understanding, hence the presence of force. Utilizing content analysis of the Facebook posts with an Islamic component that went viral after the massacre of the SAF 44, this paper argues that the framing of the image of Filipino Muslims through Facebook reflects both communicative and strategic actions. 
Moreover, comment threads on viral posts manifest force, albeit implicitly.
Keywords: communication, Muslim, Philippines, social media
Procedia PDF Downloads 402
900 Statecraft: Building a Hindu Nationalist Intellectual Ecosystem in India
Authors: Anuradha Sajjanhar
Abstract:
The rise of authoritarian populist regimes has been accompanied by hardened nationalism and heightened divisions between 'us' and 'them'. Political actors reinforce these sentiments through coercion, but also through inciting fear about imagined threats and by transforming public discourse about policy concerns. Extremist ideas can penetrate national policy, as newly appointed intellectuals and 'experts' in knowledge-producing institutions, such as government committees, universities, and think tanks, succeed in transforming public discourse. While attacking left and liberal academics, universities, and the press, the current Indian government is building new institutions to provide authority to its particularly rigid, nationalist discourse. This paper examines the building of a Hindu-nationalist intellectual ecosystem in India, interrogating the key role of hyper-nationalist think tanks. While some are explicit about their political and ideological leanings, others claim neutrality and pursue their agenda through coded technocratic language and resonant historical narratives. Their key strategy is to change thinking by normalizing it. Six years before winning the election in 2014, India's Hindu-nationalist party, the BJP, put together its own network of elite policy experts. In a national newspaper, the vice-president of the BJP described this as an intentional shift: from 'being action-oriented to solidifying its ideological underpinnings in a policy framework'. When the BJP came to power in 2014, 'experts' from these think tanks filled key positions in the central government. The BJP has since been circulating dominant ideas of Hindu supremacy through regional parties, grassroots political organisations, and civil society organisations. These think tanks have the authority to articulate and legitimate Hindu nationalism within a credible technocratic policy framework. 
This paper is based on ethnography and over 50 interviews in New Delhi, before and after the BJP's staggering election victory in 2019. It outlines the party's attempt to take over existing institutions while developing its own cadre of nationalist policy-making professionals.
Keywords: ideology, politics, South Asia, technocracy
Procedia PDF Downloads 120
899 Design and Synthesis of Fully Benzoxazine-Based Porous Organic Polymer Through Sonogashira Coupling Reaction for CO₂ Capture and Energy Storage Application
Authors: Mohsin Ejaz, Shiao-Wei Kuo
Abstract:
The growing production and exploitation of fossil fuels have confronted human society with serious environmental issues. As a result, it is critical to design efficient and eco-friendly energy production and storage techniques. Porous organic polymers (POPs) are multi-dimensional porous network materials developed through the formation of covalent bonds between different organic building blocks that possess distinct geometries and topologies. POPs have tunable porosities and high surface areas, making them good candidates for effective electrode materials in energy storage applications. Herein, we prepared a fully benzoxazine-based porous organic polymer (TPA–DHTP–BZ POP) through Sonogashira coupling of dihydroxyterephthalaldehyde (DHTP)- and triphenylamine (TPA)-containing benzoxazine (BZ) monomers. First, both BZ monomers (TPA-BZ-Br and DHTP-BZ-Ea) were synthesized in three steps, comprising Schiff base, reduction, and Mannich condensation reactions. Finally, the TPA–DHTP–BZ POP was prepared through the Sonogashira coupling reaction of the brominated monomer (TPA-BZ-Br) and the ethynyl monomer (DHTP-BZ-Ea). Fourier transform infrared (FTIR) and solid-state nuclear magnetic resonance (NMR) spectroscopy confirmed the successful synthesis of the monomers as well as the POP. The porosity of TPA–DHTP–BZ POP was investigated by the N₂ adsorption technique, which showed a Brunauer–Emmett–Teller (BET) surface area of 196 m² g−¹, a pore size of 2.13 nm, and a pore volume of 0.54 cm³ g−¹. The TPA–DHTP–BZ POP underwent thermal ring-opening polymerization, resulting in a poly(TPA–DHTP–BZ) POP having strong inter- and intramolecular hydrogen bonds formed by phenolic groups and Mannich bridges, thereby enhancing CO₂ capture and supercapacitive performance. The poly(TPA–DHTP–BZ) POP demonstrated a remarkable CO₂ capture of 3.28 mmol g−¹ and a specific capacitance of 67 F g−¹ at 0.5 A g−¹. 
Thus, poly(TPA–DHTP–BZ) POP could potentially be used for energy storage and CO₂ capture applications.
Keywords: porous organic polymer, benzoxazine, Sonogashira coupling, CO₂, supercapacitor
Procedia PDF Downloads 73
898 Facilitating the Learning Environment as a Servant Leader: Empowering Self-Directed Student Learning
Authors: Thomas James Bell III
Abstract:
Pedagogy is thought of as one's philosophy, theory, or method of teaching. This study examines the science of learning, considering the forced reconsideration of effective pedagogy brought on by the aftermath of the 2020 coronavirus pandemic. With the aid of various technologies, online education holds challenges and promises to enhance the learning environment if implemented to facilitate student learning. Behaviorism centers around the belief that the instructor is the sage on the classroom stage, using repetition techniques as the primary learning instrument. This approach to pedagogy ascribes complete control of the learning environment to the instructor and works best when students learn by answering questions with immediate feedback. Such structured learning reinforcement tends to guide students' learning without considering learners' independence and individual reasoning, and such activities may inadvertently stifle students' ability to develop critical thinking and self-expression skills. Fundamentally, liberationist pedagogy dismisses the concept that education is merely about students learning things, focusing instead on the way students learn. The liberationist approach democratizes the classroom by redefining the roles of teacher and student. The teacher is no longer viewed as the sage on the stage but as a guide on the side. This approach views students as creators of knowledge, not empty vessels to be filled with knowledge. Moreover, students are well suited to decide how best to learn and in which areas improvement is needed. This study will explore the classroom instructor as a servant leader in the twenty-first century, which allows students to integrate technology that encapsulates more individual learning styles. The researcher will examine the Professional Scrum Master (PSM I) exam pass rate results of 124 students in six sections of an Agile scrum course. 
The students will be separated into two groups; the first group will follow a structured instructor-led course outlined by a course syllabus. The second group will consist of several small teams (ten or fewer students) of self-led and self-empowered students. The teams will conduct several event meetings, including sprint planning meetings, daily scrums, sprint reviews, and retrospective meetings, throughout the semester, with the instructor facilitating the teams' activities as needed. The methodology for this study will use a comparison-of-means t-test to compare the mean exam pass rate of one group to the mean of the second group. A one-tailed test (i.e., less than or greater than) will be used, with the null hypothesis that the difference between the groups in the population is zero. The major findings will expand on the pedagogical approach that suggests pedagogy primarily exists in support of teacher-led learning, which has formed the pillars of traditional classroom teaching. In light of the fourth industrial revolution, there is a fusion of learning platforms across the digital, physical, and biological worlds, with disruptive technological advancements in areas such as the Internet of Things (IoT), artificial intelligence (AI), 3D printing, robotics, and others.
Keywords: pedagogy, behaviorism, liberationism, flipping the classroom, servant leader instructor, agile scrum in education
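The planned comparison can be sketched as a pooled-variance two-sample t statistic (hypothetical data; a real analysis would also choose a significance level and check the test's assumptions):

```python
import math

# One-tailed two-sample t-test sketch: H0 says the population means are
# equal; H1 (one-tailed) says the self-led group's mean pass rate exceeds
# the instructor-led group's. Pooled-variance form shown here.
def t_statistic(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # statistic and degrees of freedom
```

For the one-tailed alternative, the null is rejected only when t exceeds the critical value in the predicted direction, rather than in either tail.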
Procedia PDF Downloads 142
897 The Role of Land Consolidation to Reduce Soil Degradation in the Czech Republic
Authors: Miroslav Dumbrovsky
Abstract:
The paper deals with the positive impacts of land consolidation on decreasing soil degradation, with the main emphasis on soil and water conservation in the landscape. Land degradation is of high importance because of its impact on crop productivity and many other adverse effects. Soil degradation through soil erosion causes losses in crop productivity and in the quality of the environment, by decreasing the quality of soil and water (especially water resources). Negative effects of conventional farming practices include increased water erosion, as well as crusting and compaction of the topsoil and subsoil. Soil erosion caused by water destroys the soil's structure and reduces crop productivity through deterioration of soil physical and chemical properties such as infiltration rate and water holding capacity, loss of nutrients needed for crop production, and loss of soil carbon. Recently, a new process of complex land consolidation in the Czech Republic has provided a unique opportunity for improving the quality of the environment and the sustainability of crop production by means of better soil and water conservation. The present process of complex land consolidation is not only a reallocation of plots; it consists of a new layout of plots within a certain territory, aimed at establishing integrated land-use economic units based on the needs of individual landowners and land users. On the other hand, the interests of the general public and environmental protection have to be addressed, too. From a general point of view, a large part of the Czech landscape will be reconstructed in the course of complex land consolidation projects. These projects will be based on new integrated soil-economic units, spatially arranged in a designed multifunctional system of soil and water conservation measures, such as a path network and a territorial system of ecological stability, according to structural changes in agriculture. 
This new approach will be the basis of a rational economic utilization of the region that complies with present ecological and aesthetic demands.
Keywords: soil degradation, land consolidation, soil erosion, soil conservation
Procedia PDF Downloads 356
896 Humins: From Industrial By-Product to High Value Polymers
Authors: Pierluigi Tosi, Ed de Jong, Gerard van Klink, Luc Vincent, Alice Mija
Abstract:
During the last decades, renewable and low-cost resources have attracted increasing interest. Carbohydrates can be derived from lignocellulosic biomass, which is an attractive option since biomass represents the most abundant carbon source available in nature. Carbohydrates can be converted into a plethora of industrially relevant compounds, such as 5-hydroxymethylfurfural (HMF) and levulinic acid (LA), via acid-catalyzed dehydration of sugars with mineral acids. Unfortunately, these acid-catalyzed conversions suffer from the unavoidable formation of highly viscous, heterogeneous, polydisperse carbon-based materials known as humins. This black-colored, low-value by-product is a complex mixture of macromolecules built by random covalent condensations of the several compounds present during the acid-catalyzed conversion. The humins molecular structure is still under investigation but appears to be based on a network of furanic rings linked by aliphatic chains and decorated with several reactive moieties (ketones, aldehydes, hydroxyls, ...). Despite decades of research, there is currently no way to avoid humins formation. The key to enhancing the economic viability of carbohydrate conversion processes is therefore to increase the economic value of the humins by-product. Herein are presented new humins-based polymeric materials that can be prepared starting from the raw by-product by thermal treatment, without any purification or pretreatment step. Humins foams can be produced by controlling key reaction parameters, obtaining polymeric porous materials with designed porosity, density, thermal and electrical conductivity, chemical and electrical stability, carbon content, and mechanical properties. Physico-chemical properties can be enhanced by modifications of the starting raw material or by adding different species during the polymerization. A comparison of the properties of different compositions will be presented, along with tested applications. 
The authors gratefully acknowledge the European Community for financial support through the Marie-Curie H2020-MSCA-ITN-2015 "HUGS" Project.
Keywords: by-product, humins, polymers, valorization
Procedia PDF Downloads 143
895 CFD Modeling of Stripper Ash Cooler of Circulating Fluidized Bed
Authors: Ravi Inder Singh
Abstract:
Due to their high heat transfer rates, high carbon utilization efficiency, fuel flexibility, and other advantages, numerous circulating fluidized bed (CFB) boilers have been built in India in the last decade. Many companies like BHEL, ISGEC, Thermax, Cethar Limited, and Enmas GB Power Systems Projects Limited are making CFBC units and installing them throughout India. Due to their complexity, many problems exist in CFBC units, and only a few have been reported. Agglomeration, i.e., clinker formation in the riser, loop seal leg, and stripper ash coolers, is one problem industry is facing. Proper documentation is rarely found in the literature. CFB boiler bottom ash contains large amounts of physical heat. When the boiler combusts low-calorie fuel, the ash content is normally more than 40%, and the physical heat loss is approximately 3% if the bottom ash is discharged without cooling. In addition, the red-hot bottom ash is bad for mechanized handling and transportation, as the upper temperature limit of the ash handling machinery is 200 °C. Therefore, a bottom ash cooler (BAC) is often used to treat the high-temperature bottom ash to reclaim heat and to have the ash easily handled and transported. As a key auxiliary device of CFB boilers, the BAC has a direct influence on the secure and economic operation of the boiler. There are many kinds of BACs equipped for large-scale CFB boilers, following the continuous development and improvement of the CFB boiler. These ash coolers include the water-cooled ash cooling screw, the rolling-cylinder ash cooler (RAC), and the fluidized bed ash cooler (FBAC). In this study, a prototype of a novel stripper ash cooler is studied. The circulating fluidized bed ash cooler (CFBAC) combines the major technical features of the spouted bed and the bubbling bed, and can achieve selective discharge of the bottom ash. The novel stripper ash cooler is a bubbling bed, studied here in a transparent cold test rig. 
A cold test was chosen because high temperatures are difficult to create and maintain at laboratory scale. The aim of the study is to understand the flow pattern inside the stripper ash cooler. The cold rig prototype is similar to the stripper ash cooler used in industry and was built after scaling down key parameters. The performance of a fluidized bed ash cooler is studied using a cold experiment bench. The air flow rate, particle size of the solids, and air distributor type, considered the key operating parameters of a fluidized bed ash cooler (FBAC), are studied here.
Keywords: CFD, Eulerian-Eulerian, Eulerian-Lagrangian model, parallel simulations
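For relating the air flow rate to the particle size in such a cold rig, a first-pass estimate of the minimum fluidization velocity is often taken from the Wen and Yu correlation (the gas property values below are typical ambient-air assumptions, not the rig's measured data):

```python
import math

# Wen & Yu correlation sketch: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7,
# with the Archimedes number Ar = rho_g*(rho_p - rho_g)*g*d_p^3 / mu^2.
# Gas properties default to ambient air (illustrative assumptions).
def u_mf(d_p, rho_p, rho_g=1.2, mu=1.8e-5, g=9.81):
    ar = rho_g * (rho_p - rho_g) * g * d_p ** 3 / mu ** 2
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7
    return re_mf * mu / (rho_g * d_p)  # minimum fluidization velocity, m/s
```

For 500 µm sand-like particles (density 2500 kg/m³) this gives roughly 0.19 m/s, and coarser particles need substantially more air, which is why particle size appears among the key operating parameters.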
Procedia PDF Downloads 510
894 Improving the Supply Chain of Vietnamese Coffee in Buon Me Thuot City, Daklak Province, Vietnam to Achieve Sustainability
Authors: Giang Ngo Tinh Nguyen
Abstract:
Agriculture plays an important role in the economy of Vietnam, and coffee is one of its most crucial agricultural export commodities, but current farming methods and processing infrastructure could not keep up with the development of the sector. There are many catastrophic impacts on the environment, such as deforestation and soil degradation, that lead to a decrease in the quality of coffee beans. Therefore, improving the supply chain to develop the cultivation of sustainable coffee is one of the most important strategies to boost the coffee industry and create a competitive advantage for Vietnamese coffee in the worldwide market. If all stakeholders in the supply chain network unite, the sustainable production of coffee will be scaled up and the future of the coffee industry will be firmly secured. Buon Ma Thuot city, Dak Lak province, is the principal growing region for Vietnamese coffee, accounting for a third of the total coffee area in Vietnam. It plays a strategically crucial role in the development of sustainable Vietnamese coffee. Thus, the research aims to improve the supply chain of sustainable Vietnamese coffee production in Buon Ma Thuot city, Dak Lak province, Vietnam, for the purpose of increasing yields and export availability as well as helping coffee farmers to be more flexible in an ever-changing market situation. It will help to affirm the Vietnamese coffee brand when entering the international market, improve the livelihood of farmers, and conserve the environment of this area. Besides, after analyzing the data, a logistic regression model is established to explain the relationship between the dependent variable and the independent variables, to help sustainable coffee organizations forecast the probability that a farmer will hold a sustainability certificate given their current situation and to help them choose promising candidates for sustainability programs. The research investigates the opinions of local farmers through quantitative surveys. 
Qualitative interviews with local collectors and staff of the Trung Nguyen manufacturing company provide an overview of the situation.
Keywords: supply chain management, sustainable agricultural development, sustainable coffee, Vietnamese coffee
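A minimal sketch of such a logistic model (synthetic data and plain gradient descent, not the study's fitted coefficients): the probability that a farmer holds a sustainability certificate is modeled as a sigmoid of a weighted sum of their characteristics.

```python
import math

# Logistic regression sketch: P(certificate | x) = sigmoid(b0 + b.x),
# fit by stochastic gradient ascent on the log-likelihood. Features and
# labels here are synthetic placeholders for the survey variables.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    w = [0.0] * (len(X[0]) + 1)          # bias plus one weight per feature
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p                  # gradient of the log-likelihood
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, x):
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)))
```

Once fit on real survey data, scoring a new farmer's feature vector yields the probability the organizations would use to shortlist candidates for sustainability programs.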
Procedia PDF Downloads 448
893 Use of Social Media in Political Communications: Example of Facebook
Authors: Havva Nur Tarakci, Bahar Urhan Torun
Abstract:
The transformation that technology, especially internet technology, has brought to every area of life is changing the structure of political communication as well. The Internet, at the forefront of new communication technologies, affects political communication in a way that no traditional communication tool ever has: its structure enables interaction and a channel between receiver and sender, making it one of the most effective and most preferred tools among political communication applications. This state of affairs, a result of technological convergence, makes the Internet an indispensable medium for political communication campaigns. Political communication, meaning every kind of communication strategy that political parties, the 'actors of political communication', use with the aim of conveying their opinions and party programmes to their present and potential voters, today frequently makes use of social media tools. The electorate, with its varied composition, is informed, directed, and managed through social media tools. Political parties easily reach their electorate through these tools without limitations of time or place, and are also able to gauge the opinions and reactions of their electorate through the interaction that is a defining feature of social media. In this context, Facebook, the social media platform political parties use most, has been part of our daily life since 2004. As one of the most popular social networks today, it is among the most-visited websites on a global scale. Accordingly, the research is based on the question, "How do political parties use Facebook in the campaigns they conduct during election periods to inform their voters?", and it aims to clarify the Facebook usage practices of political parties. 
In line with this objective, the official Facebook accounts of four political parties (JDP–AKParti, PDP–BDP, RPP-CHP, NMP-MHP), which reach their voters through social media alongside other communication tools, are examined, and a frame for the politics of Turkey is formed. The period of examination is restricted to two weeks in total, one week before and one week after the mayoral elections, when the parties are assumed to use their Facebook accounts most intensively. Content analysis is chosen as the research method, and the texts and visual elements obtained are interpreted on that basis.
Keywords: Facebook, political communications, social media, electorate
Procedia PDF Downloads 383
892 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background
Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong
Abstract:
Ancient books are significant carriers of culture, and their background textures convey potential historical information. However, multi-style texture recovery of ancient books has received little attention. Restricted by insufficient ancient texture samples and a complex handling process, the generation of ancient textures faces new challenges. For instance, training without sufficient data usually leads to overfitting or mode collapse, so some of the outputs are prone to be fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs in the field make it possible to conduct research on multi-style texture recovery of ancient books. Under these circumstances, we propose a layout analysis network and image fusion system. Firstly, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose, to adjust the layout structure of the foreground content; finally, we achieved our goal by fusing the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and seal script were selected as training sets. The performance of different fine-tuned models was then gradually improved by adjusting the parameters and structure of the DCGAN model. To evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
Keywords: deep learning, image fusion, image generation, layout analysis
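The FID criterion used above to rank the fine-tuned models can be sketched in a few lines; the following is a minimal numpy illustration of the metric itself, not the authors' implementation (in practice the feature vectors would come from an Inception network, which is assumed away here):

```python
import numpy as np

def frechet_inception_distance(feats_a, feats_b):
    """FID between two sets of feature vectors (rows = samples).

    FID = ||mu_a - mu_b||^2 + Tr(C_a + C_b - 2 (C_a C_b)^(1/2)).
    The trace of the matrix square root is obtained from the
    eigenvalues of C_a @ C_b, which are real and non-negative
    when both covariances are positive semi-definite.
    """
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    c_a = np.cov(feats_a, rowvar=False)
    c_b = np.cov(feats_b, rowvar=False)
    eigvals = np.linalg.eigvals(c_a @ c_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(c_a) + np.trace(c_b) - 2.0 * tr_sqrt)
```

Identical feature distributions give an FID near zero, and lower is better, which is consistent with the paper's selection of the model with the lowest score.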
Procedia PDF Downloads 157
891 Open Innovation in SMEs: A Multiple Case Study of Collaboration between Start-ups and Craft Enterprises
Authors: Carl-Philipp Valentin Beichert, Marcel Seger
Abstract:
Digital transformation and climate change require small and medium-sized enterprises (SMEs) to rethink the way they do business. Inter-firm collaboration is recognized as a helpful means of promoting innovation and competitiveness. In this context, collaborations with start-ups offer valuable opportunities through their innovative products, services, and business models. SMEs, and in particular German craft enterprises, play an important role in the country’s society and economy. Companies in this heterogeneous economic sector have unique characteristics and are limited in their ability to innovate due to their small size and lack of resources. Collaborating with start-ups could help to overcome these shortcomings. To investigate how collaborations emerge and which factors are decisive for successful collaboration, we apply an explorative, qualitative research design. A sample of ten case studies was selected, with the collaboration between a start-up and a craft enterprise forming the unit of analysis. Semi-structured interviews with 20 company representatives allow for a two-sided perspective on each collaboration. The interview data are enriched by publicly available data and three expert interviews. As a result, objectives, initiation practices, applied collaboration types, barriers, and key success factors could be identified. The results indicate a three-phase collaboration process comprising an initiation, concept, and partner phase (ICP). The ICP framework proposed accordingly highlights the success factors (personal fit, communication, expertise, structure, network) for craft enterprises and start-ups in each collaboration phase. The role of a mediator within the start-up, with strong expertise in the respective craft sector, is considered an important lever for overcoming barriers such as cultural and communication differences.
The ICP framework thus provides promising directions for further research and can help practitioners establish successful collaborations.
Keywords: open innovation, SME, craft businesses, startup collaboration, qualitative research
Procedia PDF Downloads 93
890 Integrated Safety Net Program for High-Risk Families in New Taipei City
Authors: Peifang Hsieh
Abstract:
New Taipei City faces an increasing number of migrant families, in which the needs of children are sometimes neglected due to insufficient support from communities. Moreover, a traditional mindset of disengagement discourages citizens from preemptively identifying families in need in their communities, delaying prompt intervention by the authorities concerned. To safeguard these vulnerable families, New Taipei City has developed the 'Integrated Safety-Net Program for High-Risk Families' since 2011 by implementing the following measures: (A) New attitude and action: Instead of passively receiving reports of high-risk families, the program takes a proactive and preemptive approach to detect and respond at an early stage, preventing cases from worsening. In addition, a cross-departmental integration mechanism was established to meet the multiple needs of high-risk families. The number of children added to the government care network greatly increased to over 10,000, around 4.4 times the number before the program. (B) New service points: 2,000 city-wide convenience stores were added as service stations, so that children from less privileged families can go to any of the 24-hour convenience stores across the city to pick up free meals. This greatly increases the program's accessibility to high-risk families. Moreover, social welfare institutes are notified of information left in convenience stores by children and follow up with further assistance, greatly enhancing the chances of less privileged families being identified. (C) New key figures: Community officers and volunteers are mobilized to detect cases and offer on-site assistance. Volunteer organizations within communities are connected to report cases and offer follow-up services more actively. In total, from 2011 to 2015, 54,789 cases were identified through active care, benefiting 82,124 children.
In addition, 87.49% of the families receiving comprehensive social assistance through the program are no longer at high risk.
Keywords: cross department, high-risk families, public-private partnership, integrated safety net
Procedia PDF Downloads 299
889 Assessment of Agricultural Land Use Land Cover, Land Surface Temperature and Population Changes Using Remote Sensing and GIS: Southwest Part of Marmara Sea, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
Land Use Land Cover (LULC) changes due to human activities and natural causes have become a major environmental concern. Assessment of temporal remote sensing data provides information about the impacts of LULC on the environment. Land Surface Temperature (LST) is one of the important components for modeling environmental change in climatological, hydrological, and agricultural studies. In this study, LULC changes (September 7, 1984 and July 8, 2014), especially in agricultural lands, together with population changes (1985-2014) and LST status were investigated using remotely sensed and census data in the South Marmara Watershed, Turkey. LULC changes were determined using Landsat TM and Landsat OLI data acquired in the summers of 1984 and 2014. Six-band TM and OLI images were classified using a supervised classification method to prepare LULC maps comprising five classes: Forest (F), Grazing Land (G), Agricultural Land (A), Water Surface (W), and Residential Area-Bare Soil (R-B). LST images were also derived from the thermal bands of the same dates. The LULC classification results showed that forest areas, agricultural lands, water surfaces, and residential area-bare soils increased by 65,751 ha, 20,163 ha, 1,924 ha, and 20,462 ha, respectively. In contrast, a dramatic decrease of 107,985 ha occurred in grazing land within three decades. The population of the whole study area increased by 29% between 1984 and 2014. Along with natural causes, migration also contributed to this increase, since the study area offers considerable employment potential. LULC was transformed among the classes due to the expansion of residential, commercial, and industrial areas as well as political decisions. The results showed that agricultural lands around the settlement areas were transformed into residential areas over the 30 years. The LST images showed that mean temperatures ranged between 26-32 °C in 1984 and 27-33 °C in 2014. The minimum temperature of agricultural lands increased by 3 °C, reaching 23 °C.
In contrast, the maximum temperature of the A class decreased from 44 °C to 41 °C. Considering the temperatures of the 2014 R-B class and the 1984 status of the same areas, the mean, minimum, and maximum temperatures increased by 2 °C. As a result, the dynamics of population, LULC, and LST produced increases in mean and maximum surface temperatures, living spaces/industrial areas, and agricultural lands.
Keywords: census data, Landsat, land surface temperature (LST), land use land cover (LULC)
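The temperature maps described in this abstract rest on the standard conversion from a Landsat thermal-band digital number to at-sensor brightness temperature, which can be sketched as follows. This is an illustrative sketch only: the constants are the published USGS calibration values for Landsat 8 TIRS Band 10, the 1984 TM scene would use different band-specific constants, and a full LST retrieval (as in the study) would additionally apply an emissivity correction:

```python
import numpy as np

# Landsat 8 TIRS Band 10 calibration constants from the USGS handbook
ML, AL = 3.342e-4, 0.1          # radiance rescaling gain / offset
K1, K2 = 774.8853, 1321.0789    # thermal conversion constants (Kelvin)

def brightness_temp_celsius(dn):
    """Thermal-band digital number -> at-sensor brightness temperature (deg C)."""
    # DN -> top-of-atmosphere spectral radiance
    radiance = ML * np.asarray(dn, dtype=float) + AL
    # radiance -> brightness temperature via the inverse Planck relation
    kelvin = K2 / np.log(K1 / radiance + 1.0)
    return kelvin - 273.15
```

Brightness temperature rises monotonically with the digital number, so the per-class minima and maxima reported above come directly from the DN ranges inside each LULC class.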
Procedia PDF Downloads 392