Search results for: platform capitalism
1490 Global Capitalism and Commodification of Breastfeeding: An Investigation of Its Impact on the “Traditional” African Conception of Family Life and Motherhood
Authors: Mosito Jonas Seabela
Abstract:
Breastfeeding in public has become a contentious issue in contemporary society. Mothers are often subjected to unfair discrimination and harassment for simply responding to their maternal instinct to breastfeed their infants. The unwillingness of society to accept public breastfeeding as a natural, non-sexual act is partly influenced by the imposition of a pornified and hypersexualised Western culture, which was imported to Africa through colonisation, enforced by the apartheid regime, and is now perpetuated by Western media. The imposition of the modern nuclear family on Africans, and the coerced aspiration to subscribe to bourgeois values, has eroded the moral standing of the traditional African family and its cultural values. Western-centric perceptions of African women have altered the experience of motherhood for many, commodifying the practice of breastfeeding. As a result, the use of bottles and infant formulas is often perceived as the preferred method, while breastfeeding in public is viewed as primitive, immoral, and unacceptable. This normative study seeks to answer the question of what ought to be done to preserve the dignity of African motherhood and protect African mothers’ right to breastfeed in public. The African philosophy of Ubuntu is employed to advocate for the right to breastfeed in public. This moral philosophy posits that the Western perception of a person isolates people from their environment and culture, thereby undermining the process of acquiring humanity, which fosters social cohesion. The Ubuntu philosophy embodies the aphorism “umuntu ngumuntu ngabantu”, meaning “a person is a person through other persons”, signifying people’s interconnectedness and interdependence.
The application of the key principles of Ubuntu, such as “survival, the spirit of solidarity, compassion, respect, and dignity”, can improve human interaction and unite the public behind the government’s efforts to increase exclusive breastfeeding rates and reduce infant mortality. The author proposes a doctrine called “Ubuntu Lactivism” as a means to advocate for breastfeeding rights in fulfilment of African traditional values.
Keywords: ubuntu, breastfeeding, Afrocentric, colonization, culture, motherhood, imperialism, objectification
Procedia PDF Downloads 73
1489 Application of the Carboxylate Platform in the Consolidated Bioconversion of Agricultural Wastes to Biofuel Precursors
Authors: Sesethu G. Njokweni, Marelize Botes, Emile W. H. Van Zyl
Abstract:
An alternative strategy to the production of bioethanol is to examine the degradability of biomass in a natural system such as the rumen of mammals. This anaerobic microbial community has higher cellulolytic activities than microbial communities from other habitats and degrades cellulose to produce volatile fatty acids (VFAs), methane and CO₂. VFAs have the potential to serve as intermediate products for electrochemical conversion to hydrocarbon fuels. In vitro mimicking of this process would be more cost-effective than bioethanol production, as it does not require chemical pre-treatment of biomass, a sterile environment or added enzymes. The strategies of the carboxylate platform and co-cultures of a bovine ruminal microbiota from cannulated cows were combined in order to investigate and optimize the bioconversion of agricultural biomass (apple and grape pomace, citrus pulp, sugarcane bagasse and triticale straw) to high-value VFAs as intermediates for biofuel production in a consolidated bioprocess. Optimisation of reactor conditions was investigated using five ruminal inoculum concentrations (5, 10, 15, 20 and 25%), with pH fixed at 6.8 and temperature at 39 °C. The ANKOM 200/220 fiber analyser was used to analyse in vitro neutral detergent fiber (NDF) disappearance of the feedstuffs. Fresh and cryo-frozen (5% DMSO and 50% glycerol for 3 months) rumen cultures were tested for retention of fermentation capacity and durability in 72 h fermentations in 125 ml serum vials, using a FURO medical solutions 6-valve gas manifold to induce anaerobic conditions. Fermentation of apple pomace, triticale straw, and grape pomace showed no significant difference (P > 0.05) between the effects of the 15 and 20% inoculum concentrations on total VFA yield.
However, high-performance liquid chromatographic separation within the two inoculum concentrations showed a significant difference (P < 0.05) in acetic acid yield, with the 20% inoculum concentration being the optimum at 4.67 g/l. NDF disappearance of 85% in 96 h and a total VFA yield of 11.5 g/l in 72 h (A/P ratio = 2.04) indicated that apple pomace was the optimal feedstuff for this process. The NDF disappearance and VFA yield of DMSO-stored (82% NDF disappearance and 10.6 g/l VFA) and glycerol-stored (90% NDF disappearance and 11.6 g/l VFA) rumen also showed degradability of apple pomace similar to that of a fresh rumen control, with no treatment effect differences (P > 0.05), indicating that the stored samples did not differ from the fresh rumen control. The retention of fermentation capacity within the preserved cultures suggests that their metabolic characteristics were preserved owing to the resilience and redundancy of the rumen culture. The extent of degradability and the VFA yield achieved within a short span were similar to those of other carboxylate platforms with longer run times. This study shows that, by virtue of faster rates and a high extent of degradability, small-scale alternatives to bioethanol, such as rumen microbiomes and other natural fermenting microbiomes, can be employed to enhance the feasibility of large-scale biofuel implementation.
Keywords: agricultural wastes, carboxylate platform, rumen microbiome, volatile fatty acids
Procedia PDF Downloads 130
1488 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data
Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira
Abstract:
Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, the coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. This service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, integrating data from Copernicus satellites and drones/unmanned aerial vehicles, validated by existing online in-situ data. Since WORSICA operates on the European Open Science Cloud (EOSC) computational infrastructure, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, but some usage costs may apply, depending on the type of computational resources needed by each application/user.
Although the service has three main sub-services, i) coastline detection, ii) inland water detection, and iii) water leak detection in irrigation networks, the present study shows an application of the service to the Óbidos lagoon in Portugal, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented, based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with tidal data obtained from the FES model, the system can estimate a coastline at the corresponding level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas: i) emergencies, by providing fast access to inundated areas to support rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC
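The water indexes named in this abstract are simple band ratios. As a hedged illustration only (not WORSICA's actual implementation; function names and sample values are invented), the McFeeters NDWI and Xu's MNDWI can be computed from reflectance rasters, with a zero threshold as a rough water mask:

```python
import numpy as np

def ndwi(green, nir, eps=1e-12):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); values > 0 suggest open water."""
    green, nir = np.asarray(green, float), np.asarray(nir, float)
    return (green - nir) / (green + nir + eps)

def mndwi(green, swir, eps=1e-12):
    """Xu's modified NDWI = (Green - SWIR) / (Green + SWIR); suppresses built-up noise."""
    green, swir = np.asarray(green, float), np.asarray(swir, float)
    return (green - swir) / (green + swir + eps)

# Toy 2x2 reflectance rasters: water pixels reflect more green than NIR.
green = np.array([[0.30, 0.05],
                  [0.28, 0.06]])
nir = np.array([[0.05, 0.40],
                [0.04, 0.35]])
water_mask = ndwi(green, nir) > 0.0  # simple zero threshold delineates water
```

A zero threshold is only the textbook starting point; operational services typically tune the threshold per scene and per index.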
Procedia PDF Downloads 126
1487 Quantitative Seismic Interpretation in the LP3D Concession, Central Sirte Basin, Libya
Authors: Tawfig Alghbaili
Abstract:
The LP3D Field is located near the center of the Sirte Basin, in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east, a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation, mainly limestone interbedded with shale, with an average thickness of 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic interpretation was carried out and well log data were analyzed. Six reflectors corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations were picked and mapped. Particular attention was given to fault interpretation because of the structural complexity of the area. Different attribute analyses were performed to better understand the lateral extent of the structures and to obtain a clearer image of the fault blocks. Time-to-depth conversion was computed using velocity modeling generated from check shot and sonic data. A simplified stratigraphic cross-section was drawn through wells A1, A2, A3, and A4-LP3D, and the distribution and thickness variations of the Beda reservoir across the study area were demonstrated. Petrophysical analysis of wireline logs was also carried out, and cross-plots of petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was designed and run to generate fault, facies, and petrophysical models and to calculate the reservoir volumetrics. This study concluded that the depth structure map of the Beda Formation shows that the main structure in the area of study is a north-south faulted anticline.
Based on the Beda reservoir models, the volumetrics for the base case were calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and support a better understanding of the fault system in the area.
Keywords: LP3D Field, Beda Formation, reservoir models, seismic attributes
Procedia PDF Downloads 214
1486 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components
Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin
Abstract:
This study presents an on-site acoustic inspection methodology for the quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, achieved by successively performing acoustic beamforming and sound intensity measurements. A portable experimental setup is developed, consisting of an omnidirectional broadband acoustic source, a microphone array and a sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows and junctions, the three-dimensional sound intensity field in the vicinity of junctions, and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform, superimposing them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process of buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room featuring a prefabricated panel. The panel is installed with controlled defects, such as missing insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakages arising from the building defects and can assist in correcting them. The second validation case is performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurement and visualisation tools, the homogeneity of the partition installation is evaluated and leakages from junctions and doors are identified.
Furthermore, the integration of acoustic indicators together with thermal and geometrical indicators via the augmented reality platform is shown.
Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localisation
Procedia PDF Downloads 383
1485 The Regulation of Reputational Information in the Sharing Economy
Authors: Emre Bayamlıoğlu
Abstract:
This paper aims to provide an account of the legal and regulative aspects of algorithmic reputation systems, with a special emphasis on the sharing economy (e.g., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely, the host platform, the individual sharers/service providers and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from the Airbnb platform, a pioneering success in the sharing economy. The following section focuses on the issue of governance and control of reputational information. The section first analyzes the legal consequences of algorithmic filtering systems designed to detect undesired comments, and how a delicate balance could be struck between competing interests such as freedom of speech, privacy and the integrity of commercial reputation. The third section deals with the problem of manipulation by users. Indeed, many sharing economy businesses employ techniques of data mining and natural language processing to verify the consistency of feedback. Software agents referred to as "bots" are employed by users to "produce" fake reputation values. Such automated techniques are deceptive and significantly undermine the trust upon which the reputational system is built. The fourth section explores concerns regarding data mobility, data ownership, and privacy. Reputational information provided by consumers in the form of textual comments may be regarded as writing eligible for copyright protection. Algorithmic reputational systems also contain personal data pertaining to both the individual entrepreneurs and the consumers.
The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust, and provides an evaluation of the above legal arguments from the perspective of the public interest in the integrity of reputational information. The paper concludes with guidelines and design principles for algorithmic reputation systems that address the legal implications raised above.
Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy
Procedia PDF Downloads 465
1484 From Madrassah to Elite Schools: The Political Economy of Pluralistic Educational Systems in Pakistan
Authors: Ahmad Zia
Abstract:
This study problematizes the notion that the pluralistic educational system in Pakistan fosters equality. Instead, it argues that this system not only reflects but also sustains existing class divisions, with implications for the future economic and social mobility of children. The primary goal of this study is to explore unequal access to educational opportunities in Pakistan. By examining the intersection between education and socioeconomic status, it explores the implications of key disparities between the different tiers of the education system in Pakistan, such as madrassahs, public schools and private schools, with an emphasis on how these institutions contribute to the maintenance of class hierarchies. This is a primary-data-based case study, and the most recent data were gathered directly. Qualitative methods were used to collect data from the units of data collection (UDCs). Bourdieu's theory is used as the leading framework; its application in the context of a country like Pakistan is very productive. The thematic analysis method was chosen to analyse the data; this process helped identify the relevant main themes and subthemes emerging from the data, which comprise the analysis. Findings reveal that the educational landscape in Pakistan is deeply divided, with far-reaching implications for social mobility and access to opportunities. The study found profound disparities among the various educational institutions, which widen existing socioeconomic divides. Each kind of educational institution operates in a distinct socio-cultural and economic environment. Therefore, access to quality education is highly stratified and remains a privilege for those who can afford it, widening the socioeconomic gap that already exists. The relationship between pluralistic education and class stratification has not been investigated extensively in the literature so far.
This study adds to a multifaceted understanding of educational disparities in Pakistan by analysing the intersections between socioeconomic divisions and educational access. It offers valuable theoretical and practical insights into the subject, providing theoretical concepts and empirical data to enhance scholars' understanding of socioeconomic inequality, specifically in relation to education systems.
Keywords: social inequality, pluralism, class divide, capitalism, globalisation, elitism, education
Procedia PDF Downloads 10
1483 A Multi-CORDIC Architecture on an FPGA Platform
Authors: Ahmed Madian, Muaz Aljarhi
Abstract:
The Coordinate Rotation Digital Computer (CORDIC) is a digital computing unit intended for the computation of mathematical operations and functions. This paper presents a multi-CORDIC processor that integrates different CORDIC architectures on a single FPGA chip and allows users to select the CORDIC architecture to use based on what they want to calculate and on their needs. Synthesis results show that the radix-2 CORDIC has the lowest clock delay, the radix-8 CORDIC has the highest LUT usage and the lowest register usage, while the hybrid radix-4 CORDIC has the highest clock delay.
Keywords: multi, CORDIC, FPGA, processor
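As background for the comparison above: a radix-2 CORDIC in circular rotation mode computes sine and cosine using only shift-and-add operations per iteration. The floating-point sketch below is our illustration of the classic algorithm, not the paper's FPGA implementation:

```python
import math

def cordic_sin_cos(theta, iterations=32):
    """Radix-2 CORDIC, circular rotation mode: returns (cos, sin) of theta.
    Converges for |theta| <= pi/2; each step rotates by +/- atan(2^-i)."""
    # Precompute the micro-rotation angles and the cumulative gain correction K.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0      # steer the residual angle toward zero
        # In hardware, the 2^-i factors are pure bit shifts.
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k

c, s = cordic_sin_cos(math.pi / 6)  # cos 30 deg, sin 30 deg
```

Because each iteration needs only shifts, adds and a table lookup, the per-iteration logic stays small, which is what makes CORDIC attractive on FPGAs.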
Procedia PDF Downloads 470
1482 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform TM based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently.
For demonstration, a total of 13,254 metabolic syndrome training observations are plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, are intentionally included to gain predictive analytics insights into variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. In contrast, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
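The rule-generation idea can be illustrated with a hedged sketch (this is not the platform's actual algorithm, and the item names are invented): build a pairwise co-occurrence graph from transactions and rank edges by a simple "surprise" metric, i.e., observed co-occurrence versus the count expected if the items were independent:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_rules(transactions, min_count=2):
    """Build a pairwise co-occurrence graph from transactions (lists of items)
    and rank edges by surprise = observed co-occurrence / expected-under-independence,
    a simplified analogue of the cluster-ranking metrics described above."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in transactions:
        items = sorted(set(basket))        # deduplicate and fix pair ordering
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), observed in pair_counts.items():
        if observed < min_count:
            continue
        expected = item_counts[a] * item_counts[b] / n  # independence baseline
        rules.append(((a, b), observed, observed / expected))
    # Highest surprise first: pairs that co-occur far more often than chance.
    return sorted(rules, key=lambda r: r[2], reverse=True)

baskets = [["obesity", "hypertension"], ["obesity", "hypertension", "smoking"],
           ["smoking"], ["obesity", "hypertension"], ["exercise"]]
ranked = cooccurrence_rules(baskets)  # each entry: ((item_a, item_b), count, surprise)
```

Each ranked pair reads as a candidate hypothesis ("these features co-occur more than chance predicts") to be tested later on held-out data, which mirrors the train/test split described in the abstract.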
Procedia PDF Downloads 276
1481 Personalized Tissue and Organ Replacement: A Peek into the Future
Authors: Asaf Toker
Abstract:
Matricelf has developed a technology that enables the production of autologous engineered tissue composed of a matrix and cells derived from a patient's omentum biopsy. The platform showed remarkable pre-clinical results for several medical conditions. The company recently licensed the technology that enabled scientists at Tel Aviv University to 3D print a human heart from human cells and matrix for the first time in history. The company plans to conduct its first human clinical trial, for acute spinal cord injury (SCI), early in 2023.
Keywords: tissue engineering, regenerative medicine, spinal cord injury, autologous implants, iPSC
Procedia PDF Downloads 126
1480 Perception of Nursing Students' Engagement with Emergency Remote Learning during the COVID-19 Pandemic
Authors: Jansirani Natarajan, Mickael Antoinne Joseph
Abstract:
The COVID-19 pandemic interrupted face-to-face education and forced universities into an emergency remote teaching curriculum over a short duration. This abrupt transition in the Spring 2020 semester left both faculty and students without proper preparation for continuing higher education in an online environment. Online learning took place in different formats, including fully synchronous, fully asynchronous, and blended, in our university through the e-learning platform MOODLE. Studies have shown that student engagement is a critical factor for optimal online teaching. Very few studies have assessed online engagement with ERT during the COVID-19 pandemic. Purpose: This study sought to understand how the sudden transition to emergency remote teaching impacted nursing students' engagement with online courses in a Middle Eastern public university. Method: A cross-sectional descriptive research design was adopted. Data were collected through a self-reported online survey using Dixson's online student engagement questionnaire from a sample of 177 nursing students after the ERT learning semester. Results: The maximum possible engagement score was 95, and the maximum scores in the domains of skills engagement, emotional engagement, participation engagement, and performance engagement were 30, 25, 30, and 10, respectively. Dixson (2010) noted that a mean item score of ≥3.5 (total score of ≥66.5) represents a highly engaged student. The majority of the participants were females (71.8%), and 84.2% were regular BSN students. Most of them (32.2%) were second-year students, and 52% had a CGPA between 2 and 3. Most participants (56.5%) had low engagement scores with ERT learning during the COVID lockdown. Among the four engagement domains, 78% had low engagement scores for the participation domain. No significant association was found between engagement and the demographic characteristics of the participants.
Conclusion: The findings supported the importance of engaging students in all four categories: skill, emotional, performance, and participation. Based on the results, training sessions were organized for faculty on strategies for engaging nursing students in all domains using the facilities available in MOODLE (the online e-learning platform). The study also added value as a dashboard of information regarding ERT for administrators and nurse educators, helping them introduce active learning strategies to improve the quality of teaching and learning of nursing students in the university.
Keywords: engagement, perception, emergency remote learning, COVID-19
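The scoring rule quoted in the results (mean item score ≥ 3.5, equivalent to a total of ≥ 66.5 across 19 five-point items with a 95-point maximum) can be sketched as follows; the function name and example ratings are illustrative, not from the study:

```python
def classify_engagement(item_scores, threshold_mean=3.5):
    """Score a 19-item, 5-point engagement questionnaire (maximum 19 * 5 = 95)
    and label the student 'high' when the mean item score reaches the threshold
    (mean >= 3.5 is the same cut-off as total >= 19 * 3.5 = 66.5)."""
    assert len(item_scores) == 19 and all(1 <= s <= 5 for s in item_scores)
    total = sum(item_scores)
    mean = total / len(item_scores)
    return total, ("high" if mean >= threshold_mean else "low")

# A student rating every item 4 out of 5: total 76 >= 66.5, hence highly engaged.
total, label = classify_engagement([4] * 19)
```

The domain maxima reported above (30 + 25 + 30 + 10) sum to the same 95-point ceiling, so domain-level cut-offs follow the same mean-based logic.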
Procedia PDF Downloads 63
1479 Soft Robotic System for Mechanical Stimulation of Scaffolds During Dynamic Cell Culture
Authors: Johanna Perdomo, Riki Lamont, Edmund Pickering, Naomi C. Paxton, Maria A. Woodruff
Abstract:
Background: Tissue engineering (TE) has combined advanced materials, such as biomaterials, to create affordable scaffolds, and dynamic systems to stimulate the cells seeded on these scaffolds, improving and maintaining the cellular growth process in a cell culture. However, few TE skin products have been clinically translated, and more research is required to produce highly biomimetic skin substitutes that mimic the native elasticity of skin in a controlled manner. Therefore, this work is focused on the fabrication of a novel mechanical system to enhance TE treatment approaches for the repair of damaged skin tissue. Aims: To achieve this, a soft robotic device will be created to emulate different skin stress deformations. The design of this soft robot will allow the attachment of scaffolds, which will then be mechanically actuated. This will provide a novel and highly adaptable platform for dynamic cell culture. Methods: A novel, low-cost soft robot is fabricated via 3D-printed moulds and silicone. A low-cost electro-mechanical device was constructed to actuate the soft robot through a controlled combination of positive and negative air pressure governing the different states of movement. Mechanical tests were conducted to assess the performance and calibration of each electronic component. Similarly, a pressure-displacement test was performed on scaffolds attached to the soft robot, applying various mechanical loading regimes. Lastly, a digital image correlation (DIC) test was performed to obtain strain distributions over the soft robot's surface. Results: The control system can control and stabilise positive pressure changes over many hours. Similarly, the pressure-displacement test demonstrated that scaffolds with a 5 µm diameter and wavy geometry can displace at 100% under a maximum applied pressure of 1.5 PSI.
Lastly, during the inflation state, the displacement of the silicone was measured using the DIC method, showing a displacement of 4.78 mm and a strain of 0.0652. Discussion and Conclusion: The developed soft robot system provides a novel, low-cost platform for the dynamic actuation of tissue scaffolds, with a target towards dynamic cell culture.
Keywords: soft robot, tissue engineering, mechanical stimulation, dynamic cell culture, bioreactor
Procedia PDF Downloads 96
1478 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that its scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt, and can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same coordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information, before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) determination of the interior camera orientation parameters; (3) determination of the relative exterior orientation parameters of the video frames with respect to each other; (4) determination of the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria, was used for testing our method.
Fifteen minutes of video and telemetry data were collected using the UAV, and the data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets in the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with the 6 control points measured by a GPS receiver, is between 3 and 5 meters.
Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
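The geometric core of step (4) is the collinearity relationship between image and ground coordinates. The sketch below is only a rough illustration (not the authors' implementation): it back-projects an image point onto a flat ground plane given an assumed exterior orientation, with the omega-phi-kappa angle convention and the flat-terrain assumption standing in for the full mathematical model and elevation data a real ortho-rectification would use.

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Object-to-image rotation from omega-phi-kappa angles (radians),
    using the standard photogrammetric convention."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [cp * ck, co * sk + so * sp * ck, so * sk - co * sp * ck],
        [-cp * sk, co * ck - so * sp * sk, so * ck + co * sp * sk],
        [sp, -so * cp, co * cp],
    ]

def image_to_ground(x, y, f, cam_pos, angles, ground_z=0.0):
    """Intersect the ray through image point (x, y) with a horizontal
    ground plane; f is the focal length, cam_pos the camera position."""
    X0, Y0, Z0 = cam_pos
    M = rotation_matrix(*angles)       # object -> image frame
    v = (x, y, -f)                     # image ray in the camera frame
    # Back-rotate the ray into the object frame: d = M^T v
    d = [sum(M[r][c] * v[r] for r in range(3)) for c in range(3)]
    s = (ground_z - Z0) / d[2]         # scale factor to reach the ground
    return (X0 + s * d[0], Y0 + s * d[1])
```

For a nadir-pointing camera at 100 m altitude with a 50 mm focal length, an image point 10 mm off-centre maps to a ground point 20 m from the nadir point.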
Procedia PDF Downloads 478
1477 Integrating One Health Approach with National Policies to Improve Health Security post-COVID-19 in Vietnam
Authors: Yasser Sanad, Thu Trang Dao
Abstract:
Introduction: Implementing the One Health (OH) approach requires an integrated, interdisciplinary, and cross-sectoral methodology. OH is a key tool for developing and implementing programs and projects and includes developing ambitious policies that consider the common needs and benefits of human, animal, plant, and ecosystem health. OH helps humanity readjust its path to environmentally friendly and impartial sustainability. As co-leader of the Global Health Security Agenda’s Zoonotic Disease Action Package, Vietnam pioneered a strong OH approach to effectively address the early waves of the COVID-19 outbreak in-country. Context and Aim: The repeated surges of COVID-19 in Vietnam challenged the capabilities of the national system and exposed gaps in multi-sectoral coordination and resilience. To address this, FHI 360 advocated for the standardization of the OH platform by government actors to increase the resiliency of the system during and after COVID-19. Methods: FHI 360 coordinated technical resources to develop and implement evidence-based OH policies, promoting high-level policy dialogue between the Ministries of Health, Agriculture, and the Environment, and policy research to inform the developed policies and frameworks. Through these discussions, a One Health Partnership (OHP) was formed, linking climate change, the environment, and human and animal health. Findings: The OHP Framework created a favorable policy environment within and between sectors, as well as between governments and international health security partners. It also promoted strategic dialogue, resource mobilization, policy advocacy, and integration of international systems with National Steering Committees to ensure accountability and emphasize national ownership. Innovative contribution to policy, practice and/or research: The OHP was an effective evidence-based research-to-policy platform linked to the National One Health Strategic Plan (2021-2025).
Collectively, they serve as a national framework for the implementation and monitoring of OH activities. Through the adoption of these policies and plans, the risk of zoonotic pathogens, environmental agent spillover, and antimicrobial resistance can be minimized by strengthening multi-sectoral OH collaboration for health security.
Keywords: one health, national policies, health security, COVID-19, Vietnam
Procedia PDF Downloads 105
1476 The First Transcriptome Assembly of Marama Bean: An African Orphan Crop
Authors: Ethel E. Phiri, Lionel Hartzenberg, Percy Chimwamuromba, Emmanuel Nepolo, Jens Kossmann, James R. Lloyd
Abstract:
Orphan crops are under-researched and underutilized food plant species that have not been categorized as major food crops but have the potential to be economically and agronomically significant. They have been documented to tolerate extreme environmental conditions. However, limited research has been conducted to uncover their potential as food crop species. The New Partnership for Africa’s Development (NEPAD) has classified Marama bean, Tylosema esculentum, as an orphan crop. The plant is one of the 101 African orphan crops that must have their genomes sequenced, assembled, and annotated in the foreseeable future. Marama bean is a perennial leguminous plant that primarily grows in poor, arid soils in southern Africa. The plants produce large tubers that can weigh as much as 200 kg. While the foliage provides fodder, the tuber is carbohydrate-rich and is a staple food source for rural communities in Namibia. The edible seeds are also protein- and oil-rich. Marama bean plants respond rapidly to increased temperatures and severe water scarcity without extreme consequences. Advances in molecular biology and biotechnology have made it possible to effectively transfer technologies from model and major crops to orphan crops. In this research, the aim was to assemble the first transcriptome of Marama bean from RNA-sequence data. Many model plant species have had their genomes sequenced and their transcriptomes assembled. The availability of transcriptome data for a non-model crop plant species will therefore allow for gene identification and comparisons between various species. 
The data has been sequenced using the Illumina HiSeq 2500 sequencing platform. Data analysis is underway. In essence, this research will eventually evaluate the potential use of Marama bean as a crop species to improve its value in agronomy.
Keywords: 101 African orphan crops, RNA-Seq, Tylosema esculentum, underutilised crop plants
Procedia PDF Downloads 360
1475 A Generalized Model for Performance Analysis of Airborne Radar in Clutter Scenario
Authors: Vinod Kumar Jaysaval, Prateek Agarwal
Abstract:
Performance prediction of airborne radar is a challenging and cumbersome task in clutter scenarios for different types of targets. A generalized model is required to predict the performance of the radar for air targets as well as ground moving targets. In this paper, we propose a generalized model to bring out the performance of airborne radar for different Pulse Repetition Frequencies (PRFs) as well as different types of targets. The model provides a platform to bring out different subsystem parameters for different applications and performance requirements under different types of clutter terrain.
Keywords: airborne radar, blind zone, clutter, probability of detection
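One PRF-dependent effect such a model must capture is the range blind zone named in the keywords: echoes that arrive while the radar is transmitting are eclipsed and lost. A minimal sketch of that relationship (an illustrative simplification, not the proposed generalized model) is:

```python
C = 299_792_458.0  # speed of light, m/s

def range_blind_zones(prf_hz, pulse_width_s, n_max=3):
    """Range intervals eclipsed by transmission in a pulsed radar.
    Echoes from ranges near n * R_u (R_u = c / (2 * PRF), the unambiguous
    range) arrive while a pulse is being transmitted and cannot be received."""
    r_u = C / (2.0 * prf_hz)            # unambiguous range
    width = C * pulse_width_s / 2.0     # eclipsed range span per pulse
    return [(n * r_u, n * r_u + width) for n in range(n_max + 1)]
```

For a 1 kHz PRF and a 10 µs pulse, each blind zone is about 1.5 km deep and repeats roughly every 150 km of range; raising the PRF brings the zones closer together, which is one reason multi-PRF schedules are used.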
Procedia PDF Downloads 470
1474 Net Neutrality and Asymmetric Platform Competition
Authors: Romain Lestage, Marc Bourreau
Abstract:
In this paper we analyze the interplay between access to the last-mile network and net neutrality in the market for Internet access. We consider two Internet Service Providers (ISPs), which act as platforms between Internet users and Content Providers (CPs). One of the ISPs is vertically integrated and provides access to its last-mile network to the other (non-integrated) ISP. We show that a lower access price increases the integrated ISP's incentives to charge CPs positive termination fees (i.e., to deviate from net neutrality), and decreases the non-integrated ISP's incentives to charge positive termination fees.
Keywords: net neutrality, access regulation, internet access, two-sided markets
Procedia PDF Downloads 375
1473 FPGA Implementation of the BB84 Protocol
Authors: Jaouadi Ikram, Machhout Mohsen
Abstract:
The development of a quantum key distribution (QKD) system on a field-programmable gate array (FPGA) platform is the subject of this paper. A quantum cryptographic protocol is designed based on the properties of quantum information and the characteristics of FPGAs. The proposed protocol performs key extraction, reconciliation, error correction, and privacy amplification tasks to generate a perfectly secret final key. We modeled the presence of an eavesdropper (spy) in our system, with a strategy for revealing some of the exchanged information without being noticed. Using an FPGA card with a 100 MHz clock frequency, we have demonstrated the evolution of the error rate as well as the amounts of mutual information (between the two interlocutors, and that of the spy) passing from one step to another in the key generation process.
Keywords: QKD, BB84, protocol, cryptography, FPGA, key, security, communication
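The sifting stage of BB84, and the error signature an intercepting spy leaves on it, can be sketched in a few lines of host-side code. This is a behavioral illustration only: the paper's implementation is FPGA logic, and the eavesdropping strategy below is the textbook intercept-resend attack, which may differ from the authors' spy model.

```python
import random

def bb84_sift(n_bits, eavesdrop=False, seed=1):
    """Simulate BB84 key sifting; returns Alice's and Bob's sifted keys.
    Bases: 0 = rectilinear, 1 = diagonal. An intercept-resend eavesdropper
    measures in a random basis and resends, disturbing ~25% of sifted bits."""
    rng = random.Random(seed)
    alice_key, bob_key = [], []
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        a_basis = rng.randint(0, 1)
        sent_bit, sent_basis = bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            if e_basis != sent_basis:   # wrong basis -> random outcome
                sent_bit = rng.randint(0, 1)
            sent_basis = e_basis        # Eve resends in her own basis
        b_basis = rng.randint(0, 1)
        if b_basis == sent_basis:
            measured = sent_bit
        else:                           # basis mismatch -> random outcome
            measured = rng.randint(0, 1)
        if b_basis == a_basis:          # bases announced; keep matches only
            alice_key.append(bit)
            bob_key.append(measured)
    return alice_key, bob_key
```

Without the eavesdropper the sifted keys agree exactly; with intercept-resend, roughly a quarter of the sifted bits disagree, which is the error-rate evolution the hardware monitors.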
Procedia PDF Downloads 183
1472 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing
Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas
Abstract:
This work proposes a Cooperative-Competitive (Coopetitive) approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ) and government funds from the National Council for Science and Technology (CONACYT) or other international organizations, in order to work on an overall knowledge transfer strategy with e-learning over the Cloud, where experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation and knowledge transfer at large scale using a Cloud Computing platform. This allows teachers and students to have all the information required to ensure nationally homologated knowledge of topics such as mathematics, statistics, chemistry, history, ethics, civics, etc. This work will start with a pilot test in Spanish and initially in two regional indigenous languages, Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico’s central region. Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous tribes from different regions, and the information and communication technologies needed to deliver the knowledge to the indigenous schools in their native language.
The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; research with the SEP on the location of existing indigenous schools; analysis and inventory of current school conditions; negotiation with tribe chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers’ technology knowledge; selection of a pilot topic; analysis of actual student competence under the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and storage strategy for Cloud Computing; translation of the topic into both languages; indigenous teachers’ training; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. Phase one of the project is important in multiple ways: it includes the proposal of a working technological scheme and focuses on the cultural impact in Mexico, so that indigenous tribes can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients, while disclosing the strengths and weaknesses of each region, offering regional products through cloud computing platforms, and opening communication spaces for inter-indigenous cultural exchange.
Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language
Procedia PDF Downloads 404
1471 Becoming Academic in the Entrepreneurial University: Researcher Identities and Research Impact Development
Authors: Victoria G. Mountford-Brown
Abstract:
The concept of the Entrepreneurial University and the emphasis on higher education institutions both as hives of innovation and as producers of future innovators accord special significance to the role of academic researchers in future economic and social prosperity. Researcher development in the UK has embedded an emphasis, or ‘enterprise lens’, on developing the capabilities of researchers to support a stable economy whilst providing solutions to societal challenges. However, the notion of the ‘entrepreneurial university’ and what it represents to many academics is met with tension and (dis)engagement with the premises of the ‘knowledge economy’ or ‘academic capitalism.’ The research is set in a landscape of UK higher education wherein the increasing emphasis on research impact, coupled with increasing competition for scarce funding, has created a ‘climate of performativity’. This research seeks to better understand the ways in which academic identities are (re)constructed in the everyday experiences of doctoral (PGR) and early career researchers (ECRs) as they navigate what some refer to as the ‘academic hunger games’. These daily pressures and high expectations of success are part of the identity work PGRs and ECRs undergo. This is often fraught with tension and struggles to adapt to the research environment, suggesting a reason why imposter phenomenon is rife in academia, particularly (but not exclusively) in the early stages of development. This pilot study involves qualitative semi-structured exploratory interviews with a mixed-gender sample of participants from a variety of subject disciplines who have taken part in an intensive 3-day innovation and enterprise program for PGRs and ECRs premised on developing personal and research impact.
The research seeks to better understand the processes of identity formation involved in becoming academic and offers a commentary on the notion of ‘imposter phenomenon’ and the exchange and development of resources, or capital, needed to ‘play the game’ in academia in the context of the ‘entrepreneurial university’. It explores ongoing (re)constructions of what it means to be an academic and the different ways in which social identities may embody and challenge the development of entrepreneurial academic identities. As such, it aims to contribute to our understanding of the innovation ecosystem of academia and the prosperity of academic researchers.
Keywords: entrepreneurial development, higher education, identities, researcher development
Procedia PDF Downloads 96
1470 Model Observability – A Monitoring Solution for Machine Learning Models
Authors: Amreth Chandrasehar
Abstract:
Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations be more efficient and help drive the business. But this comes at a massive development cost and in lost business opportunities. According to the Gartner report, 85% of data science projects fail, and one of the factors impacting this is not paying attention to Model Observability. Model Observability helps developers and operators pinpoint model performance issues such as data drift and helps identify the root cause of issues. This paper focuses on providing insights into incorporating model observability in model development and operationalizing it in production.
Keywords: model observability, monitoring, drift detection, ML observability platform
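As a concrete illustration of the drift-detection side of model observability (not taken from the paper; the Population Stability Index is just one widely used drift score), a minimal monitor might compare a live feature sample against its training baseline:

```python
import math

def psi(expected, actual, n_bins=10, eps=1e-6):
    """Population Stability Index between a baseline sample and a live one.
    Bins span the combined range; zero fractions are floored at eps."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    step = (hi - lo) / n_bins or 1.0

    def fractions(sample):
        counts = [0] * n_bins
        for v in sample:
            idx = min(int((v - lo) / step), n_bins - 1)
            counts[idx] += 1
        return [max(c / len(sample), eps) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A common rule of thumb reads PSI below 0.1 as stable and above 0.25 as significant drift worth alerting on.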
Procedia PDF Downloads 112
1469 Multi-Criteria Decision Making Tool for Assessment of Biorefinery Strategies
Authors: Marzouk Benali, Jawad Jeaidi, Behrang Mansoornejad, Olumoye Ajao, Banafsheh Gilani, Nima Ghavidel Mehr
Abstract:
The Canadian forest industry is seeking to identify and implement transformational strategies for enhanced financial performance through the emerging bioeconomy or, more specifically, through the concept of the biorefinery. For example, processing forest residues or the surplus of biomass available on mill sites for the production of biofuels, biochemicals and/or biomaterials is one of the attractive strategies, along with traditional wood and paper products and cogenerated energy. There are many possible process-product biorefinery pathways, each associated with specific product portfolios with different levels of risk. Thus, it is not obvious which unique strategy the forest industry should select and implement. Therefore, there is a need for analytical and design tools that enable evaluating biorefinery strategies based on a set of criteria considering a perspective of sustainability over the short and long terms, while selecting the existing core products as well as the new product portfolio. In addition, it is critical to assess the manufacturing flexibility to internalize the risk from market price volatility of each targeted bio-based product in the portfolio, prior to investing heavily in any biorefinery strategy. This paper will introduce a systematic methodology for designing integrated biorefineries using process systems engineering tools, as well as a multi-criteria decision making framework to put forward the most effective biorefinery strategies that fulfill the needs of the forest industry. Topics to be covered include market analysis, techno-economic assessment, cost accounting, energy integration analysis, life cycle assessment and supply chain analysis. This will be followed by a description of the vision as well as the key features and functionalities of the I-BIOREF software platform, developed by CanmetENERGY of Natural Resources Canada.
Two industrial case studies will be presented to support the robustness and flexibility of the I-BIOREF software platform: i) an integrated Canadian Kraft pulp mill with a lignin recovery process (namely, LignoBoost™); ii) a standalone biorefinery based on an ethanol-organosolv process.
Keywords: biorefinery strategies, bioproducts, co-production, multi-criteria decision making, tool
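Multi-criteria ranking of candidate strategies can be sketched with a standard method such as TOPSIS (shown here purely as a generic illustration; the abstract does not state which MCDM scheme I-BIOREF uses). Criteria may mix benefit measures, where more is better (e.g., return), and cost measures, where less is better (e.g., market risk):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.
    benefit[j] is True when larger values of criterion j are better.
    Returns a closeness score in [0, 1] per alternative; higher is better."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, ideal)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

For example, `topsis([[10, 2], [5, 8]], [0.5, 0.5], [True, False])` ranks the first alternative (high return, low risk) ahead of the second.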
Procedia PDF Downloads 232
1468 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy
Authors: Paul R Armstrong
Abstract:
Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices, namely NMR, have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform for accurately identifying DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, and for a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model developed from all crosses to predict oil content, which was then used for sorting each induction cross; the second was the development of a specific model from a single induction cross, where approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced based on traditional commercial methods using coloration of the tip cap and germ areas.
The generalized PLS oil model statistics were R² = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model resulted in extracting 55% to 85% of haploid kernels from the four induction crosses. Using the second method of generating a model for each cross yielded model statistics ranging from R² = 0.96 to 0.98 and RMSEs from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required models that were cross-specific. In summary, the first, generalized oil-model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of developing a sorting model from a single cross. The penalty for the second method is that a PLS model would need to be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
Keywords: NIR, haploids, maize, sorting
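Once per-kernel oil predictions are available from a PLS model, the sorting decision itself reduces to thresholding. The toy sketch below illustrates that final step only; the oil values and midpoint rule are made up for illustration and are not the study's calibration:

```python
def oil_threshold(dh_oils, hybrid_oils):
    """Midpoint between the class means of calibration kernels;
    assumes DH kernels run lower in oil than their hybrid siblings."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(dh_oils) + mean(hybrid_oils)) / 2.0

def sort_kernels(predicted_oils, threshold):
    """Indices of kernels classified as haploid (predicted oil below threshold)."""
    return [i for i, oil in enumerate(predicted_oils) if oil < threshold]
```

With hypothetical calibration means of 3.5% (DH) and 6.0% (hybrid), the threshold lands at 4.75%, and any kernel predicted below it is routed to the haploid bin.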
Procedia PDF Downloads 302
1467 An Analysis of Emmanuel Macron's Campaign Discourse
Authors: Robin Turner
Abstract:
In the context of the strengthening conservative movements such as “Brexit” and the election of US President Donald Trump, the global political stage was shaken up by the election of Emmanuel Macron to the French presidency, defeating the far-right candidate Marine Le Pen. The election itself was a first for the Fifth Republic in which neither final candidate was from the traditional two major political parties: the left Parti Socialiste (PS) and the right Les Républicains (LR). Macron, who served as the Minister of Finance under his predecessor, founded the centrist liberal political party En Marche! in April 2016 before resigning from his post in August to launch his bid for the presidency. Between the time of the party’s creation to the first round of elections a year later, Emmanuel Macron and En Marche! had garnered enough support to make it to the run-off election, finishing far ahead of many seasoned national political figures. Now months into his presidency, the youngest President of the Republic shows no sign of losing fuel anytime soon. His unprecedented success raises a lot of questions with respect to international relations, economics, and the evolving relationship between the French government and its citizens. The effectiveness of Macron’s campaign, of course, relies on many factors, one of which is his manner of communicating his platform to French voters. Using data from oral discourse and primary material from Macron and En Marche! in sources such as party publications and Twitter, the study categorizes linguistic instruments – address, lexicon, tone, register, and syntax – to identify prevailing patterns of speech and communication. The linguistic analysis in this project is two-fold. In addition to these findings’ stand-alone value, these discourse patterns are contextualized by comparable discourse of other 2017 presidential candidates with high emphasis on that of Marine Le Pen. 
Secondly, to provide an alternative approach, the study contextualizes Macron’s discourse using those of two immediate predecessors representing the traditional stronghold political parties, François Hollande (PS) and Nicolas Sarkozy (LR). These comparative methods produce an analysis that gives insight not only into a contributing factor in Macron’s successful 2017 campaign but also into how Macron’s platform presents itself differently from previous presidential platforms. Furthermore, this study extends the analysis to supply data that contributes to a wider analysis of the defeat of the “traditional” French political parties by the “start-up” movement En Marche!.
Keywords: Emmanuel Macron, French, discourse analysis, political discourse
Procedia PDF Downloads 261
1466 Cooperation of Unmanned Vehicles for Accomplishing Missions
Authors: Ahmet Ozcan, Onder Alparslan, Anil Sezgin, Omer Cetin
Abstract:
The use of unmanned systems for different purposes has become very popular over the past decade. Expectations from these systems have shown an incredible increase in parallel. However, meeting the demands of a mission is often not possible with a single unmanned vehicle, so it is necessary to use multiple autonomous vehicles with different abilities together in coordination. Using the same type of vehicles together as a swarm helps especially to satisfy the time constraints of missions effectively; in other words, it allows the workload to be shared among a number of homogeneous platforms. Beyond this, many kinds of problems require the different capabilities of heterogeneous platforms to be used together cooperatively to achieve successful results, and in this case cooperative working brings additional problems beyond those of homogeneous clusters. In the scenario presented as an example problem, an autonomous ground vehicle, which lacks its own position information, is expected to perform point-to-point navigation without losing its way in a previously unknown labyrinth. Furthermore, the ground vehicle is equipped with very limited sensors, such as ultrasonic sensors that can detect obstacles. It is very hard for the ground vehicle to plan or complete the mission by itself without losing its way in the unknown labyrinth. Thus, in order to assist the ground vehicle, an autonomous air drone is also used to solve the problem cooperatively. The autonomous drone also has limited sensors, such as a downward-looking camera and an IMU, and it likewise cannot compute its global position. In this context, we aim to solve the problem effectively without taking additional support or input from outside, just benefiting from the capabilities of the two autonomous vehicles.
To manage point-to-point navigation in a previously unknown labyrinth, the platforms have to work together in a coordinated manner. In this paper, the cooperative work of heterogeneous unmanned systems is handled in an applied sample scenario, and we describe how an autonomous ground vehicle and an autonomous flying platform can work together in harmony to take advantage of platform-specific capabilities. The difficulties of using multiple heterogeneous autonomous platforms in a mission are put forward, and successful solutions are defined and implemented for problems such as spatially distributed task planning, simultaneous coordinated motion, effective communication, and sensor fusion.
Keywords: unmanned systems, heterogeneous autonomous vehicles, coordination, task planning
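One way to picture the division of labor (an illustrative sketch, not the authors' implementation): the drone's downward camera yields an occupancy grid of the labyrinth, a planner finds the shortest corridor route, and the resulting waypoint list is relayed to the ground vehicle to follow. The planner itself can be a plain breadth-first search:

```python
from collections import deque

def plan_path(grid, start, goal):
    """BFS shortest path on an occupancy grid (0 = free, 1 = wall).
    Returns the waypoint list from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}               # visited set + back-pointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:             # reconstruct waypoints backwards
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in prev):
                prev[nxt] = (r, c)
                queue.append(nxt)
    return None
```

On a small 3x3 maze with one corridor, the planner returns the five waypoints of the only route; in the cooperative scenario, the ground vehicle would then track these waypoints by dead reckoning while its ultrasonic sensors guard against mapping errors.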
Procedia PDF Downloads 128
1465 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebral spinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs develops an alternative way to treat Dandy-Walker Malformation (DWM) associated with non-communicating hydrocephalus. We utilize our discreet microsurgical Bionaut™ particles that are controlled externally and remotely to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible by any conventional surgical techniques. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenge(s) as a manifestation of DWM, and therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, and real-time imaging and in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 221
1464 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine-based Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing huge data, there is a need for a multiple-machine-based Apriori algorithm. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks with the MapReduce approach for distributed storage and distributed processing of huge datasets, using clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, namely Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a reduced Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel of our previous work and targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, comparing them with the Spark implementation.
Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelining-based structure allows a new iteration to start as soon as partial results of the earlier iteration are available; there is therefore no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: Apriori, Apache Flink, MapReduce, Spark, Hadoop, R-Apriori, frequent itemset mining
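For reference, the level-wise logic that all of these variants parallelize is the classic Apriori candidate-generate-and-prune loop. A single-machine sketch (illustrative only, not the distributed RA-Apriori formulation) is:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Frequent itemsets by level-wise candidate generation (classic Apriori).
    Returns a dict mapping each frequent itemset to its support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {frozenset([i]) for t in transactions for i in t}
    frequent = {}
    level = {s for s in items if support(s) >= min_support}
    k = 1
    while level:
        frequent.update({s: support(s) for s in level})
        k += 1
        # Join step: union pairs from the previous level, keep size-k sets
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # Prune step: every (k-1)-subset must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))}
        level = {c for c in candidates if support(c) >= min_support}
    return frequent
```

Each pass of this loop is what the MapReduce/Flink versions distribute: support counting maps over transaction partitions and reduces per candidate itemset, and Flink's pipelining lets pass k+1 begin while partial counts from pass k are still arriving.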
Procedia PDF Downloads 294
1463 Computational Tool for Surface Electromyography Analysis; an Easy Way for Non-Engineers
Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha
Abstract:
This paper presents a tool developed on the Matlab platform. It was developed to simplify the analysis of surface electromyography signals (S-EMG) in a way accessible to users who are not familiar with signal processing procedures. The tool receives data through commands in window fields and generates results as graphics and Excel tables. The underlying math of each S-EMG estimator is presented, and the setup window and result graphics are shown. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a 5-minute instruction period.
Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV
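The four estimators named in the keywords are standard: ARV and RMS describe signal amplitude, while MNF and MDF describe the power spectrum. A compact reference sketch (in Python rather than the tool's Matlab, with a naive DFT for self-containment) is:

```python
import math, cmath

def emg_estimators(signal, fs):
    """ARV, RMS (amplitude) and MNF, MDF (spectral) estimators for one
    S-EMG epoch, using a plain one-sided DFT; returns a dict of values."""
    n = len(signal)
    arv = sum(abs(x) for x in signal) / n            # average rectified value
    rms = math.sqrt(sum(x * x for x in signal) / n)  # root mean square
    # One-sided power spectrum (DC and Nyquist excluded)
    power = []
    for k in range(1, n // 2):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
        power.append(abs(coeff) ** 2)
    freqs = [k * fs / n for k in range(1, n // 2)]
    total = sum(power)
    mnf = sum(f * p for f, p in zip(freqs, power)) / total  # mean frequency
    cum = 0.0
    for f, p in zip(freqs, power):   # median frequency: half the power below
        cum += p
        if cum >= total / 2:
            mdf = f
            break
    return {"ARV": arv, "RMS": rms, "MNF": mnf, "MDF": mdf}
```

For a pure 10 Hz sine sampled at 1 kHz, MNF and MDF both evaluate to 10 Hz, RMS to about 0.707 of the amplitude, and ARV to about 0.637 (2/pi).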
Procedia PDF Downloads 558
1462 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms
Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli
Abstract:
Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges. This work presents a solution for wastewater treatment plants (WWTPs): delayed BOD5 results from the lab take 7 to 8 analysis days, hindering the WWTP's ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. Our solution is based on a system of two BOD bioreactors combined with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process and earlier detection of anomalies. In our system for continuous monitoring of the BOD suppressed by the effluent treatment process, the DT algorithm analyzes the data using ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access for wastewater samples (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, the biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO saturated, and the products of the kinetic oxidation process initially zero (CO₂ = H₂O = 0). The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil. Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning
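The structure of such a four-variable kinetic model can be illustrated with a deliberately simplified sketch. The rate law, rate constants, and initial concentrations below are hypothetical placeholders, not the abstract's parameterization, and a simple forward-Euler step stands in for whatever numerical solver the real system uses:

```python
import numpy as np

def bod_kinetics(y, k_s, k_b, y_o2):
    """Illustrative first-order oxidation kinetics (hypothetical parameterization):
    y = [DO, S, X, P] for dissolved oxygen, organic matter, biomass, products."""
    do, s, x, p = y
    rate = k_s * s * x                 # substrate oxidation rate
    return np.array([
        -y_o2 * rate,                  # DO consumed in proportion to oxidation
        -rate,                         # organic matter degraded
        k_b * rate,                    # fraction converted to biomass
        (1.0 - k_b) * rate,            # remainder leaves as CO2 + H2O
    ])

def simulate(y0, hours, dt=0.01, k_s=0.05, k_b=0.3, y_o2=1.2):
    """Forward-Euler integration of the four coupled ODEs."""
    y = np.array(y0, dtype=float)
    for _ in range(int(hours / dt)):
        y = y + dt * bod_kinetics(y, k_s, k_b, y_o2)
    return y

# Initial conditions as in the abstract: DO saturated, products CO2 = H2O = 0;
# S0 and X0 would in practice be estimated by least-squares fitting to sensor data.
y0 = [8.0, 5.0, 0.1, 0.0]             # mg/L (hypothetical values)
final = simulate(y0, hours=48)
bod_estimate = y0[0] - final[0]       # cumulative DO consumed = dynamic BOD estimate
```

Because every unit of substrate consumed reappears as biomass or products, S + X + P is conserved in this sketch, and the cumulative drop in DO is the quantity reported as the dynamic BOD estimate.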
Procedia PDF Downloads 73
1461 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this question, we ask how these features can be used to predict an agent's choice to comment on one video rather than another, considering the characteristics of the commentators, videos, topics, channels, and recommendations. We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of 96% unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video.
The creation of a link is explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data covers the period 2020-2021 and focuses on the French YouTube environment. From this set of 391 588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1000 subscribers and more than 4000 hours of viewing time during the last twelve months). In the end, we have a data set of 128 462 videos spread across 4093 channels. Based on these videos, we have a data set of 1 032 771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments. Keywords: YouTube, social networks, economics, consumer behaviour
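As an illustration of the data structure involved, an unweighted bipartite network of this kind could be assembled from raw (commentator, video) comment records as sketched below. The IDs and records are hypothetical, and ERGM estimation itself would typically be run in dedicated software (such as R's ergm package) rather than hand-rolled:

```python
from collections import defaultdict

def build_bipartite(comments):
    """Build an unweighted bipartite commentator-video network from
    (commentator_id, video_id) records, collapsing repeat comments by the
    same commentator on the same video into a single link."""
    edges = set(comments)                 # one link per (commentator, video) pair
    by_commentator = defaultdict(set)
    by_video = defaultdict(set)
    for commentator, video in edges:
        by_commentator[commentator].add(video)
        by_video[video].add(commentator)
    return by_commentator, by_video

# Hypothetical comment log: the duplicate (u1, v1) collapses to one link.
comments = [("u1", "v1"), ("u1", "v1"),
            ("u1", "v2"), ("u2", "v1"), ("u3", "v2")]
commentators, videos = build_bipartite(comments)
mean_deg = sum(len(vs) for vs in commentators.values()) / len(commentators)
```

The deduplication step mirrors the abstract's restriction to unique comments: since 96% of commentators comment at most once on a given video, a binary (present/absent) link loses almost no information, which is what justifies modelling link formation with an ERGM on the unweighted graph.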
Procedia PDF Downloads 68