Search results for: Robust algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4726

706 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a subject of substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs often outpaces the development of robust, widely accepted tools for measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period of 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results of publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 62
705 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as the St. Jude site. Additionally, the data did not allow one to analyze failure planes accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows identification of local areas that may previously have been classified as low susceptibility. This then provides the opportunity to conduct a more refined analysis on the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant. 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous. However, high-resolution regional analysis would allow for more precise determination of hazard sites.
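
As a rough illustration of the slope-based part of this kind of susceptibility screening, the sketch below derives slope angles from a gridded DEM and maps them onto a 0-10 rating. It is a minimal sketch using a synthetic 2 m DEM; the slope breakpoints and wetness adjustment are illustrative assumptions, not the Hazus tables used in the study.

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope angle per cell from a DEM using simple finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def susceptibility(slope_deg, wet):
    """Toy Hazus-style lookup: steeper and wetter slopes score higher (0-10)."""
    bins = [5, 10, 15, 20, 30, 40]            # illustrative slope-angle breakpoints
    score = np.digitize(slope_deg, bins)       # 0..6 from slope alone
    return np.clip(score + (3 if wet else 0), 0, 10)

# Synthetic 2 m DEM with a short, steep bank that coarser data would smooth out
x = np.linspace(0, 200, 101)
dem = np.tile(np.where(x < 100, 30.0, 30.0 - 0.5 * (x - 100)), (101, 1))
slopes = slope_degrees(dem, cell_size=2.0)
print("max slope (deg):", round(float(slopes.max()), 2))
print("dry susceptibility:", int(susceptibility(slopes.max(), wet=False)))
print("wet susceptibility:", int(susceptibility(slopes.max(), wet=True)))
```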

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 55
704 Case Study: Optimization of Contractor’s Financing through Allocation of Subcontractors

Authors: Helen S. Ghali, Engy Serag, A. Samer Ezeldin

Abstract:

In many countries, the construction industry relies heavily on outsourcing models to execute its projects and expand its business to fit the diverse market. Such extensive integration of subcontractors is becoming an influential factor in the contractor’s cash flow management. Accordingly, subcontractors’ financial terms are pivotal components for the well-being of the contractor’s cash flow. The aim of this research is to study the contractor’s cash flow with respect to the owner and subcontractor’s payment management plans, considering variable advance payment, payment frequency, and lag and retention policies. The model is developed to provide contractors with a decision support tool that can assist in selecting the optimum subcontracting plan to minimize the contractor’s financing limits and optimize the profit values. The model is built using Microsoft Excel VBA coding, and the genetic algorithm is utilized as the optimization tool. Three objective functions are investigated, which are minimizing the highest negative overdraft value, minimizing the net present worth of overdraft, and maximizing the project net profit. The model is validated on a full-scale project which includes both self-performed and subcontracted work packages. The results show potential outputs in optimizing the contractor’s negative cash flow values and, at the same time, assisting contractors in selecting suitable subcontractors to achieve the objective function.
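
The sketch below illustrates the general shape of such an optimization: a plain genetic algorithm searching over subcontractor payment lags to make the worst cumulative overdraft as small as possible. The cash-flow data, chromosome encoding, and GA settings are assumptions for illustration; the study's actual model (Excel VBA, three objective functions, advance-payment and retention terms) is considerably richer.

```python
import numpy as np

rng = np.random.default_rng(42)
months   = 12
receipts = rng.uniform(80, 160, months)            # toy monthly receipts from the owner
billings = rng.uniform(15, 40, (4, months))        # toy monthly billings of 4 subcontractors

def worst_overdraft(lags):
    """Most negative cumulative balance when subcontractor i is paid lags[i] months late."""
    outflow = np.zeros(months)
    for bill, lag in zip(billings, lags):
        outflow[lag:] += bill[:months - lag]
    return np.cumsum(receipts - outflow).min()      # to be maximised (made least negative)

pop = rng.integers(0, 4, (30, 4))                   # chromosomes: payment lag (0-3 months) per subcontractor
for _ in range(100):
    fitness = np.array([worst_overdraft(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-15:]]        # keep the better half
    cut = rng.integers(1, 4)
    children = np.hstack([parents[:, :cut], rng.permutation(parents)[:, cut:]])  # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] = rng.integers(0, 4, mutate.sum())                          # random mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([worst_overdraft(ind) for ind in pop])]
print("best payment lags (months):", best, " worst overdraft:", round(float(worst_overdraft(best)), 1))
```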

Keywords: cash flow optimization, payment plan, procurement management, subcontracting plan

Procedia PDF Downloads 106
703 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability problems. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature to the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that does not depend on human observation, is not subject to inter-rater reliability problems, and does not rely on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
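
A minimal sketch of the evaluation scheme described here, assuming a pre-extracted feature matrix: a random forest classifier scored with leave-one-out cross-validation. The synthetic features and labels stand in for the study's gaze, EEG, pose, and interaction features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Placeholder feature matrix: 9 features per observation window, labelled
# 1 = engaged, 0 = disengaged by the continuous performance test outcome.
X = rng.normal(size=(120, 9))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=120) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.3f}")
```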

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 75
702 Enhanced Model for Risk-Based Assessment of Employee Security with Bring Your Own Device Using Cyber Hygiene

Authors: Saidu I. R., Shittu S. S.

Abstract:

As the trend of personal devices accessing corporate data continues to rise through Bring Your Own Device (BYOD) practices, organizations recognize the potential cost reduction and productivity gains. However, the associated security risks pose a significant threat to these benefits. Often, organizations adopt BYOD environments without fully considering the vulnerabilities introduced by human factors in this context. This study presents an enhanced assessment model that evaluates the security posture of employees in BYOD environments using cyber hygiene principles. The framework assesses users' adherence to best practices and guidelines for maintaining a secure computing environment, employing scales and the Euclidean distance formula. By utilizing this algorithm, the study measures the distance between users' security practices and the organization's optimal security policies. To facilitate user evaluation, a simple and intuitive interface for automated assessment is developed. To validate the effectiveness of the proposed framework, design science research methods are employed, and empirical assessments are conducted using five artifacts to analyze user suitability in BYOD environments. By addressing the human factor vulnerabilities through the assessment of cyber hygiene practices, this study aims to enhance the overall security of BYOD environments and enable organizations to leverage the advantages of this evolving trend while mitigating potential risks.
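
A minimal sketch of the distance calculation described above, assuming cyber-hygiene practices scored on a 1-5 scale; the dimension names and scores are illustrative assumptions, not the framework's actual artifacts.

```python
import numpy as np

# Illustrative cyber-hygiene dimensions scored on a 1-5 scale (names are assumptions)
dimensions = ["password practice", "patching", "public Wi-Fi use", "device encryption", "app sources"]
policy_optimum = np.array([5, 5, 5, 5, 5])          # organisation's ideal practice
user_scores    = np.array([4, 3, 2, 5, 3])          # one employee's assessed practice

distance = np.linalg.norm(policy_optimum - user_scores)
max_distance = np.linalg.norm(policy_optimum - np.ones_like(policy_optimum))
risk_index = distance / max_distance                 # 0 = fully compliant, 1 = furthest from policy
print(f"distance: {distance:.2f}  normalised risk index: {risk_index:.2f}")
```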

Keywords: security, BYOD, vulnerability, risk, cyber hygiene

Procedia PDF Downloads 56
701 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking

Authors: Trevor Toy, Josef Langerman

Abstract:

Around a quarter of the world’s data is generated by financial institutions, with global non-cash transactions estimated at 708.5 billion in recent years. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and data sharing of anonymised transactional data are still operated in silos and centralised between the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. Therefore, there is a growing demand for accessible transactional data for analytical purposes and also to support the rapid global adoption of Open Banking. The following research has provided a solution framework that aims to provide a secure decentralised marketplace for (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transaction-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed on a decentralised blockchain contract with a market layer that manages transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to the user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was developed into a proof of concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation to key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.

Keywords: big data markets, open banking, blockchain, personal data management

Procedia PDF Downloads 60
700 Frontal Oscillatory Activity and Phase–Amplitude Coupling during Chan Meditation

Authors: Arthur C. Tsai, Chii-Shyang Kuo, Vincent S. C. Chien, Michelle Liou, Philip E. Cheng

Abstract:

Meditation enhances mental abilities and serves as an antidote to anxiety. However, very little is known about brain mechanisms and cortico-subcortical interactions underlying meditation-induced anxiety relief. In this study, changes in phase-amplitude coupling (PAC), in which the amplitude of the beta frequency band is modulated in phase with the delta rhythm, were investigated after eight weeks of meditation training. The study hypothesized that, through concentrated but relaxed mental training, delta-beta coupling in the frontal regions is attenuated. The delta-beta coupling analysis was applied within and between maximally independent component sources returned by the extended infomax independent component analysis (ICA) algorithm applied to the continuous EEG data during meditation. A unique meditative concentration task through relaxing body and mind was used with a constant level of moderate mental effort, so as to approach an ‘emptiness’ meditative state. A pre-test/post-test control group design was used in this study. To evaluate cross-frequency phase-amplitude coupling of component sources, the modulation index (MI), together with circular phase statistics, was estimated. Our findings reveal that a significant delta-beta decoupling was observed in a set of frontal regions bilaterally. In addition, the beta frequency band of the prefrontal component was amplitude-modulated in phase with the delta rhythm of the medial frontal component.
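
As a rough illustration of delta-beta PAC, the sketch below band-passes a synthetic signal, extracts delta phase and beta amplitude with the Hilbert transform, and computes a Canolty-style modulation index. The frequency bands and the synthetic signal are assumptions, and the circular statistics used in the study for significance testing are not shown.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250
t = np.arange(0, 20, 1 / fs)
delta_phase_true = 2 * np.pi * 2 * t                        # 2 Hz delta rhythm
rng = np.random.default_rng(0)
# Synthetic source: 20 Hz beta whose amplitude follows the delta phase, plus noise
eeg = (np.sin(delta_phase_true)
       + (1 + 0.6 * np.cos(delta_phase_true)) * np.sin(2 * np.pi * 20 * t)
       + 0.5 * rng.normal(size=t.size))

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(eeg, 1, 4)))              # delta phase
amp = np.abs(hilbert(bandpass(eeg, 13, 30)))                # beta amplitude envelope
mi = np.abs(np.mean(amp * np.exp(1j * phase)))              # Canolty-style modulation index
print(f"delta-beta modulation index: {mi:.3f}")
```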

Keywords: phase-amplitude coupling, ICA, meditation, EEG

Procedia PDF Downloads 408
699 Isosorbide Bis-Methyl Carbonate: Opportunities for an Industrial Model Based on Biomass

Authors: Olga Gomez De Miranda, Jose R. Ochoa-Gomez, Stefaan De Wildeman, Luciano Monsegue, Soraya Prieto, Leire Lorenzo, Cristina Dineiro

Abstract:

The chemical industry is facing a new revolution. Just as processes based on the exploitation of fossil resources emerged with force in the nineteenth century, society now demands a radical new change that will lead to the complete and irreversible implementation of a circular, sustainable economic model. The implementation of biorefineries will be essential for this. There, renewable raw materials such as sugars and other biomass resources are exploited for the development of new materials that will partially replace their petroleum-derived homologs in a safer and environmentally more benign approach. Isosorbide (1,4:3,6-dianhydro-d-glucidol) is a primary bio-based derivative obtained from plant (poly)saccharides and a very interesting example of a useful chemical produced in biorefineries. It can, in turn, be converted to other secondary monomers such as isosorbide bis-methyl carbonate (IBMC), whose main field of application can be as a key biodegradable intermediate substitute for bisphenol-A in the manufacture of polycarbonates, or as an alternative to the toxic isocyanates in the synthesis of new polyurethanes (non-isocyanate polyurethanes), both with a huge application market. New products will present advantageous mechanical or optical properties, as well as improved behavior in non-toxicity and biodegradability aspects in comparison to their petro-derived alternatives. A robust production process for IBMC, a biomass-derived chemical, is presented here. It can be used with different raw material qualities, using dimethyl carbonate (DMC) as both co-reactant and solvent. It consists of the transesterification of isosorbide with DMC under soft operational conditions, using different basic catalysts that remain active whatever the isosorbide characteristics and purity. Appropriate isolation processes have also been developed to obtain crude IBMC yields higher than 90%, with oligomer production lower than 10%, independently of the quality of the isosorbide considered. All of them are suitable to be used in polycondensation reactions for obtaining polymers. If higher qualities of IBMC are needed, a purification treatment based on nanofiltration membranes has also been developed. The IBMC reaction-isolation conditions established in the laboratory have been successfully modeled using appropriate software programs and moved to pilot scale (production of 100 kg of IBMC). It has been demonstrated that a highly efficient IBMC production process able to be up-scaled under suitable market conditions has been obtained. The operational conditions involved in the production of IBMC entail mild temperature and energy requirements, no additional solvents, and high operational efficiency. All of these are in accordance with green manufacturing principles.

Keywords: biomass, catalyst, isosorbide bis-methyl carbonate, polycarbonate, polyurethane, transesterification

Procedia PDF Downloads 116
698 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families’ affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance or of cost to the affiliated mutual fund investors. On the one hand, affiliation with a financial conglomerate offers the mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflict of interest is bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or whether it costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-section regression methodologies, this paper finds that the investment bank-affiliated mutual funds significantly outperform those of the mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding return, a return measure that captures the investment talent that is uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent. Overall, the paper’s findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is not only prevalent in fund families but also in financial organizations such as investment banks that have affiliated mutual fund families. The results show evidence of exploitation of synergies, such as private material information sharing, that benefit mutual fund investors due to affiliation with a financial conglomerate. However, this research also has a normative dimension: such incestuous behavior, akin to insider trading and the exploitation of superior information, not only negatively affects unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
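
A minimal sketch of the kind of risk-adjusted performance estimate underlying such comparisons: a fund's monthly excess returns regressed on factor returns, with the intercept read as alpha. The factor data here are synthetic stand-ins, and the paper's own risk adjustments and cross-sectional specifications may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120                                                   # 10 years of monthly observations
factors = rng.normal(0, 0.03, (n, 4))                     # stand-ins for MKT-RF, SMB, HML, MOM
excess_ret = 0.002 + factors @ np.array([0.9, 0.1, -0.05, 0.02]) + rng.normal(0, 0.01, n)

X = sm.add_constant(factors)                              # intercept = monthly alpha
fit = sm.OLS(excess_ret, X).fit()
print(f"monthly alpha: {fit.params[0]:.4f}  (t = {fit.tvalues[0]:.2f})")
```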

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 169
697 Non-Invasive Pre-Implantation Genetic Assessment Using NGS in IVF Clinical Routine

Authors: Katalin Gombos, Bence Gálik, Krisztina Ildikó Kalács, Krisztina Gödöny, Ákos Várnagy, József Bódis, Attila Gyenesei, Gábor L. Kovács

Abstract:

Although non-invasive pre-implantation genetic testing for aneuploidy (NIPGT-A) is potentially appropriate to assess chromosomal ploidy of the embryo, practical application of it in a routine IVF center has not been started in the absence of a recommendation. We developed a comprehensive workflow for a clinically applicable strategy for NIPGT-A based on next-generation sequencing (NGS) technology. We performed MALBAC whole genome amplification and NGS on spent blastocyst culture media of Day 3 embryos fertilized with intra-cytoplasmic sperm injection (ICSI). Spent embryonic culture media of morphologically good quality score embryos were enrolled in further analysis with the blank culture media as background control. Chromosomal abnormalities were identified by an optimized bioinformatics pipeline applying a copy number variation (CNV) detecting algorithm. We demonstrate a comprehensive workflow covering both wet- and dry-lab procedures supporting a clinically applicable strategy for NIPGT-A. It can be carried out within 48 h which is critical for the same-cycle blastocyst transfer, but also suitable for “freeze all” and “elective frozen embryo” strategies. The described integrated approach of non-invasive evaluation of embryonic DNA content of the culture media can potentially supplement existing pre-implantation genetic screening methods.
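
A toy sketch of the binned read-depth logic behind CNV detection, assuming read counts have already been binned and normalised; the bin counts, smoothing window, and thresholds are illustrative assumptions, and the study's actual pipeline (MALBAC amplification, alignment, normalisation, segmentation) is far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 400                                      # genomic bins after GC/mappability correction
sample = rng.poisson(100, n_bins).astype(float)   # read counts for the spent-media sample
control = rng.poisson(100, n_bins).astype(float)  # blank culture media / euploid reference
sample[150:200] *= 1.5                            # simulate a single-copy gain in one region

ratio = (sample / sample.mean()) / (control / control.mean())
log2r = np.log2(ratio)
smooth = np.convolve(log2r, np.ones(20) / 20, mode="same")   # crude moving-average segmentation
gain = smooth > 0.3                               # illustrative gain/loss thresholds
loss = smooth < -0.7
print("bins flagged as gain:", int(gain.sum()), " loss:", int(loss.sum()))
```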

Keywords: next generation sequencing, in vitro fertilization, embryo assessment, non-invasive pre-implantation genetic testing

Procedia PDF Downloads 141
696 Singular Perturbed Vector Field Method Applied to the Problem of Thermal Explosion of Polydisperse Fuel Spray

Authors: Ophir Nave

Abstract:

In our research, we present the concept of the singularly perturbed vector field (SPVF) method and its application to the thermal explosion of diesel spray combustion. Given a system of governing equations that contains hidden multi-scale variables, the SPVF method transfers and decomposes such a system into fast and slow singularly perturbed subsystems (SPS). The SPVF method enables us to understand the complex system and simplify the calculations. Powerful analytical, numerical, and asymptotic methods (e.g., the method of integral (invariant) manifolds (MIM), the homotopy analysis method (HAM), etc.) can then be applied to each subsystem. We compare the results obtained by the method of integral invariant manifolds and by SPVF applied to a spray droplet combustion model. The research deals with the development of an innovative method for extracting fast and slow variables in physical-mathematical models. The method that we developed is called the singularly perturbed vector field method. It is based on a numerical algorithm for global quasi-linearization applied to a given physical model. The SPVF method has been applied successfully to combustion processes. Our results were compared to experimental results. The SPVF is a general numerical and asymptotic method that reveals the hierarchy (multi-scale structure) of a given system.
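
SPVF itself relies on a global quasi-linearization rather than a local Jacobian, but the underlying idea of separating fast and slow directions by eigenvalue magnitude can be illustrated with a toy stiff linear system; the system and its time-scale gap below are assumptions for illustration only.

```python
import numpy as np

# Toy stiff system with an explicit time-scale gap (eps << 1):
#   dx/dt = -2x + y            (slow variable)
#   dy/dt = (x - y) / eps      (fast variable)
eps = 1e-3
J = np.array([[-2.0, 1.0],
              [1.0 / eps, -1.0 / eps]])        # Jacobian of the linearised system

eigvals, eigvecs = np.linalg.eig(J)
order = np.argsort(np.abs(eigvals))            # small |eigenvalue| = slow, large = fast
print("slow eigenvalue:", round(float(eigvals[order[0]].real), 3))
print("fast eigenvalue:", round(float(eigvals[order[1]].real), 3))
print("time-scale gap ratio:", round(float(abs(eigvals[order[1]] / eigvals[order[0]])), 1))
```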

Keywords: polydisperse spray, model reduction, asymptotic analysis, multi-scale systems

Procedia PDF Downloads 206
695 An Investigation of the Relationship Between Privacy Crisis, Public Discourse on Privacy, and Key Performance Indicators at Facebook (2004–2021)

Authors: Prajwal Eachempati, Laurent Muzellec, Ashish Kumar Jha

Abstract:

We use Facebook as a case study to investigate the complex relationship between the firm’s public discourse (and actions) surrounding data privacy and the performance of a business model based on monetizing users’ data. We do so by looking at the evolution of public discourse over time (2004–2021) and relating topics to revenue and stock market evolution, drawing from archival sources such as Zuckerberg’s public statements. We use the LDA topic modelling algorithm to reveal 19 topics regrouped into 6 major themes. We first show how, by using persuasive and convincing language that promises better protection of consumer data usage but also emphasizes greater user control over their own data, the privacy issue is being reframed as one of greater user control and responsibility. Second, we aim to understand and put a value on the extent to which privacy disclosures have a potential impact on the financial performance of social media firms. We found a significant relationship between the topics pertaining to privacy and social media/technology, sentiment score, and stock market prices. Revenue is found to be impacted by topics pertaining to politics and new product and service innovations, while the number of active users is not impacted by the topics unless moderated by external control variables such as Return on Assets and Brand Equity.
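
A minimal sketch of the LDA step on a toy corpus; the documents and the number of topics are illustrative assumptions, not the study's archival corpus or its 19-topic solution.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "we give users more control over their data and privacy settings",
    "new advertising products drive revenue growth this quarter",
    "protecting user data from unauthorised access is our responsibility",
    "election integrity and political advertising transparency",
    "launching new video and marketplace features for our community",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]   # four highest-weight terms per topic
    print(f"topic {k}: {', '.join(top)}")
```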

Keywords: public discourses, data protection, social media, privacy, topic modeling, business models, financial performance

Procedia PDF Downloads 77
694 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena of mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge about its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, numerical average molecular weight and gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
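
The sketch below illustrates the two ingredients described here on a synthetic response curve: a simple adaptive sampler that places extra points where the response varies most, and a comparison of several off-the-shelf regressors. The response function, sample sizes, and model settings are assumptions, and the authors' large margin nearest neighbor regression is not included.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

# Stand-in for a conversion-vs-time curve with a sharp "gel effect" region
f = lambda t: 1 / (1 + np.exp(-12 * (t - 0.6)))
t_test = np.linspace(0, 1, 200)

def adaptive_sample(n, n_init=10):
    """Start uniform, then add points where the local change of the response is largest."""
    t = list(np.linspace(0, 1, n_init))
    while len(t) < n:
        t.sort()
        variation = np.abs(np.diff(f(np.array(t)))) * np.diff(t)  # change within each interval
        i = int(np.argmax(variation))
        t.append((t[i] + t[i + 1]) / 2)                           # refine the most varying interval
    return np.array(sorted(t))

t_train = adaptive_sample(30)
X, y = t_train.reshape(-1, 1), f(t_train)
for model in (SVR(C=10), KNeighborsRegressor(3), RandomForestRegressor(200, random_state=0)):
    err = mean_absolute_error(f(t_test), model.fit(X, y).predict(t_test.reshape(-1, 1)))
    print(f"{type(model).__name__}: MAE = {err:.4f}")
```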

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 290
693 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. Achieving these measures requires intelligence gathering via the internet. This includes real-time monitoring of potential websites that are used for recruitment and information dissemination, among other operations, by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, there is a need for efficient webpage classification techniques. This research proposes a new approach, the SentiPosit-based method. The SentiPosit-based method combines features of the Posit-based method and the SentiStrength-based method for classification of terrorism and extremism webpages. The experiment was carried out on 7,500 webpages obtained through the TENE webcrawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, ‘pro-extremist’, ‘anti-extremist’ and ‘neutral’, with 2,500 webpages in each category. A supervised learning algorithm is then applied to the labelled dataset in order to build the model. The results obtained were compared with an existing classification method using prediction accuracy and runtime. It was observed that our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.
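
A minimal sketch of a supervised webpage-classification pipeline of this general kind, using TF-IDF features and logistic regression as stand-ins; the actual SentiPosit features combine Posit (syntactic) and SentiStrength (sentiment) measures, and the tiny corpus here is purely illustrative.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Tiny illustrative corpus; the real experiment used 7,500 labelled webpages in three classes
pages = [
    "join our cause and fight the enemies of the faith",       # pro-extremist
    "we condemn violence and work to deradicalise young people",  # anti-extremist
    "community news, weather updates and local events",         # neutral
] * 20
labels = ["pro", "anti", "neutral"] * 20

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, pages, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```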

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 258
692 Structures and Analytical Crucibles in Nigerian Indigenous Art Music

Authors: Albert Oluwole Uzodimma Authority

Abstract:

Nigeria is a diverse nation with a rich cultural heritage that has produced numerous art musicians and a vast range of art songs. The compositional styles, tonal rhythm, text rhythm, word painting, and text-tone relationship vary extensively from one dialect to another, indicating the need for standardized tools for the structural and analytical deconstruction of Nigerian indigenous art music. The purpose of this research is to examine the structures of Nigerian indigenous art music and outline some crucibles for analyzing it, by investigating how dialectical inflection influences the choice of text tone, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. The research used a structured questionnaire to collect data from 50 musicologists, of whom 41 responded. The study's focus was on the works of two prominent twentieth-century composers, Stephen Olusoji and Nwamara Alvan-Ikoku, titled "Oyigiyigi" and "O Chineke, Inozikwa omee", respectively. The data collected was presented in percentages using pie charts and tables. The study shows that in Nigerian indigenous music, several aspects must be considered for proper analysis, such as linguistic sensitivity and the way dialectical inflection influences the text-tone relationship, text rhythm, and tonal rhythm, which help to convey the proper meanings of messages in songs. It also highlights the lack of standardized rubrics for analysis, which necessitated the proposal of robust criteria for analyzing African music, known as Neo-Eclectic-Crucibles. Hinging on an eclectic approach, this research makes significant contributions to music scholarship by addressing the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It provides a template for further studies leading to standardized rubrics for analyzing African music. This research collected data through a structured questionnaire and analyzed it using pie charts and tables to present the findings accurately. The analysis focused on the respondents' perspectives on the research objectives and the structural analysis of two indigenous music compositions by Olusoji and Nwamara. This research answers questions on the structures and analytical crucibles used in Nigerian indigenous art music, and on how dialectical inflection influences the text-tone relationship, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. This paper demonstrates the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It highlights several aspects that are crucial to analyzing Nigerian indigenous music and proposes the Neo-Eclectic-Crucibles criteria for analyzing African music. The contribution of this research to music scholarship is significant, providing a template for further studies and research in the field.

Keywords: art-music, crucibles, dialectical inflections, indigenous, text-tone, tonal rhythm, word-painting

Procedia PDF Downloads 75
691 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected by both invasive and non-invasive methods, but the diagnostic accuracy is not up to the mark. Aim: To set up an optimal diagnostic cut-off value for the 13C-urea breath test to detect H. pylori infection and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology, and depending on these results, we classified 49 subjects as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of the test meal, a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected for up to 90 minutes at 15-minute intervals and analysed by laser-based high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of 13C isotope enrichment (expressed as δDOB13C ‰) of exhaled breath samples and found maximum enrichment around 30 minutes for H. pylori positive patients; this is due to acid-mediated stimulation of urease enzyme activity, with maximum acidification occurring within 30 minutes, whereas no such significant isotopic enrichment was observed for H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we explored the percentage dose of 13C recovered per hour, i.e., 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e., c-PDR (%), in exhaled breath samples for the present 13C-UBT. We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values, and an optimal cut-off value was estimated to be c-PDR = 1.47 % at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity, and 100% accuracy of the 13C-UBT for detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes, and the half-emptying time was found at 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT for accurate determination of infection without risk of diagnostic errors, making it a sufficiently robust and novel method for accurate and fast non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
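
A minimal sketch of how an optimal cut-off can be read from a ROC curve using Youden's index, with synthetic c-PDR values standing in for the study's 83 subjects; the distributions and the resulting threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Synthetic c-PDR (%) values at 60 min: infected patients recover more 13C than uninfected
cpdr_negative = rng.normal(0.8, 0.35, 34)
cpdr_positive = rng.normal(2.4, 0.60, 49)
y = np.r_[np.zeros(34), np.ones(49)]
x = np.r_[cpdr_negative, cpdr_positive]

fpr, tpr, thresholds = roc_curve(y, x)
best = np.argmax(tpr - fpr)                      # Youden's J statistic
print(f"AUC = {roc_auc_score(y, x):.3f}")
print(f"optimal cut-off = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```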

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 287
690 Pakistan Nuclear Security: Threats from Non-State Actors

Authors: Jennifer Wright

Abstract:

The recent rise of powerful terrorist groups such as ISIS and Al-Qaeda brings up concerns about nuclear terrorism as well as a focus on nuclear security, specifically the physical security of nuclear weapons and fissile material storage sites in countries where powerful non-state actors are present, particularly because these non-state actors, who lack their own sovereign territory, cannot be ‘deterred’ in the traditional sense. In light of the current threat environment, it is necessary to rethink such deterrence-based strategies for the 21st century, a multipolar world with the presence of powerful non-state actors. As a country in the spotlight for its low ranking on the Nuclear Threat Initiative’s (NTI) Nuclear Security Index, Pakistan is a relevant example to explore the question of whether the presence of non-state actors poses a real risk to nuclear security today. It is necessary to examine its nuclear security policies to determine whether they are robust enough to deal with political instability and violence in the country. After carrying out interviews with experts in May 2017 in Islamabad on nuclear security and nuclear terrorism, this paper aims to highlight findings by providing a Pakistan-centric view on the subject and give experts there a chance to counter criticism. Western media would have us fearful of nuclear security mechanisms in Pakistan after reports that areas such as cybersecurity and accounting and control of materials are weak, as well as sensitive nuclear material being transported in unmarked, unguarded vehicles. Also reported are cases where terrorist groups carried out targeted attacks against Pakistani military bases or secure sites where nuclear material is stored. One specific question asked of each interviewee in Islamabad was: ‘Do you feel the threat of nuclear terrorism calls into question the reliance on deterrence?’ Their responses will be elaborated on in the longer paper, but overall they reflect the view that deterrence still serves a purpose in state-to-state security strategy, but not for a state countering non-state threats. If nuclear security is lax enough for these non-state actors to get their hands on either an intact nuclear weapon or enough military-grade fissile material to build a nuclear weapon, then what would stop them from launching a nuclear attack? As deterrence is a state-centric strategy, it does not work to deter non-state actors from carrying out an attack on another state, as they lack their own territory and, as such, are not fearful of a reprisal attack. Deterrence will need to be addressed, and its relevance analyzed to determine its utility in the current security environment. The aim of this research is to demonstrate the real risk of nuclear terrorism by pointing to weaknesses in global nuclear security, particularly in Pakistan. The research also aims to provoke thought on the weaknesses of deterrence as a whole. Original thinking is needed as we attempt to adequately respond to the 21st century’s current threat environment.

Keywords: deterrence, non-proliferation, nuclear security, nuclear terrorism

Procedia PDF Downloads 204
689 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts regarding the variation of five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between factors and performance and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE) were applied, particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of data input scale on costs, although it notably impacts execution time.
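
A minimal sketch of the design-and-model idea: a two-level 2^(5-1) fractional factorial in the five factors, with a linear model fitted by least squares to a synthetic response. The generator, coefficients, and noise are illustrative assumptions rather than the study's measured runs.

```python
import itertools
import numpy as np

factors = ["data_size", "nodes", "cores", "memory", "disks"]
base = np.array(list(itertools.product([-1, 1], repeat=4)))       # full 2^4 design in the first four factors
design = np.column_stack([base, base.prod(axis=1)])                # fifth factor aliased as the product -> 2^(5-1)

rng = np.random.default_rng(0)
# Synthetic response shaped like the reported findings: time driven mainly by data size and nodes
time = 100 + 30 * design[:, 0] - 20 * design[:, 1] - 3 * design[:, 2] + rng.normal(0, 2, 16)

X = np.column_stack([np.ones(16), design])
coef, *_ = np.linalg.lstsq(X, time, rcond=None)
for name, b in zip(["intercept"] + factors, coef):
    print(f"{name:>10}: {b:+.1f}")
```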

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 96
688 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign Languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challenge, one that interestingly extends to their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMM (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD, an extension of eigendecomposition to non-square matrices, is used to reduce multi-attribute hand gesture data to feature vectors. SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators by some adequate Type-2 fuzzy operators that permit us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both random and fuzzy uncertainties existing universally in the sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, in addition to achieving better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
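
A minimal sketch of the SVD feature-extraction step, assuming a segmented grayscale hand image treated as a matrix; the image, the number of retained singular values, and the normalisation are illustrative assumptions, and the type-2 fuzzy HMM itself is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
hand_image = rng.random((64, 64))                 # stand-in for a segmented grayscale hand image

# Singular values summarise the geometric structure of the image matrix;
# keeping the k largest gives a compact observation vector for the HMM.
k = 10
singular_values = np.linalg.svd(hand_image, compute_uv=False)
feature_vector = singular_values[:k] / singular_values.sum()     # simple energy normalisation
print("SVD feature vector:", np.round(feature_vector, 4))
```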

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 446
687 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students' Learning

Authors: Ioanna Taouki, Marie Lallier, David Soto

Abstract:

Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative-forced-choice tasks (two linguistic: lexical decision task, visual attention span task, and one non-linguistic: emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how confidence ratings track accuracy in the task) relates to performance in general standardized tasks related to students' reading and general cognitive abilities using Spearman's and Bayesian correlation analysis. Second, we assessed whether or not young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability in this early stage is related to the longitudinal learning of children in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing in this early stage of reading acquisition. Some evidence consistent with domain-general metacognition was found, with significant positive correlations between metacognitive efficiency between lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities and further stress the importance of creating educational programs that foster students’ metacognitive ability as a tool for long term learning. More research is crucial to understand whether these programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.

Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition

Procedia PDF Downloads 133
686 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual rotor axial flux machines with surface mounted and spoke type ferrite permanent magnets with concentrated windings; they are introduced as alternatives to a generator with surface mounted Nd-Fe-B magnets. The output power, voltage, speed and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. A finite element analysis (FEA) is applied to predict the performance, emf, developed torque, cogging torque, no load losses, leakage flux and efficiency of both ferrite generators and that of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole arc to pole pitch ratios are investigated by means of 3D FEA. It was found that the surface mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, and has higher torque ripple and is heavier than the spoke type machine. Furthermore, it was shown that the spoke type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified using experimental results.

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 131
685 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing amounts of attention in the machine learning research community, boosted by recent technologies, such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without accessing the label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging since it is impossible to evaluate model performance on unseen domains. To tackle this out-of-distribution detection difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, ”CombinedScore,” as the evaluation criterion. This proposed score was created by incorporating labeled source information into the uncertainty entropy score using the harmonic mean. Furthermore, the prediction bias was explored through the equality of opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
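
The sketch below illustrates one plausible reading of the scoring idea: predictive entropy on unlabeled out-of-distribution data combined with labeled source accuracy through a harmonic mean. The exact composition of the paper's "CombinedScore" is an assumption here, as are the synthetic probabilities.

```python
import numpy as np

def entropy_score(probs):
    """Mean predictive entropy of a model's softmax outputs on unlabeled OOD data."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(np.mean(-(p * np.log(p)).sum(axis=1)))

def combined_score(probs_ood, accuracy_source):
    """Harmonic mean of (1 - normalised entropy) on OOD data and labeled source accuracy.
    This composition is an assumption; the paper's CombinedScore may differ in detail."""
    confidence = 1.0 - entropy_score(probs_ood) / np.log(probs_ood.shape[1])
    return 2 * confidence * accuracy_source / (confidence + accuracy_source + 1e-12)

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("entropy:", round(entropy_score(probs), 3),
      " CombinedScore:", round(combined_score(probs, 0.82), 3))
```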

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 109
684 Investigating Non-suicidal Self-Injury Discussions on Twitter

Authors: Muhammad Abubakar Alhassan, Diane Pennington

Abstract:

Social networking sites have become a space for people to discuss public health issues such as non-suicidal self-injury (NSSI). There are thousands of tweets containing self-harm and self-injury hashtags on Twitter. It is difficult to distinguish between different users who participate in self-injury discussions on Twitter and how their opinions change over time. Also, it is challenging to understand the topics surrounding NSSI discussions on Twitter. We retrieved tweets using the #selfharm and #selfinjury hashtags and investigated those from the United Kingdom. We applied inductive coding and grouped tweeters into different categories. This study used the Latent Dirichlet Allocation (LDA) algorithm to infer the optimum number of topics to describe our corpus. Our findings revealed that many of those participating in NSSI discussions are non-professional users as opposed to medical experts and academics. Support organisations, medical teams, and academics were campaigning positively on raising self-injury awareness and recovery. Using the LDAvis visualisation technique, we selected the top 20 most relevant terms from each topic and interpreted the topics as: children and youth well-being, self-harm misjudgement, mental health awareness, school and mental health support, and suicide and mental health issues. More than 50% of these topics were discussed in England compared to Scotland, Wales, Ireland and Northern Ireland. Our findings highlight the advantages of using the Twitter social network in tackling the problem of self-injury through awareness. There is a need to study the potential risks associated with the use of social networks among self-injurers.
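
A minimal sketch of choosing the number of LDA topics by held-out perplexity on a toy corpus; the tweets, the candidate topic counts, and the use of perplexity (rather than another selection criterion) are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

tweets = [
    "raising awareness of self harm recovery in schools",
    "young people need better mental health support",
    "stigma around self injury stops people asking for help",
    "our charity offers a helpline for anyone struggling tonight",
    "teachers should be trained to spot the warning signs",
    "sharing my recovery story to help others feel less alone",
] * 10                                              # tiny stand-in corpus

dtm = CountVectorizer(stop_words="english").fit_transform(tweets)
train, held_out = train_test_split(dtm, test_size=0.3, random_state=0)

for k in (2, 3, 5, 8):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(train)
    print(f"{k} topics -> held-out perplexity {lda.perplexity(held_out):.1f}")
```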

Keywords: self-harm, non-suicidal self-injury, Twitter, social networks

Procedia PDF Downloads 113
683 Reduction of False Positives in Head-Shoulder Detection Based on Multi-Part Color Segmentation

Authors: Lae-Jeong Park

Abstract:

The paper presents a method that utilizes figure-ground color segmentation to extract an effective global feature for false positive reduction in head-shoulder detection. Conventional detectors, which rely on local features such as HOG for real-time operation, suffer from false positives. The color cue in an input image provides salient information on global characteristics, which is necessary to alleviate the false positives of local-feature-based detectors. An effective approach that uses figure-ground color segmentation has been presented in an effort to reduce the false positives in object detection. In this paper, an extended version of the approach is presented that adopts separate multipart foregrounds instead of a single prior foreground and performs the figure-ground color segmentation with each of the foregrounds. The multipart foregrounds include the parts of the head-shoulder shape and additional auxiliary foregrounds optimized by a search algorithm. A classifier is constructed with the feature that consists of a set of the multiple resulting segmentations. Experimental results show that the presented method can reject more false positives than the single prior shape-based classifier as well as detectors based on local features. The improvement is possible because the presented approach can reduce the false positives that have the same colors in the head and shoulder foregrounds.
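
As a rough illustration of figure-ground segmentation with a prior foreground, the sketch below initialises OpenCV's GrabCut from a head-shoulder-shaped mask on a synthetic image and derives a simple global feature from the result; the image, the mask geometry, and the feature are assumptions, not the paper's multipart formulation.

```python
import numpy as np
import cv2

# Synthetic 80x60 BGR detection window: darker "person" blob on a lighter background
img = np.full((80, 60, 3), 180, np.uint8)
cv2.ellipse(img, (30, 25), (12, 14), 0, 0, 360, (60, 50, 40), -1)   # head
cv2.rectangle(img, (12, 38), (48, 79), (70, 60, 50), -1)            # shoulders/torso

# Prior foreground mask shaped like a head-shoulder outline: probable FG inside, probable BG outside
mask = np.full(img.shape[:2], cv2.GC_PR_BGD, np.uint8)
cv2.ellipse(mask, (30, 25), (14, 16), 0, 0, 360, int(cv2.GC_PR_FGD), -1)
cv2.rectangle(mask, (10, 36), (50, 79), int(cv2.GC_PR_FGD), -1)

bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
segmentation = np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

# A simple global feature: how much of the segmented figure falls inside the prior torso region
overlap = segmentation[36:, 10:50].mean()
print(f"foreground pixels: {int(segmentation.sum())}, overlap with prior torso region: {overlap:.2f}")
```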

Keywords: pedestrian detection, color segmentation, false positive, feature extraction

Procedia PDF Downloads 261
682 Comparative Performance Analysis for Selected Behavioral Learning Systems versus Ant Colony System Performance: Neural Network Approach

Authors: Hassan M. H. Mustafa

Abstract:

This piece of research addresses an interesting comparative analytical study that considers two diverse algorithmic computational intelligence approaches, tightly related to neural and non-neural systems. The first algorithmic intelligence approach is concerned with practical results observed from the activities of three neural (animal) systems: Pavlov’s and Thorndike’s experimental work, and a mouse’s trials while moving through a figure-of-eight maze to reach an optimal solution to a reconstruction problem. Conversely, the second algorithmic intelligence approach originates from the observed activity results of the non-neural Ant Colony System (ACS), obtained after reaching an optimal solution while solving the Traveling Salesman Problem (TSP). Interestingly, the effect of increasing the number of agents (either neurons or ants) on learning performance is shown to be similar for both introduced systems. Finally, the performance of both intelligent learning paradigms is shown to be in agreement with the learning convergence process of the least mean square error (LMS) algorithm when applied to training some Artificial Neural Network (ANN) models. Accordingly, the adopted ANN modeling is a relevant and realistic tool to investigate observations and analyze performance for both selected computational intelligence (biological behavioral learning) systems.
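
A minimal sketch of the LMS convergence process referred to here: an adaptive filter whose weights are updated from the instantaneous error until the mean squared error settles. The system, filter length, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 2000, 4, 0.01                      # samples, filter length, learning rate
true_w = np.array([0.6, -0.3, 0.2, 0.1])         # unknown system to be learned
x = rng.normal(size=n)
d = np.convolve(x, true_w)[:n] + 0.05 * rng.normal(size=n)   # desired signal plus noise

w = np.zeros(taps)
errors = []
for i in range(taps, n):
    u = x[i - taps + 1:i + 1][::-1]              # most recent inputs, newest first
    e = d[i] - w @ u                             # instantaneous error
    w += mu * e * u                              # LMS weight update
    errors.append(e ** 2)

print("learned weights:", np.round(w, 3))
print("mean squared error (last 200 steps):", round(float(np.mean(errors[-200:])), 4))
```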

Keywords: artificial neural network modeling, animal learning, ant colony system, traveling salesman problem, computational biology

Procedia PDF Downloads 452
681 Optimal Wind Based DG Placement Considering Monthly Changes Modeling in Wind Speed

Authors: Belal Mohamadi Kalesar, Raouf Hasanpour

Abstract:

Proper placement of Distributed Generation (DG) units, such as wind turbine generators, in a distribution system remains a very challenging issue for obtaining their maximum potential benefits, because inappropriate placement may increase the system losses. This paper proposes a Particle Swarm Optimization (PSO) technique for optimal placement of wind-based DG (WDG) in the primary distribution system to reduce energy losses and improve the voltage profile, with four different wind levels modelled over the year. The wind turbine is modelled as a DFIG operated at unity power factor, and only one wind turbine tower is considered for installation at each bus of the network. The proposed method is implemented on the widely used 69-bus power distribution system in the MATLAB software environment under four scenarios (without, one, two and three WDG units). To test the capability of the implemented program, it is assumed that all buses of the standard system can be candidates for WDG installation (a large search space), although the program can also consider a predetermined number of candidate locations to model the financial limitations of a project. The obtained results illustrate that increasing wind speed in some months increases the generated output power, but this can increase or decrease the power loss at some wind levels. The results also show that about 3 MW of WDG capacity needs to be installed across different buses, and that distributing this capacity over the whole network (a larger number of WDG units) yields a better solution from the point of view of power loss and voltage profile.
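A simplified sketch of the PSO loop for selecting a single WDG bus and capacity is shown below. The loss_proxy objective is a stand-in placeholder, not the 69-bus load-flow evaluation used in the paper, and the swarm parameters are illustrative assumptions.

```python
# Simplified sketch of PSO searching for one WDG bus and capacity; loss_proxy is a
# placeholder objective, not the 69-bus load-flow evaluation from the paper.
import numpy as np

N_BUS = 69
rng = np.random.default_rng(1)

def loss_proxy(bus, capacity_mw):
    """Stand-in objective: a smooth function pretending to be network energy loss."""
    return (capacity_mw - 3.0) ** 2 + 0.01 * abs(bus - 40)

n_particles, n_iter = 20, 100
w_inertia, c1, c2 = 0.7, 1.5, 1.5

pos = np.column_stack([rng.uniform(1, N_BUS, n_particles),    # candidate bus (relaxed to real)
                       rng.uniform(0.5, 5.0, n_particles)])   # candidate capacity (MW)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss_proxy(round(b), c) for b, c in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [1, 0.5], [N_BUS, 5.0])          # keep particles feasible
    vals = np.array([loss_proxy(round(b), c) for b, c in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best bus ~ {round(gbest[0])}, best capacity ~ {gbest[1]:.2f} MW")
```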

Keywords: wind turbine, DG placement, wind levels effect, PSO algorithm

Procedia PDF Downloads 438
680 Impact of Climatic Hazards on the Jamuna River Fisheries and Coping and Adaptation Strategies

Authors: Farah Islam, Md. Monirul Islam, Mosammat Salma Akter, Goutam Kumar Kundu

Abstract:

The continuous variability of climate and the risks associated with it have a significant impact on fisheries, leading to global concern for about half a billion fishery-based livelihoods. Although there is mounting evidence of the impacts of climate change on fishery-based livelihoods and socioeconomic conditions in Bangladesh, the country's inland fisheries sector remains largely neglected compared with the coastal areas, which receive most attention due to their higher vulnerability to climatic hazards. The available research on inland fisheries, particularly river fisheries, has focused mainly on fish production, pollution, fishing gear, fish biodiversity and the livelihoods of fishers. This study assesses the impacts of climate variability and change on the fishing communities of the Jamuna River (a transboundary river called the Brahmaputra in India) and their coping and adaptation strategies. The study uses primary data collected from the Kalitola Ghat and Debdanga fishing communities of the Jamuna River during May, August and December 2015, using semi-structured interviews, oral history interviews, key informant interviews, focus group discussions and an impact matrix, as well as secondary data. This study finds that both communities are exposed to storms, floods and land erosion, which affect fishery-based livelihood assets, strategies and outcomes. The impact matrix shows that human and physical capitals are more affected by climate hazards, which in turn affects financial capital. Both communities have been responding to these exposures through multiple coping and adaptation strategies. The coping strategies include building dams with soil, putting jute sacks in the yard, taking shelter on boats or embankments, making raised platforms or 'Kheua', and taking up temporary jobs, while the adaptation strategies include permanent migration, changes in livelihood activities and strategies, changes in fishing practices and building more robust houses. The study shows that migration is the most common adaptation strategy for the fishers, and it has resulted in mostly positive outcomes for the migrants; however, this migration has impacted negatively on the livelihoods of the fishers remaining in the communities. In sum, the Jamuna River fishing communities have been affected by several climatic hazards, and the ways they have traditionally coped with or adapted to the impacts are not sufficient to maintain sustainable livelihoods and fisheries. In the coming decades, this situation may become worse, as predicted by the latest scientific research, and an enhanced level of response will be needed.

Keywords: climatic hazards, impacts and adaptation, fisherfolk, the Jamuna River

Procedia PDF Downloads 293
679 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to particulate concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image acquisition. The PM10 measurement locations were recorded with a handheld global positioning system (GPS) receiver. The obtained reflectance values for the visible bands (coastal aerosol, blue, green and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was assessed based on the correlation coefficient (R) and root-mean-square error (RMSE) against the PM10 ground measurement data. The proposed multispectral model was selected on the basis of the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research show that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
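The sketch below shows what fitting and scoring such an empirical multispectral model could look like: a regression of ground PM10 on the visible-band reflectances, evaluated by R and RMSE. The reflectance and PM10 values are synthetic placeholders, not the Kirkuk measurements, and a simple linear form is assumed.

```python
# Sketch of an empirical multispectral PM10 model: regression of ground PM10 on
# visible-band reflectances, scored by R and RMSE. All numbers are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
reflectance = rng.uniform(0.05, 0.25, size=(30, 4))          # columns: coastal, blue, green, red
pm10 = 40 + 300 * reflectance[:, 1] + rng.normal(0, 5, 30)   # synthetic ground PM10 (ug/m3)

model = LinearRegression().fit(reflectance, pm10)
pred = model.predict(reflectance)

r = np.corrcoef(pm10, pred)[0, 1]                            # correlation coefficient R
rmse = np.sqrt(np.mean((pm10 - pred) ** 2))                  # root-mean-square error
print(f"R = {r:.3f}, RMSE = {rmse:.2f} ug/m3")
```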

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 430
678 Realization of Hybrid Beams Inertial Amplifier

Authors: Somya Ranjan Patro, Abhigna Bhatt, Arnab Banerjee

Abstract:

Inertial amplifiers have recently gained increasing attention as a new mechanism for the vibration control of structures. Theoretical investigations are currently being undertaken by researchers to reveal their fundamentals and to understand their underlying principles in altering the structural response to dynamic loadings. This paper presents experimental and analytical studies on the dynamic characteristics of a hybrid beam inertial amplifier (HBIA). The analytical formulation of the HBIA has been derived by implementing the spectral element method and rigid body dynamics. This formulation gives the relation between the dynamic force and the response of the structure in the frequency domain. Further, experiments have been performed to validate the proposed HBIA. The experimental setup consists of a 3D-printed HBIA of polylactic acid (PLA) material screwed onto the base plate of the shaker system. Two accelerometers are used to study the response: one at the base plate of the shaker and one at the top of the inertial amplifier. A force transducer is also placed between the base plate and the inertial amplifier to measure the total load transferred from the base plate to the inertial amplifier. The time-domain responses obtained from the accelerometers are converted into the frequency domain using the Fast Fourier Transform (FFT) algorithm. The experimental transmittance values are successfully validated against the analytical results, providing essential confidence in the proposed methodology.
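A minimal sketch of how the transmittance could be estimated from the two accelerometer records via the FFT is given below; the sampling rate and the synthetic base/top signals are illustrative assumptions, not the test data.

```python
# Sketch: estimating transmittance from two accelerometer records via the FFT;
# the sampling rate and synthetic signals below are illustrative, not the test data.
import numpy as np

fs = 2000                                       # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
base = np.sin(2 * np.pi * 50 * t)               # base-plate acceleration (synthetic)
top = 0.4 * np.sin(2 * np.pi * 50 * t - 0.3)    # response atop the inertial amplifier

freqs = np.fft.rfftfreq(len(t), d=1 / fs)
transmittance = np.abs(np.fft.rfft(top)) / (np.abs(np.fft.rfft(base)) + 1e-12)

k = np.argmin(np.abs(freqs - 50))               # transmittance at the excitation frequency
print(f"|T(50 Hz)| ~ {transmittance[k]:.2f}")
```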

Keywords: inertial amplifier, fast fourier transform, natural frequencies, polylactic acid, transmittance, vibration absorbers

Procedia PDF Downloads 83
677 Indian Business-Papers in Industrial Revolution 4.0: A Paradigm Shift

Authors: Disha Batra

Abstract:

The Industrial Revolution 4.0 is quite different, and a paradigm shift is underway in the media industry. With the advent of automated journalism and social media platforms, newspaper organizations have changed the way news is gathered and reported. The emergence of the fourth industrial revolution in the early 21st century has forced newspapers to adapt to changing technologies in order to remain relevant. This paper investigates the content of Indian business-papers in the era of the fourth industrial revolution and how these organizations have evolved in the time of convergence. The study is a content analysis of the top three Indian business dailies, as ranked by the IRS (Indian Readership Survey) 2017, over a decade. A parametric analysis of different parameters (source of information, use of illustrations, advertisements, layout and framing, etc.) has been carried out to identify the distinct adaptations and modifications made by these dailies. The paper also dwells upon the thematic analysis of these newspapers in order to explore the coverage given to various sub-themes of EBF (economic, business, and financial) journalism. Further, this study examines the effect of high-speed algorithm-based trading, in the aftermath of the fourth industrial revolution, on the creative and investigative aspects of delivering financial stories in these newspapers. The study indicates an ongoing paradigm shift in the business newspaper industry, with a clear change in the sources of information gathering along with a subtle increase in the coverage of financial news stories over time.

Keywords: business-papers, business news, financial news, industrial revolution 4.0

Procedia PDF Downloads 101