Search results for: signal detection theory
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9320


7490 Nonlinear Control of Mobile Inverted Pendulum: Theory and Experiment

Authors: V. Sankaranarayanan, V. Amrita Sundari, Sunit P. Gopal

Abstract:

This paper presents the design and implementation of a nonlinear controller for the point-to-point control of a mobile inverted pendulum (MIP). The controller is designed based on the kinematic model of the MIP to stabilize all four coordinates. The stability of the closed-loop system is proved using Lyapunov stability theory. The proposed controller is validated through numerical simulations and also implemented in a laboratory prototype. Results are presented to evaluate the performance of the proposed closed-loop system.
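
The kinematic, Lyapunov-based design sketched in the abstract can be illustrated numerically. The code below is not the authors' controller: it applies a standard polar-coordinates feedback law to a generic unicycle-type kinematic model (the gains k_rho and k_alpha and the time step are illustrative choices), and checks that the distance to the goal, which acts as a Lyapunov-like function, shrinks along the trajectory.

```python
import math

def drive_to_origin(x, y, th, k_rho=1.0, k_alpha=3.0, dt=0.01, steps=2000):
    """Point-to-point control of a unicycle-type kinematic model.

    v is scaled by cos(alpha) so that rho' = -k_rho * rho * cos(alpha)**2 <= 0,
    i.e. V = rho**2 is non-increasing along trajectories (Lyapunov argument).
    """
    for _ in range(steps):
        rho = math.hypot(x, y)              # distance to the goal (origin)
        if rho < 1e-6:
            break
        alpha = math.atan2(-y, -x) - th     # heading error toward the goal
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to (-pi, pi]
        v = k_rho * rho * math.cos(alpha)   # may reverse if facing away
        w = k_alpha * alpha                 # turn toward the goal
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return math.hypot(x, y)

# Starting about 1.41 m from the goal, the controller converges close to it.
assert drive_to_origin(1.0, 1.0, 0.0) < 0.1
```

Note that this sketch covers only the mobile-base kinematics; the full MIP has four coordinates, including the pendulum tilt, which the paper's switched controller also stabilizes.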

Keywords: mobile inverted pendulum, switched control, nonlinear systems, Lyapunov stability

Procedia PDF Downloads 328
7489 Multimodal Characterization of Emotion within Multimedia Space

Authors: Dayo Samuel Banjo, Connice Trimmingham, Niloofar Yousefi, Nitin Agarwal

Abstract:

Technological advancement and its omnipresent connectivity have pushed humans past the boundaries and limitations of a computer screen, physical state, or geographical location. It has provided a depth of avenues that facilitate human-computer interaction that was once inconceivable, such as audio and body-language detection. Given the complex modalities of emotions, it becomes vital to study human-computer interaction, as it is the commencement of a thorough understanding of the emotional state of users and, in the context of social networks, the producers of multimodal information. This study first acknowledges the higher classification accuracy of multimodal emotion detection systems compared to unimodal solutions. Second, it explores the characterization of multimedia content based on its emotions and the coherence of emotion across different modalities by utilizing deep learning models to classify emotion in each modality.

Keywords: affective computing, deep learning, emotion recognition, multimodal

Procedia PDF Downloads 158
7488 The Evolution and Driving Forces Analysis of Urban Spatial Pattern in Tibet Based on Archetype Theory

Authors: Qiuyu Chen, Bin Long, Junxi Yang

Abstract:

Located in the southwest of the "roof of the world", Tibet is the origin center of Tibetan culture. Lhasa, Shigatse and Gyantse are three famous historical and cultural cities in Tibet. They have always been prominent political, economic and cultural cities and have accumulated the unique aesthetic orientation and value consciousness of Tibet's urban construction. "Archetype" usually refers to the theoretical origin of things, the precipitation of the collective unconscious. Archetype theory fundamentally explores the dialectical relationship between image expression, original form and behavior mode. By abstracting and describing the typical phenomena or imagery of the archetype object, one can observe the essence of objects and explore the ways in which their phenomena arise. Applying archetype theory to the field of urban planning helps to gain insight into, evaluate, and restructure the complex and ever-changing internal structural units of cities. According to existing field investigations, the Dzong, temple, Linka and traditional residential systems are important structural units that constitute the urban space of Lhasa, Shigatse and Gyantse. This article applies the thinking method of archetype theory, starting from the imagery expression of urban spatial pattern and using technologies such as ArcGIS, Depthmap, and computer vision to identify the spatial representation and plane relationships of the three cities through remote sensing images and historical maps. Based on historical records, the spatial characteristics of the cities in different historical periods are interpreted hierarchically, attempting to clarify the origin of the formation and evolution of urban pattern imagery from the perspectives of the geopolitical environment, social structure, religious theory, etc., and to reveal the growth laws and key driving forces of the cities.
The research results can provide technical and material support for important behaviors such as urban restoration, spatial intervention, and promoting transformation in the region.

Keywords: archetype theory, urban spatial imagery, original form and pattern, behavioral driving force, Tibet

Procedia PDF Downloads 64
7487 Converse to the Sherman Inequality with Applications in Information Theory

Authors: Ana Barbir, S. Ivelic Bradanovic, D. Pecaric, J. Pecaric

Abstract:

We prove a converse to Sherman's inequality. Using the concept of f-divergence, we obtain inequalities for well-known entropies, such as the Shannon entropy, which have applications in many applied sciences, for example, information theory, biology and economics. The Zipf-Mandelbrot law improves the accounting for low-rank words in a corpus; its applications can be found in linguistics and the information sciences, and it is also widely applicable in ecological field studies. We also introduce an entropy by applying the Zipf-Mandelbrot law and derive some related inequalities.
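
The entropy the authors build from the Zipf-Mandelbrot law can be illustrated numerically. The sketch below (the parameter values n, q and s are arbitrary choices, not taken from the paper) forms the Zipf-Mandelbrot distribution p_k proportional to 1/(k+q)^s and computes its Shannon entropy, which is bounded above by ln(n), the entropy of the uniform distribution.

```python
import math

def zipf_mandelbrot(n, q, s):
    """Probabilities p_k proportional to 1/(k + q)**s for ranks k = 1..n."""
    weights = [(k + q) ** -s for k in range(1, n + 1)]
    z = sum(weights)                        # normalizing constant
    return [w / z for w in weights]

def shannon_entropy(p):
    """Shannon entropy in nats: H(p) = -sum_k p_k * ln(p_k)."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

p = zipf_mandelbrot(1000, q=2.0, s=1.1)
assert abs(sum(p) - 1.0) < 1e-12            # a valid probability distribution
h = shannon_entropy(p)
assert 0.0 < h < math.log(1000)             # strictly below the uniform bound
```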

Keywords: f-divergence, majorization inequality, Sherman inequality, Zipf-Mandelbrot entropy

Procedia PDF Downloads 169
7486 Fault Diagnosis by Thresholding and Decision Tree with Neuro-Fuzzy System

Authors: Y. Kourd, D. Lefebvre

Abstract:

The monitoring of industrial processes is required to ensure the operating conditions of industrial systems through automatic detection and isolation of faults. This paper proposes a method of fault diagnosis based on a neuro-fuzzy hybrid structure that combines threshold selection and a decision tree. The method is validated with the DAMADICS benchmark. In the first phase, a model representing the normal state of the system is constructed for fault detection. Fault signatures are obtained through residuals analysis and the selection of appropriate thresholds. These signatures yield groups of non-separable faults. In the second phase, we build faulty models to capture the faults that cannot be isolated in the first phase. In the last phase, we construct the decision tree that isolates these faults.
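
The first phase described above, thresholding residuals into fault signatures, can be sketched in a few lines. This is a generic illustration, not the DAMADICS models: each residual is compared with its threshold, and the resulting binary vector is the fault signature used to group faults that the thresholds alone cannot separate.

```python
def fault_signature(residuals, thresholds):
    """Binary signature: 1 where a residual exceeds its threshold."""
    return tuple(int(abs(r) > t) for r, t in zip(residuals, thresholds))

thresholds = [0.5, 0.5, 0.5]

# Nominal behaviour: no residual fires.
assert fault_signature([0.1, -0.2, 0.3], thresholds) == (0, 0, 0)

# Two faults with the same signature are non-separable in this phase
# and would be passed on to the faulty-model / decision-tree phases.
sig_a = fault_signature([0.9, 0.1, 0.2], thresholds)
sig_b = fault_signature([0.7, 0.2, 0.1], thresholds)
assert sig_a == sig_b == (1, 0, 0)
```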

Keywords: decision tree, residuals analysis, ANFIS, fault diagnosis

Procedia PDF Downloads 625
7485 Effect of Hybridization of Composite Material on Buckling Analysis with Elastic Foundation Using the High Order Theory

Authors: Benselama Khadidja, El Meiche Noureddine

Abstract:

This paper presents the effect of material hybridization on the variation of the non-dimensional critical buckling load of different cross-ply laminated plates resting on elastic foundations of the Winkler and Pasternak types, subjected to combined uniaxial and biaxial loading, using two-variable refined plate theories. Governing equations are derived from the principle of virtual displacement; the formulation is based on a new shear deformation function in which transverse shear deformation effects vary parabolically across the thickness, satisfying the shear-stress-free surface conditions. These equations are solved analytically using the Navier solution for a simply supported plate. The influence of the various geometric and material parameters, the thickness ratio, and the number of layers of symmetric and antisymmetric hybrid laminates has been investigated to find the critical buckling loads. Numerical results for several examples are presented and compared with models available in the literature.

Keywords: buckling, hybrid cross-ply laminates, Winkler and Pasternak, elastic foundation, two variables plate theory

Procedia PDF Downloads 483
7484 Right Solution of Geodesic Equation in Schwarzschild Metric and Overall Examination of Physical Laws

Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim

Abstract:

108 years have passed since physicists first explained astronomical and physical phenomena by solving the geodesic equations in the Schwarzschild metric. However, when solving these equations, they did not correctly solve one branch of the spatial components of the four-dimensional force, and they did not derive the physical laws correctly through physical analysis of the results obtained. In addition, they did not treat astronomical and physical phenomena in a physical way based on correct physical laws obtained from the solution of the geodesic equations in the Schwarzschild metric. Some former scholars therefore noted that the theoretical basis of Einstein's general theory of relativity was obscure and incorrect, but they did not give a correct physical solution to the problems. Furthermore, since the general theory of relativity has not given a quantitative solution to these problems, the generalization of gravitational theory has not yet been completed, although former scholars have attempted it. To solve the problems, it is necessary to examine the obscure and incorrect aspects of the general theory of relativity on the basis of physical laws and to find a methodology for resolving them. As a first step toward this purpose, the correct solution of the geodesic equation in the Schwarzschild metric is presented. Next, the physical laws found through physical analysis of the results are presented, the obscure and incorrect problems are identified and analyzed on the basis of these laws, and an experimental verification of the physical laws is given.

Keywords: equivalence principle, general relativity, geometrodynamics, Schwarzschild, Poincaré

Procedia PDF Downloads 14
7483 Hybrid Graphene Based Nanomaterial as Highly Efficient Catalyst for the Electrochemical Determination of Ciprofloxacin

Authors: Tien S. H. Pham, Peter J. Mahon, Aimin Yu

Abstract:

The detection of drug molecules by voltammetry has attracted great interest over the past years. However, many drug molecules exhibit poor electrochemical signals at common electrodes, which results in low detection sensitivity. An efficient way to overcome this problem is to modify electrodes with functional materials. Since its discovery in 2004, graphene (or reduced graphene oxide) has emerged as one of the most studied two-dimensional carbon materials in condensed matter physics, electrochemistry, and related fields due to its exceptional physicochemical properties. Additionally, continuous technological development has opened a new window for the successful fabrication of many novel graphene-based nanomaterials for electrochemical analysis. This research aims to synthesize and characterize gold nanoparticle-coated beta-cyclodextrin-functionalized reduced graphene oxide (Au NP–β-CD–RGO) nanocomposites with highly conductive and strongly electro-catalytic properties, as well as excellent supramolecular recognition abilities, for the modification of electrodes. The electrochemical response of ciprofloxacin at the as-prepared nanocomposite-modified electrode was effectively amplified and much higher than that at the bare electrode. The linear concentration range was from 0.01 to 120 µM, with a detection limit of 2.7 nM using differential pulse voltammetry. Thus, the Au NP–β-CD–RGO nanocomposite has great potential as an ideal material for constructing sensitive sensors for the electrochemical determination of ciprofloxacin and similar antibacterial drugs, based on its excellent stability, selectivity, and reproducibility.
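
The figures quoted at the end of the abstract, a linear calibration range and a detection limit, are typically obtained by fitting peak current against concentration and applying the common 3-sigma rule, LOD = 3 * sigma_blank / slope. The sketch below uses made-up numbers (the slope, intercept and blank noise are illustrative, not the paper's data):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: peak current (uA) vs. concentration (uM).
conc = [0.01, 0.1, 1.0, 10.0, 50.0, 120.0]
current = [0.8 * c + 0.05 for c in conc]      # noise-free for the sketch

slope, intercept = fit_line(conc, current)
assert abs(slope - 0.8) < 1e-9
assert abs(intercept - 0.05) < 1e-9

sigma_blank = 0.002                           # std. dev. of blank signal (assumed)
lod = 3 * sigma_blank / slope                 # 3-sigma detection limit, in uM
assert abs(lod - 0.0075) < 1e-9
```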

Keywords: Au nanoparticles, β-CD, ciprofloxacin, electrochemical determination, graphene based nanomaterials

Procedia PDF Downloads 188
7482 Improved Reuse and Storage Performances at Room Temperature of a New Environmental-Friendly Lactate Oxidase Biosensor Made by Ambient Electrospray Deposition

Authors: Antonella Cartoni, Mattea Carmen Castrovilli

Abstract:

A biosensor for lactate detection has been developed using an environmentally friendly approach. The biosensor is based on lactate oxidase (LOX) and has remarkable capabilities for reuse and storage at room temperature. The manufacturing technique employed is ambient electrospray deposition (ESD), which enables efficient and sustainable immobilization of the LOX enzyme on a cost-effective commercial screen-printed Prussian blue/carbon electrode (PB/C-SPE). The study demonstrates that the ESD technology allows the biosensor to be stored at ambient pressure and temperature for extended periods without affecting the enzymatic activity. The biosensor can be stored for up to 90 days without requiring specific storage conditions, and it can be reused for up to 24 measurements on both freshly prepared electrodes and electrodes that are three months old. The LOX-based biosensor exhibits a linear range of lactate detection between 0.1 and 1 mM, with a limit of detection of 0.07±0.02 mM. Additionally, it does not exhibit any memory effects. The immobilization process does not involve the use of entrapment matrices or hazardous chemicals, making it environmentally sustainable and non-toxic compared to current methods. Furthermore, the application of an electrospray deposition cycle to previously used biosensors rejuvenates their performance, making them comparable to freshly made biosensors. This highlights the excellent recycling potential of the technique, eliminating the waste associated with disposable devices.

Keywords: green friendly, reuse, storage performance, immobilization, matrix-free, electrospray deposition, biosensor, lactate oxidase, enzyme

Procedia PDF Downloads 65
7481 Simulation-Based Optimization Approach for an Electro-Plating Production Process Based on Theory of Constraints and Data Envelopment Analysis

Authors: Mayada Attia Ibrahim

Abstract:

Evaluating and developing the electroplating production process is a key challenge for this type of process, which is influenced by several factors such as process parameters, process costs, and production environments. Analyzing and optimizing all these factors together requires extensive analytical techniques that are not available in real industrial settings. This paper presents a practice-based framework for the evaluation and optimization of some of the crucial factors that affect the costs and production times associated with this type of process: energy costs, material costs, and product flow times. Design of Experiments, Discrete-Event Simulation, and the Theory of Constraints are used, respectively, to identify the most significant factors affecting the production process, to simulate a real production line so as to recognize the effect of these factors, and to assign possible bottlenecks. Several scenarios are generated as corrective strategies for improving the production line. Following that, an input-oriented CCR data envelopment analysis (DEA) model is used to evaluate and optimize the suggested scenarios.
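
The final evaluation step can be illustrated in the simplest setting. With a single input and a single output, the input-oriented CCR efficiency of each decision-making unit (here, each candidate scenario) reduces to its output/input ratio divided by the best ratio observed; the general multi-input, multi-output case requires solving one linear program per unit. The numbers below are illustrative, not from the paper:

```python
def ccr_efficiency_single(inputs, outputs):
    """Input-oriented CCR efficiency, single-input single-output special case.

    Efficiency of unit j = (output_j / input_j) / max_k (output_k / input_k),
    so efficient units score 1.0 and all others score in (0, 1).
    """
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Three hypothetical scenarios: input = energy cost, output = throughput.
effs = ccr_efficiency_single(inputs=[2.0, 4.0, 3.0], outputs=[4.0, 6.0, 3.0])
assert effs == [1.0, 0.75, 0.5]
assert max(effs) == 1.0   # at least one unit lies on the efficient frontier
```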

Keywords: electroplating process, simulation, design of experiment, performance optimization, theory of constraints, data envelopment analysis

Procedia PDF Downloads 97
7480 Exclusive Value Adding by iCenter Analytics on Transient Condition

Authors: Zhu Weimin, Allegorico Carmine, Ruggiero Gionata

Abstract:

During decades of Baker Hughes (BH) iCenter experience, it has been demonstrated that, in addition to conventional insights on equipment under steady operating conditions, insights on transient conditions can add significant and exclusive value for anomaly detection, downtime saving, and predictive maintenance. Our work shows examples from the BH iCenter experience to introduce the advantages and features of transient-condition analytics: (i) operation under critical engine conditions, e.g., high levels or high rates of change of temperature, pressure, flow, vibration, etc., that would not be reached in normal operation; (ii) management of dedicated sub-systems or components, many of which are often bottlenecks for reliability and maintenance; (iii) indirect detection of anomalies in the absence of instrumentation; (iv) repetitive sequences: if the data is properly processed, the engineering features of transients provide not only anomaly detection but also problem characterization and prognostic indicators for predictive maintenance; (v) engine variables accounting for fatigue analysis. iCenter has been developing and deploying a series of analytics based on transient conditions. They contribute exclusive added value in the following areas: (i) reliability improvement, (ii) startup reliability improvement, (iii) predictive maintenance, (iv) repair/overhaul cost reduction. Illustrative examples for each of these areas are presented in our study, focusing on challenges and adopted techniques ranging from purely statistical approaches to the implementation of machine learning algorithms. The obtained results demonstrate how value is obtained using transient-condition analytics in the BH iCenter experience.

Keywords: analytics, diagnostics, monitoring, turbomachinery

Procedia PDF Downloads 74
7479 Structural Damage Detection via Incomplete Model Data Using Output Data Only

Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on obtaining very efficient tools to detect damage in structures at an early state. Over the past decades, a subject that has received considerable attention in the literature is damage detection as determined by variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location in an incomplete structural system using output data only. The method indicates the damage from free vibration test data using the Two-Points Condensation (TPC) technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices. Large percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element; two cases are considered, and the method detects the damage and determines its location accurately in both. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures.
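
The localization rule described above, flagging large relative changes between current and baseline stiffness coefficients, can be sketched generically. The matrices below are toy 2x2 condensed stiffness matrices, not the paper's beam data:

```python
def relative_changes(k_ref, k_cur):
    """Element-wise relative change between two stiffness matrices."""
    return [[abs(c - r) / abs(r) if r != 0 else 0.0
             for r, c in zip(row_r, row_c)]
            for row_r, row_c in zip(k_ref, k_cur)]

def damaged_entries(k_ref, k_cur, tol=0.10):
    """Indices (i, j) whose coefficient changed by more than tol (10%)."""
    chg = relative_changes(k_ref, k_cur)
    return [(i, j) for i, row in enumerate(chg)
            for j, c in enumerate(row) if c > tol]

# Baseline (undamaged) and current condensed stiffness matrices.
k0 = [[200.0, -100.0],
      [-100.0, 100.0]]
k1 = [[200.0, -100.0],
      [-100.0, 72.0]]   # ~28% drop in one coefficient suggests local damage

assert damaged_entries(k0, k1) == [(1, 1)]
```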

Keywords: damage detection, optimization, signals processing, structural health monitoring, two points–condensation

Procedia PDF Downloads 365
7478 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental dyscalculia (DD) is defined as a particular learning difficulty with continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III), with both subtest scores at 90 or below. The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network in the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, the single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the average data of the 45 TD children.
Results showed that three of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In sum, the present study was able to distill differences in the architectural organization and brain activation of DD versus TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established from the 45 children is tentative, the results provide insights not only for future work on a "developmental brain norm" with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators in DD children for early detection and intervention.
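
Two ingredients of the analysis above can be sketched concretely: nodal efficiency from graph theory (the average inverse shortest-path distance from a node to every other node) and the Crawford-Howell single-case t statistic, which compares one case against a small control sample. The graph and scores below are toy data, not the fNIRS results:

```python
import math
from collections import deque

def nodal_efficiency(adj, node):
    """Average of 1/d(node, j) over all other nodes j (unweighted graph)."""
    n = len(adj)
    dist = {node: 0}
    queue = deque([node])
    while queue:                      # breadth-first search for shortest paths
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(1.0 / d for j, d in dist.items() if j != node) / (n - 1)

# Path graph 0 - 1 - 2: the middle node is the most efficient.
path = {0: [1], 1: [0, 2], 2: [1]}
assert nodal_efficiency(path, 1) == 1.0
assert nodal_efficiency(path, 0) == 0.75

def crawford_t(case, controls):
    """Crawford-Howell t: (case - mean) / (sd * sqrt((n + 1) / n)), df = n - 1."""
    n = len(controls)
    mean = sum(controls) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in controls) / (n - 1))
    return (case - mean) / (sd * math.sqrt((n + 1) / n))

assert abs(crawford_t(5.0, [1.0, 2.0, 3.0]) - 2.598) < 0.001
```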

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 53
7477 RBF Neural Network Based Adaptive Robust Control for Bounded Position/Force Control of Bilateral Teleoperation Arms

Authors: Henni Mansour Abdelwaheb

Abstract:

This study discusses the design of a bounded position/force feedback controller developed to ensure position and force tracking for bilateral teleoperation arms operating with variable delay and actuator saturation. An adaptive robust Radial Basis Function (RBF) neural network is used to estimate the environment torque, whose parameters are then sent from the slave site to the master site as a non-power signal to avoid passivity problems. Moreover, a smooth saturation function is applied to each controller term, providing a bounded control signal and preserving the system's actuators. Lastly, the Lyapunov approach demonstrates the global stability of the controlled system, and numerical experiment results further confirm the validity of the presented strategy.

Keywords: teleoperation manipulators system, time-varying delay, actuator saturation, adaptive robust RBF neural network approximation, uncertainties

Procedia PDF Downloads 75
7476 From Theory to Practice: Harnessing Mathematical and Statistical Sciences in Data Analytics

Authors: Zahid Ullah, Atlas Khan

Abstract:

The rapid growth of data in diverse domains has created an urgent need for effective utilization of mathematical and statistical sciences in data analytics. This abstract explores the journey from theory to practice, emphasizing the importance of harnessing mathematical and statistical innovations to unlock the full potential of data analytics. Drawing on a comprehensive review of existing literature and research, this study investigates the fundamental theories and principles underpinning mathematical and statistical sciences in the context of data analytics. It delves into key mathematical concepts such as optimization, probability theory, statistical modeling, and machine learning algorithms, highlighting their significance in analyzing and extracting insights from complex datasets. Moreover, this abstract sheds light on the practical applications of mathematical and statistical sciences in real-world data analytics scenarios. Through case studies and examples, it showcases how mathematical and statistical innovations are being applied to tackle challenges in various fields such as finance, healthcare, marketing, and social sciences. These applications demonstrate the transformative power of mathematical and statistical sciences in data-driven decision-making. The abstract also emphasizes the importance of interdisciplinary collaboration, as it recognizes the synergy between mathematical and statistical sciences and other domains such as computer science, information technology, and domain-specific knowledge. Collaborative efforts enable the development of innovative methodologies and tools that bridge the gap between theory and practice, ultimately enhancing the effectiveness of data analytics. Furthermore, ethical considerations surrounding data analytics, including privacy, bias, and fairness, are addressed within the abstract. 
It underscores the need for responsible and transparent practices in data analytics, and highlights the role of mathematical and statistical sciences in ensuring ethical data handling and analysis. In conclusion, this abstract highlights the journey from theory to practice in harnessing mathematical and statistical sciences in data analytics. It showcases the practical applications of these sciences, the importance of interdisciplinary collaboration, and the need for ethical considerations. By bridging the gap between theory and practice, mathematical and statistical sciences contribute to unlocking the full potential of data analytics, empowering organizations and decision-makers with valuable insights for informed decision-making.

Keywords: data analytics, mathematical sciences, optimization, machine learning, interdisciplinary collaboration, practical applications

Procedia PDF Downloads 93
7475 Social Responsibility in the Theory of Organisation Management

Authors: Patricia Crentsil, Alvina Oriekhova

Abstract:

The aim of this study is to determine social responsibility in the theory of organisation management. The main objective is to examine the link between accountability, transparency, and ethics in organisation management. The study seeks to answer questions that have received inadequate attention in the social responsibility literature: specifically, how do accountability, transparency of policy, and ethical conduct enhance organisation management? The target population of the study comprises deans and heads of departments of public universities and technical universities in Ghana. The study used a purposive sampling technique to select the public universities and technical universities in Ghana and adopted a simple random technique to select 300 participants from all technical universities and 500 participants from all traditional universities in Ghana. The sample size will be 260, using a confidence level of 95% and a margin of error of 5%. The study used both primary and secondary data and adopted an exploratory design to address the research questions. Results indicated that accountability, transparency, and ethics have a positive, significant link with organisation management. The study suggests that management can motivate an organisation to act in a socially responsible manner.

Keywords: corporate social responsibility, organisation management, organisation management theory, social responsibility

Procedia PDF Downloads 121
7474 Improved Wearable Monitoring and Treatment System for Parkinson’s Disease

Authors: Bulcha Belay Etana, Benny Malengier, Janarthanan Krishnamoorthy, Timothy Kwa, Lieva VanLangenhove

Abstract:

Electromyography measures the electrical activity of muscles using surface or needle electrodes to monitor various disease conditions. Recent developments in the acquisition of electromyogram signals using textile electrodes facilitate wearable devices, enabling patients to monitor and control their health status outside of healthcare facilities. Here, we have developed and tested wearable textile electrodes to acquire electromyography signals from patients suffering from Parkinson's disease and incorporated a feedback-control system to relieve muscle cramping through a thermal stimulus. In brief, textile electrodes made of stainless steel were knitted into a textile fabric as a sleeve, and their electrical characteristics, such as signal-to-noise ratio, were compared with those of traditional electrodes. To relieve muscle cramping, a heating element made of stainless-steel conductive yarn sewn onto cotton fabric, coupled with a vibration system, was developed. The system integrated a microcontroller and a MyoWare muscle sensor to activate the heating element and the vibration motor when cramping occurs and to deactivate them when the muscle cramping subsides. An optimum therapeutic temperature of 35.5 °C is maintained by continuous temperature monitoring, deactivating the heating system when this threshold value is reached. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, comparable to the traditional electrode's value of 7.05 dB. For a given 9 V power supply, the rise time was about 6 minutes for the developed heating element to reach the optimum temperature.
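
The signal-to-noise figures quoted above (6.38 dB vs. 7.05 dB) follow the usual definition SNR = 10 * log10(P_signal / P_noise) with mean-square power. A minimal sketch with made-up sample values (not the study's recordings):

```python
import math

def snr_db(signal, noise):
    """SNR in decibels from mean-square powers of two sample sequences."""
    p_signal = sum(s * s for s in signal) / len(signal)
    p_noise = sum(n * n for n in noise) / len(noise)
    return 10.0 * math.log10(p_signal / p_noise)

# A toy EMG-like burst at twice the amplitude of the baseline noise:
signal = [2.0, -2.0, 2.0, -2.0]
noise = [1.0, -1.0, 1.0, -1.0]
assert abs(snr_db(signal, noise) - 6.0206) < 0.001   # 10 * log10(4) dB
```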

Keywords: smart textile system, wearable electronic textile, electromyography, heating textile, vibration therapy, Parkinson’s disease

Procedia PDF Downloads 106
7473 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach

Authors: Adeep Hande, Shubham Agarwal

Abstract:

This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of a pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, with the multilingual model achieving an F1 score of 0.83. Furthermore, pseudolabeling significantly improved the performance of the BERT-based models. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
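
The pseudolabeling step can be sketched without any deep learning stack. The toy below substitutes a nearest-centroid classifier for BERT (everything here, the data, margin rule, and threshold, is illustrative): unlabeled points whose prediction is confident enough are added to the training set with their predicted labels, and the model would then be refit on the grown set.

```python
def centroids(data):
    """Per-class mean of 1-D points; data is a list of (x, label) pairs."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_with_confidence(x, cents):
    """Label of the nearest centroid plus a [0, 1] margin-based confidence."""
    d = sorted((abs(x - c), y) for y, c in cents.items())
    (d0, label), (d1, _) = d[0], d[1]
    conf = (d1 - d0) / (d1 + d0) if d1 + d0 > 0 else 1.0
    return label, conf

def pseudolabel(labeled, unlabeled, threshold=0.5):
    """One self-training round: keep confident predictions as new labels."""
    cents = centroids(labeled)
    extra = []
    for x in unlabeled:
        label, conf = predict_with_confidence(x, cents)
        if conf >= threshold:
            extra.append((x, label))
    return labeled + extra

labeled = [(0.0, 0), (0.1, 0), (0.9, 1), (1.0, 1)]
unlabeled = [0.05, 0.95, 0.5]                 # 0.5 sits between both classes

grown = pseudolabel(labeled, unlabeled)
assert (0.05, 0) in grown and (0.95, 1) in grown
assert all(x != 0.5 for x, _ in grown)        # low-confidence point rejected
```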

Keywords: large language models, semi-supervised learning, sexism detection, data sparsity

Procedia PDF Downloads 70
7472 Foreign Television Programme Contents and Effects on Youths

Authors: Eyitayo Francis Adanlawo

Abstract:

Television is one of humanity's most important means of communication, a channel through which societal norms and values can be transferred to youths. The imagination created by foreign television programmes ultimately leads to strong emotional responses. Though some foreign films and programmes are educational in nature, the view that the majority of them are inimical to youths' positive belief systems is rife. This has been occasioned by the adoption of repugnant alien cultures, the imitation of vulgar slang and weird hairdos, and, most visibly, an adjustment in values. This study theoretically approaches two research questions: do youths act out the lifestyles of characters seen in foreign films? Are moral decadence, indiscipline, and vulgar habits the result of the contents of foreign programmes and films? To establish the basis for relating the foreign films watched to social vices such as violence, sexual pervasiveness, and cultural and traditional moral pollution among youths, Observational Learning Theory and Reinforcement Theory were utilized to answer the research questions and establish the effect of foreign film content on youths. We conclude that the constant showcasing of violent themes is highly responsible for the upsurge in social vices prevalent among youths and can destroy the basis of societal and cultural orientation. Recommendations made range from the need for government to halt the importation of uncensored foreign films, to the need for local films to portray more positive messages, and to the need for concrete steps to eradicate or minimise programmes capable of exerting negative influence.

Keywords: media (television), moral decadence, youths, values, observational learning theory, reinforcement theory

Procedia PDF Downloads 252
7471 Exploring the ‘Many Worlds’ Interpretation in Both a Philosophical and Creative Literary Framework

Authors: Jane Larkin

Abstract:

Combining elements of philosophy, science, and creative writing, this investigation explores how a philosophically structured science-fiction novel can challenge the linearity and singularity of time through the ‘many worlds’ theory. This concept is addressed through the creation of a research exegesis and an accompanying creative artefact, designed to be read in conjunction with each other in an explorative, interwoven manner. Research undertaken into scientific concepts, such as the ‘many worlds’ interpretation of quantum mechanics, and into diverse philosophers and their ideologies of time, is embodied in an original science-fiction narrative titled It Goes On. The five frames that make up the creative artefact are informed not only by five leading philosophers and their philosophies of time but also by an appreciation of the research, which comes first in the paper. Traditional approaches to storytelling are creatively and innovatively inverted in several ways, thus challenging the singularity and linearity of time. Further nonconventional literary techniques include an abstract narrator, embodied by time, both a concept and a figure in the text, whose voice and vantage point in relation to death further the unreliability of the notion of time. These techniques challenge individuals’ understanding of complex scientific and philosophical views in a variety of ways. The science-fiction genre is essential when considering the speculative nature of It Goes On, which deals with parallel realities and is a fantastical exploration of human ingenuity in plausible futures. This paper therefore documents the research-led methodology used to create It Goes On, the application of the ‘many worlds’ theory within a framed narrative, and the many innovative techniques used to contribute new knowledge in a variety of fields.

Keywords: time, many-worlds theory, Heideggerian philosophy, framed narrative

Procedia PDF Downloads 84
7470 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars

Authors: Masauso Moses Phiri

Abstract:

Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on biocatalytic enlargement of AuNS guided by GOx. Gold nanoparticles of 18 nm in diameter were synthesized using the citrate method. Using these as seeds, a modified seeded method was followed for the synthesis of monodispersed gold nanostars. Both the spherical and star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay through growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated by several control and optimization experiments. Sensing conditions, such as the concentration of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings, as well as color changes, were observed. The results showed that the AuNS absorbance spectra blue-shifted and increased in intensity as the glucose concentration was elevated. Control experiments indicated no growth of AuNS in the absence of GOx, glucose or molecular O₂. Increased glucose concentration led to enhanced growth of AuNS. Detection of glucose was also possible by naked eye. Color development was near complete in approximately 10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different concentrations of glucose within approximately 50 seconds and was near complete at approximately 120 seconds.
A calibration curve for the quantitative measurement of glucose was derived. The magnitude of the wavelength shifts and absorbance values increased concomitantly with glucose concentration up to 90 µg/mL, beyond which the response leveled off. The range of glucose concentrations producing a blue shift in the localized surface plasmon resonance (LSPR) absorption maxima was 10–90 µg/mL, and the limit of detection was 0.12 µg/mL. This enabled the construction of a direct, sensitive plasmonic colorimetric glucose assay using AuNS that is rapid and cost-effective, with naked-eye detection. It has great potential for technology transfer to point-of-care devices.
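A detection limit of this kind is conventionally estimated from the calibration curve as LOD = 3.3·σ(blank)/slope. The sketch below illustrates the arithmetic; the calibration points and blank noise are made-up stand-ins, since the abstract does not report the raw data.

```python
# Limit-of-detection estimate from a linear calibration curve using the
# common LOD = 3.3 * sigma_blank / slope convention. The concentrations,
# responses, and blank noise below are illustrative, not the paper's data.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical LSPR shift (nm) vs glucose (ug/mL) within the linear range.
conc = [10, 30, 50, 70, 90]
shift = [2.1, 6.0, 10.2, 13.9, 18.1]

slope, intercept = linear_fit(conc, shift)
sigma_blank = 0.008            # std. dev. of the blank response (assumed)
lod = 3.3 * sigma_blank / slope
print(f"slope = {slope:.4f} nm per ug/mL, LOD = {lod:.3f} ug/mL")
```

With real replicate blank measurements, `sigma_blank` would be computed rather than assumed.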

Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic

Procedia PDF Downloads 152
7469 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data

Authors: M. Mueller, M. Kuehn, M. Voelker

Abstract:

In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach to efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems as a basis for process analysis and optimization. The overall methodology focuses on retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE)-based transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers based on distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for visualizing relative distance data. Because the database is already categorized by process type, classification methods from the field of supervised learning (e.g. Support Vector Machines) are used.
Achieving the necessary data quality requires the selection of suitable methods, filters for smoothing the signal variations that occur in the RSSI, the integration of methods for determining correction factors for possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based also have a significant influence on the result quality of the classification methods, correction models and position-profile visualization methods used. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation, and similar potential can be observed when varying the parameters of the signal-smoothing methods and filters. There is therefore increased interest in detailed results on how parameter and factor combinations influence data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the signal-smoothing methods, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: the automated assignment of defined process classes to distance data using selected classification algorithms, and visualization and reporting via a graphical user interface (GUI).
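Converting RSSI to a distance estimate, as the methodology above requires, is usually done with the log-distance path-loss model. The sketch below uses that standard model; the reference power and path-loss exponent are environment-dependent assumptions, not values from the abstract.

```python
# Log-distance path-loss model commonly used to turn a BLE RSSI reading
# into a distance estimate. The reference power and path-loss exponent
# are environment-dependent assumptions, not values from the abstract.

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Estimated distance in metres from an RSSI reading.

    tx_power_dbm: expected RSSI at the 1 m reference distance.
    n: path-loss exponent (~2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

# At the reference power the estimate equals the 1 m reference distance.
print(rssi_to_distance(-59.0))   # → 1.0
# A signal 20 dB weaker with n=2 implies roughly 10x the distance.
print(rssi_to_distance(-79.0))   # → 10.0
```

In practice the raw RSSI would first pass through the smoothing filters described above, since single readings fluctuate strongly.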

Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing

Procedia PDF Downloads 131
7468 Constructing a Grounded Theory of Parents' Musical Engagement with Their Premature Baby Contributing to Their Emerging Parental Identity in a Neonatal Unit

Authors: Elizabeth McLean, Katrina Skewes-McFerran, Grace Thompson

Abstract:

Scholarship highlights the need to further examine, better understand, and foster the process of becoming a parent to a premature baby in the neonatal context, so as to support the critical development of the parent-infant relationship. Music therapy research documents significant benefits of music therapy for neonatal physiological and neurodevelopmental function, reduced maternal anxiety, and validation of parents’ relationships with their premature babies, yet few studies examine the role of music in supporting parental identity. This multi-site study explored parents’ musical engagement with their hospitalised babies and parental identity in a neonatal unit (NU). In-depth interviews were conducted with nine parents of premature babies at varying time points in their NU journeys. Data collection and analysis were informed by constructivist grounded theory methodology. The findings, in the form of a substantive grounded theory, illuminated the contribution of parents’ musical engagement to their sense of parental identity in the NU. In particular, the significance of the baby’s level and type of response during musical interactions emerged as influencing parents’ capacity to engage in musical dialogue with their baby. Specific conditions that acted as both barriers to and facilitators of parents’ musical engagement across a high-risk pregnancy and NU admission also emerged. Recommendations for future research into the role of music and music therapy in supporting parental coping and the transition to parenthood during a high-risk pregnancy and birth, and beyond the NU, are discussed.

Keywords: grounded theory, musical engagement, music therapy, parental identity

Procedia PDF Downloads 181
7467 A New Separation/Preconcentration and Determination Procedure Based on Solidified Floating Organic Drop Microextraction (SFODME) of Lead by Using Graphite Furnace Atomic Absorption Spectrometry

Authors: Seyda Donmez, Oya Aydin Urucu, Ece Kok Yetimoglu

Abstract:

Solidified floating organic drop microextraction was used as a preconcentration method for trace amounts of lead. The analyte was complexed with 1-(2-pyridylazo)-2-naphthol, and 1-undecanol and acetonitrile were added as the extraction and disperser solvents, respectively. The influences of the analytical parameters, including pH, the volumes of the extraction and disperser solvents, the concentration of the chelating agent, and the salt concentration, were optimized. Under the optimum conditions, the detection limit for Pb(II) was determined. The procedure was validated by analysis of the NCS DC 73347a hair standard reference material, with satisfactory results. The developed procedure was successfully applied to food and water samples for the detection of Pb(II) ions.

Keywords: analytical methods, graphite furnace atomic absorption spectrometry, heavy metals, solidified floating organic drop microextraction

Procedia PDF Downloads 277
7466 Digital Signal Processor Implementation of a Novel Sinusoidal Pulse Width Modulation Algorithm for a Reduced Delta Inverter

Authors: Asma Ben Rhouma, Mahmoud Hamouda

Abstract:

The delta inverter is considered the reduced three-phase dc/ac converter topology: it contains only three two-quadrant power switches, compared to six in the conventional one. This reduced power conversion topology is widely considered for many industrial applications, such as electric traction and large photovoltaic systems. This paper focuses on a new sinusoidal pulse width modulation (SPWM) algorithm developed for the delta inverter. Owing to the inverter’s unconventional structure, the SPWM switching technique generates irregular modulating-function waveforms. The performance of the proposed SPWM technique was demonstrated through computer simulations of a delta inverter feeding a three-phase RL load. A Digital Signal Processor (DSP) implementation of the novel SPWM algorithm was then realized on a laboratory prototype of the delta inverter feeding an RL load and a squirrel-cage induction motor. Experimental results confirmed its high performance under the proposed SPWM method.
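For context, conventional SPWM gates a switch whenever a sinusoidal reference exceeds a triangular carrier. The sketch below shows only this textbook sine-triangle comparison; the paper's delta-inverter modulating functions are irregular and are not reproduced here, and all frequencies and the modulation index are illustrative.

```python
import math

# Textbook sine-triangle SPWM: the switch is gated on when the sinusoidal
# reference exceeds a unit-amplitude triangular carrier. This is the
# conventional scheme, not the paper's irregular delta-inverter waveforms.

def triangle(t, f_carrier):
    """Unit-amplitude triangular carrier with period 1/f_carrier."""
    phase = (t * f_carrier) % 1.0
    return 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase

def spwm_gate(t, f_ref=50.0, f_carrier=1050.0, m=0.8):
    """1 if the switch is gated on at time t, else 0 (m = modulation index)."""
    ref = m * math.sin(2 * math.pi * f_ref * t)
    return 1 if ref > triangle(t, f_carrier) else 0

# Averaged over one 50 Hz reference period, the duty cycle tracks
# (1 + m*sin)/2, so the mean sits close to 0.5.
samples = [spwm_gate(k / 100000.0) for k in range(2000)]
duty = sum(samples) / len(samples)
print(round(duty, 2))
```

Choosing the carrier frequency as an odd integer multiple of the reference (here 21×) is the usual synchronous-PWM convention for waveform symmetry.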

Keywords: delta inverter, SPWM, simulation, DSP implementation

Procedia PDF Downloads 164
7465 The Bayesian Premium Under Entropy Loss

Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita

Abstract:

Credibility theory is an experience-rating technique in actuarial science; it is one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers’ compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. This study focuses on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared error loss, which is symmetric, with both informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer’s belief about the insured’s risk level before the insured’s data are collected at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using the Lindley approximation, one of the suitable numerical approximation methods for such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean squared error criterion is used to compare the Bayesian premium estimator under the above loss functions.
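The ratio-of-integrals structure of the Bayesian premium can also be approximated by plain Monte Carlo, which makes the two loss functions easy to contrast: under squared error loss the Bayes estimator is the posterior mean, while under the entropy loss mentioned above it takes the form 1/E[1/θ | data]. The sketch below uses a deliberately simple stand-in model (exponential claims with an Exp(1) prior) rather than the paper's Lindley distribution, so the numbers are illustrative only.

```python
import math
import random

# Monte Carlo sketch of a Bayesian premium as a ratio of integrals,
# E[g(theta) | data] = ∫ g·L·pi dθ / ∫ L·pi dθ, sampled from the prior.
# Toy model (assumption): exponential claims with rate theta, Exp(1) prior.

random.seed(42)

def bayes_estimate(data, g, n_draws=100_000):
    """Posterior expectation of g(theta) via prior sampling with
    likelihood weights (a simple self-normalized estimator)."""
    num = den = 0.0
    for _ in range(n_draws):
        theta = random.expovariate(1.0)          # draw from the prior
        loglik = sum(math.log(theta) - theta * x for x in data)
        w = math.exp(loglik)                     # likelihood weight
        num += g(theta) * w
        den += w
    return num / den

claims = [1.2, 0.7, 2.3, 1.5]

# Squared error loss: the Bayes premium is the posterior mean of theta.
prem_se = bayes_estimate(claims, lambda t: t)
# Entropy loss: the Bayes estimator is 1 / E[1/theta | data].
prem_ent = 1.0 / bayes_estimate(claims, lambda t: 1.0 / t)
print(prem_se, prem_ent)
```

For this conjugate toy model the posterior is Gamma(5, 6.7), so the two premiums can be checked against 5/6.7 ≈ 0.746 and 4/6.7 ≈ 0.597; the asymmetric entropy loss pulls the estimate below the posterior mean.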

Keywords: bayesian estimator, credibility theory, entropy loss, monte carlo simulation

Procedia PDF Downloads 334
7464 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images until a sufficiently high total number of examples has been labeled. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues iteratively. The selection algorithm is a deep learning model, based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images containing the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed on semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a better-performing model than sequentially labeling the same amount of data, and in performance similar to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach yields a dataset with a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
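The select-label-retrain loop above can be sketched in a few lines. The paper scores novelty with a U-shaped deep network; the stand-in below simply scores an item by its feature-space distance to the nearest already-labeled item, which is the same "pick the most novel next" idea in miniature. Feature vectors and batch size are illustrative.

```python
# Iterative "label the most novel images first" selection. A simple
# stand-in novelty score (distance to the nearest labeled example)
# replaces the paper's U-shaped deep network.

def novelty(feature, labeled_features):
    """Euclidean distance to the nearest already-labeled feature vector."""
    return min(sum((a - b) ** 2 for a, b in zip(feature, f)) ** 0.5
               for f in labeled_features)

def select_batch(pool, labeled, k=2):
    """Pick the k most novel items from the unlabeled pool."""
    ranked = sorted(pool, key=lambda f: novelty(f, labeled), reverse=True)
    return ranked[:k]

labeled = [(0.0, 0.0), (0.1, 0.0)]                     # common examples
pool = [(0.05, 0.0), (5.0, 5.0), (0.0, 0.1), (4.0, 4.5)]

batch = select_batch(pool, labeled, k=2)
print(batch)  # → [(5.0, 5.0), (4.0, 4.5)]
```

The near-duplicates of the labeled cluster score lowest, so the two far-away points are surfaced for labeling first, exactly the behavior that diversifies the training set.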

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 136
7463 Diagnostic Evaluation of Urinary Angiogenin (ANG) and Clusterin (CLU) as Biomarker for Bladder Cancer

Authors: Marwa I. Shabayek, Ola A. Said, Hanan A. Attaia, Heba A. Awida

Abstract:

Bladder carcinoma is an important worldwide health problem. Both cystoscopy and urine cytology, used to detect bladder cancer, suffer from drawbacks: cystoscopy is invasive, and urine cytology shows low sensitivity in low-grade tumors. This study validates easier and less time-consuming techniques by evaluating the combined use of angiogenin and clusterin, in comparison and in combination with voided urine cytology, for the detection of bladder cancer. The study includes malignant (bladder cancer patients, n=50), benign (n=20), and healthy (n=20) groups. The studied groups were subjected to cystoscopic examination, detection of bilharzial antibodies, urine cytology, and estimation of urinary angiogenin and clusterin by ELISA. The overall sensitivity and specificity were 66% and 75% for angiogenin, 70% and 82.5% for clusterin, and 46% and 80% for voided urine cytology. The combined sensitivity of angiogenin and clusterin increased from 82% to 88% when urine cytology was added.
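For readers unfamiliar with how such figures arise, sensitivity and specificity come directly from confusion counts, and combining markers "in parallel" (call positive if any test is positive) raises sensitivity. The counts below are back-calculated illustrations consistent with the reported percentages, not the study's raw data, and the independence assumption in the combination formula is a simplification.

```python
# Sensitivity/specificity from confusion counts, plus the sensitivity of
# combining tests in parallel (positive if any test is positive, assuming
# independence). Counts and rates are illustrative, not the study's data.

def sensitivity(tp, fn):
    """True-positive rate: detected cancers / all cancers."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: correctly cleared controls / all controls."""
    return tn / (tn + fp)

# e.g. a marker flagging 33 of 50 cancers and clearing 15 of 20 benigns:
print(round(sensitivity(33, 17), 2))   # → 0.66
print(round(specificity(15, 5), 2))    # → 0.75

def combined_sensitivity_parallel(*sens):
    """A case is missed only if every test misses it (independence assumed)."""
    miss = 1.0
    for s in sens:
        miss *= (1.0 - s)
    return 1.0 - miss

# Two independent markers at 0.66 and 0.70 would jointly reach ~0.90;
# observed combined values are lower when markers are correlated.
print(round(combined_sensitivity_parallel(0.66, 0.70), 2))  # → 0.9
```

The gap between the independence prediction and the observed 82-88% is the usual signature of correlated biomarkers.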

Keywords: angiogenin, bladder cancer, clusterin, cytology

Procedia PDF Downloads 297
7462 Validation and Interpretation about Precedence Diagram for Start to Finish Relationship by Graph Theory

Authors: Naoki Ohshima, Ken Kaminishi

Abstract:

Four types of dependency, namely 'finish-to-start', 'finish-to-finish', 'start-to-start' and 'start-to-finish' (S-F), are modeled as logical relationships based on the PMBOK definition that 'the predecessor activity is defined as an activity to come before a dependent activity in a schedule'. However, a self-contradiction is found in PMBOK's precedence diagram for the S-F relationship. In this paper, the authors validate the logical S-F relationship by graph theory and propose a new interpretation of the precedence diagram for the S-F relationship.
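A graph-theoretic consistency check of the kind the abstract implies can be sketched by modeling each activity as start/finish event nodes and flagging a self-contradiction as a cycle in the resulting directed graph. The specific relations below are illustrative, not the paper's example.

```python
# Consistency check for precedence relations: model each activity as
# Start/Finish event nodes, add dependency edges, and flag a
# self-contradiction as a directed cycle. Example relations are illustrative.

def has_cycle(edges):
    """DFS cycle detection on a directed graph given as (u, v) edge pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in adj}

    def dfs(n):
        color[n] = GREY                      # on the current DFS path
        for m in adj[n]:
            if color[m] == GREY or (color[m] == WHITE and dfs(m)):
                return True                  # back edge found: cycle
        color[n] = BLACK                     # fully explored
        return False

    return any(color[n] == WHITE and dfs(n) for n in adj)

# Each activity's start precedes its own finish.
base = [("A.start", "A.finish"), ("B.start", "B.finish")]
# Start-to-finish: B cannot finish until A has started.
sf = [("A.start", "B.finish")]
print(has_cycle(base + sf))                               # → False
# Adding the reverse constraint closes a loop: a contradiction.
print(has_cycle(base + sf + [("B.finish", "A.start")]))   # → True
```

Acyclicity is exactly the condition for a valid schedule to exist, since a topological order of the event nodes then gives consistent start/finish times.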

Keywords: project time management, sequence activity, start-to-finish relationship, precedence diagram, PMBOK

Procedia PDF Downloads 270
7461 Chemical Fingerprinting of Complex Samples With the Aid of Parallel Outlet Flow Chromatography

Authors: Xavier A. Conlan

Abstract:

Speed of analysis is a significant limitation of current high-performance liquid chromatography/mass spectrometry (HPLC/MS) and ultra-high-pressure liquid chromatography (UHPLC)/MS systems, both of which are used in many forensic investigations. The flow-rate limitations of MS detection require a compromise in the chromatographic flow rate, which in turn reduces throughput and, when using modern columns, separation efficiency. Commonly, this restriction is combated by splitting the flow post-column prior to entry into the mass spectrometer. However, this results in a loss of sensitivity and a loss of efficiency due to the post-column extra-column dead volume. A new chromatographic column format known as 'parallel segmented flow' involves splitting the eluent flow within the column outlet end fitting; in this study, we present its application with mass spectrometric detection to interrogate the provenance of methamphetamine samples. Using parallel segmented flow, column flow rates as high as 3 mL/min were employed in the analysis of amino acids without post-column splitting to the mass spectrometer. Furthermore, when parallel segmented flow chromatography columns were employed, the sensitivity was more than twice that of conventional systems with post-column splitting when the same volume of mobile phase was passed through the detector. These findings suggest that this type of column technology will particularly enhance the capabilities of modern LC/MS, enabling both high throughput and sensitive mass spectral detection.

Keywords: chromatography, mass spectrometry, methamphetamine, parallel segmented outlet flow column, forensic sciences

Procedia PDF Downloads 491