Search results for: linear complexity
1952 A Study of ZY3 Satellite Digital Elevation Model Verification and Refinement with Shuttle Radar Topography Mission
Authors: Bo Wang
Abstract:
As the first high-resolution civilian optical satellite of its kind, the ZY-3 satellite obtains high-resolution multi-view images with three linear-array sensors. These images can be used to generate Digital Elevation Models (DEMs) through dense matching of stereo images. However, because clouds, forest, water, and buildings cover parts of the images, the dense matching results suffer from problems such as outliers and areas that fail to match (matching holes). This paper introduces an algorithm to verify the accuracy of DEMs generated from ZY-3 imagery against the Shuttle Radar Topography Mission (SRTM). Since the accuracy of SRTM (internal accuracy: 5 m; external accuracy: 15 m) is relatively uniform worldwide, it can be used to improve the accuracy of the ZY-3 DEM. Based on the analysis of large volumes of DEM and SRTM data, the processing is divided into two steps. First, the ZY-3 DEM and SRTM are registered using conjugate line features and area features matched between the two datasets. Then the ZY-3 DEM is refined by eliminating the matching outliers and filling the matching holes. The matching outliers are eliminated based on statistics from Local Vector Binning (LVB), and the matching holes are filled with elevations interpolated from SRTM. Accuracy statistics for the refined ZY-3 DEM are also reported.
Keywords: ZY-3 satellite imagery, DEM, SRTM, refinement
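Once the two datasets are co-registered, the hole-filling step reduces to substituting SRTM elevations into the cells where dense matching failed. The sketch below illustrates the idea with NumPy; the grids, the nodata value, and the function name are assumptions for illustration, not the paper's implementation, and the SRTM grid is assumed to be already resampled to the ZY-3 resolution.

```python
import numpy as np

def fill_dem_holes(zy3_dem, srtm_dem, hole_value=-9999.0):
    # Fill matching holes in a ZY-3 DEM with co-registered SRTM elevations
    filled = zy3_dem.copy()
    holes = np.isclose(zy3_dem, hole_value)  # mask of failed-match cells
    filled[holes] = srtm_dem[holes]          # substitute SRTM elevations
    return filled, holes

# Toy example: a 3x3 DEM tile with one matching hole
zy3 = np.array([[100.0,   101.0, 102.0],
                [ 99.0, -9999.0, 103.0],
                [ 98.0,   100.0, 101.0]])
srtm = np.full((3, 3), 100.5)
filled, holes = fill_dem_holes(zy3, srtm)
print(filled[1, 1])  # 100.5, taken from SRTM
```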
Procedia PDF Downloads 344
1951 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for supervised learning based facial expression recognition methods, because supervised methods cannot accommodate, in a limited amount of training data, all the appearance variability across faces with respect to race, pose, lighting, facial biases, and so on. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in typical applications such as video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification saves computational power. In this work, we propose a light-weight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on that KE point. As a result, the proposed method improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model
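The per-patch texture test can be pictured as follows: compute a Local Binary Pattern histogram for the reference neutral patch and for the current patch around a KE point, then score their similarity. This is a minimal sketch using scikit-image; the patch size, LBP parameters, and histogram-intersection score are illustrative assumptions, not the paper's exact statistical model.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, p=8, r=1):
    # Uniform LBP codes take values 0..p+1, hence p+2 histogram bins
    codes = local_binary_pattern(patch, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def neutral_similarity(reference_patch, current_patch):
    # Histogram intersection: close to 1 means the patch still looks neutral
    h_ref = lbp_histogram(reference_patch)
    h_cur = lbp_histogram(current_patch)
    return np.minimum(h_ref, h_cur).sum()

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
print(neutral_similarity(ref, ref))  # 1.0 for an identical patch
```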
Procedia PDF Downloads 339
1950 Noninvasive Continuous Glucose Monitoring Device Using a Photon-Assisted Tunneling Photodetector Based on a Quantum Metal-Oxide-Semiconductor
Authors: Wannakorn Sangthongngam, Melissa Huerta, Jaewoo Kim, Doyeon Kim
Abstract:
Continuous glucose monitoring systems are essential for diabetics to avoid health complications, but they are costly, especially when insurance does not fully cover the necessary diabetic testing kits. This paper proposes a noninvasive continuous glucose monitoring system as an accessible, low-cost, and painless alternative for accurate glucose measurement, helping to improve quality of life. A light source with a wavelength of 850 nm illuminates the fingertip, and a photodetector detects the transmitted light. Using SeeDevice's photon-assisted tunneling photodetector (PAT-PD)-based QMOS™ sensor, fluctuations of voltage due to photon absorption in blood cells are comparable to traditional glucose measurements. The performance of the proposed method was validated by comparing the transmitted-voltage readings of 4 test participants with measurements obtained from an Accu-Chek glucometer. The proposed method successfully estimated glucose concentrations through linear regression calculations.
Keywords: continuous glucose monitoring, non-invasive continuous glucose monitoring, NIR, photon-assisted tunneling photodetector, QMOS™, wearable device
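The voltage-to-concentration mapping described here is an ordinary least-squares line fitted to paired sensor/glucometer readings. A minimal sketch with NumPy follows; the calibration numbers are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical calibration pairs: photodetector voltage (V) vs. reference
# glucose readings (mg/dL) from a commercial glucometer.
voltage = np.array([1.02, 1.10, 1.18, 1.25, 1.33])
glucose = np.array([82.0, 95.0, 110.0, 121.0, 135.0])

# Least-squares fit of glucose = slope * voltage + intercept
slope, intercept = np.polyfit(voltage, glucose, deg=1)

def estimate_glucose(v):
    return slope * v + intercept

print(round(estimate_glucose(1.20), 1))  # predicted concentration at 1.20 V
```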
Procedia PDF Downloads 98
1949 Rights, Differences and Inclusion: The Role of Transdisciplinary Approach in the Education for Diversity
Authors: Ana Campina, Maria Manuela Magalhaes, Eusebio André Machado, Cristina Costa-Lobo
Abstract:
Inclusive schooling advocates respect for differences, equal opportunities, and a quality education for all, including students with special educational needs. In the pursuit of educational equity, guaranteeing equality in access and results, it becomes the responsibility of the school to recognize students' needs, adapting to the various styles and rhythms of learning and ensuring the adequacy of curricula, strategies, and resources, both material and human. This paper presents a set of theoretical reflections at the disciplinary interface between the legal and education sciences and school administration and management, with the aim of understanding the real characteristics of inclusion in balance with inclusion policies and the need for an education for Human Rights, especially for diversity. Considering current social complexity alongside the important educational instruments and strategies, mostly embodied in policy, this paper aims to expose the existing contexts that run counter to the laws, policies, and needs of inclusive education. More than a single study, this research aims to develop a map of the reality and guidelines for implementing action. The results point to the usefulness and pertinence of a school in which educational managers, teachers, parents, and students are involved in the creation, implementation, and monitoring of curricula that are flexible and adapted to students' educational needs, promoting collaborative work among teachers. We are then faced with a scenario that points to the need to reflect on the legislation and curricular management of inclusive classes and to operationalize the processes of elaborating curricular adaptations and differentiation in the classroom. The transdisciplinary approach is well suited to pedagogic and social education, using the Human Rights binomial of teaching and learning, supported by inclusion laws and grounded in realistic needs, toward the construction of an effective, successful society.
Keywords: rights, transdisciplinary, inclusion policies, education for diversity
Procedia PDF Downloads 389
1948 An Improved Visible Range Absorption Spectroscopy on Soil Macronutrient
Authors: Suhaila Isaak, Yusmeeraz Yusof, Khairunnisa Mohd Yusof, Ahmad Safuan Abdul Rashid
Abstract:
Soil fertility is commonly evaluated from soil macronutrients such as nitrate, potassium, and phosphorus contents. Optical spectroscopy, an emerging technology that is rapid and simple, has been widely used in agriculture to measure soil fertility. In visible and near-infrared absorption spectroscopy, the absorbed light level is useful for soil macronutrient measurement, because the absorption of light in a soil sample determines the sensitivity of the measurement. This paper reports the performance of visible and near-infrared absorption spectroscopy in the 400–1400 nm wavelength range, using a light-emitting diode as the excitation light source, for predicting the soil macronutrient content of nitrate, potassium, and phosphorus. The experimental results show an improved linear regression analysis of various soil specimens based on the Beer–Lambert law: the sensitivity of the soil spectroscopy is determined by evaluating the absorption of characteristic peaks emitted from a light-emitting diode and detected by a high-sensitivity optical spectrometer. This points toward the development of a simple, low-cost soil spectroscopy system with light-emitting diodes for future implementation.
Keywords: macronutrients absorption, optical spectroscopy, soil, absorption
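The Beer-Lambert calibration at the heart of this approach can be written in a few lines: absorbance A = log10(I0/I) is regressed against known concentrations, and the slope of the calibration line gives the sensitivity. The sketch below uses invented LED intensity numbers purely for illustration.

```python
import numpy as np

def absorbance(incident, transmitted):
    # Beer-Lambert law: A = log10(I0 / I) = epsilon * c * l
    return np.log10(incident / transmitted)

# Hypothetical LED intensities through nitrate standards of known concentration
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])                 # mg/L
i0 = 1000.0
i = np.array([1000.0, 891.0, 794.0, 631.0, 398.0])            # detector counts
A = absorbance(i0, i)

# Sensitivity is the slope (epsilon * l) of the calibration line
slope, intercept = np.polyfit(conc, A, 1)
print(f"sensitivity = {slope:.4f} AU per mg/L")
```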
Procedia PDF Downloads 293
1947 Quadratic Convective Flow of a Micropolar Fluid in a Non-Darcy Porous Medium with Convective Boundary Condition
Authors: Ch. Ramreddy, P. Naveen, D. Srinivasacharya
Abstract:
The objective of the present study is to investigate the effect of nonlinear temperature and concentration variations on the mixed convective flow of a micropolar fluid over an inclined flat plate in a non-Darcy porous medium in the presence of a convective boundary condition. In order to analyze all the essential features, the transformed nonlinear conservation equations are solved numerically by a spectral method. Through a comparison of vertical, horizontal, and inclined plates, the physical quantities of the flow and its characteristics are exhibited graphically and quantitatively for various parameters. An increase in the coupling number and the angle of inclination tends to decrease the skin friction and mass transfer rate, with the reverse trend in the wall couple stress and heat transfer rate. The effect on the wall couple stress and skin friction is nominal, whereas a significant effect on the local heat and mass transfer rates is found for sufficiently high values of the Biot number.
Keywords: convective boundary condition, micropolar fluid, non-Darcy porous medium, non-linear convection, spectral method
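Spectral methods of the kind used here expand the solution in global basis functions and differentiate through dense collocation matrices. As a hedged illustration (the abstract does not state which spectral variant is used), the sketch below builds the standard Chebyshev differentiation matrix, after Trefethen, and verifies its spectral accuracy on a smooth function.

```python
import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix and nodes on [-1, 1]
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Chebyshev points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    dx = x[:, None] - x[None, :]
    d = np.outer(c, 1.0 / c) / (dx + np.eye(n + 1))   # off-diagonal entries
    d -= np.diag(d.sum(axis=1))                        # diagonal via row sums
    return d, x

# Spectral accuracy check: differentiate f(x) = exp(x), whose derivative is itself
D, x = cheb(16)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(f"max error = {err:.2e}")  # near machine precision with only 17 nodes
```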
Procedia PDF Downloads 279
1946 Detection of Intravenous Infiltration Using Impedance Parameters in Patients in a Long-Term Care Hospital
Authors: Ihn Sook Jeong, Eun Joo Lee, Jae Hyung Kim, Gun Ho Kim, Young Jun Hwang
Abstract:
This study investigated intravenous (IV) infiltration using bioelectrical impedance in 27 hospitalized patients in a long-term care hospital. Impedance parameters showed significant differences before and after infiltration, as follows. First, the resistance (R) after infiltration decreased significantly compared to the initial resistance, indicating that the IV solution leaking from the vein due to infiltration accumulates in the extracellular fluid (ECF). Second, the relative resistance at 50 kHz was 0.94 ± 0.07 in 9 subjects without infiltration and 0.75 ± 0.12 in 18 subjects with infiltration. Third, the magnitude of the reactance (Xc) decreased after infiltration. This is because the IV solution and blood components released from the vein tend to aggregate at the cell membrane (acting analogously to a parallel circuit), thereby increasing the capacitance (Cm) of the cell membrane and reducing the magnitude of the reactance. Finally, the data points plotted in the R-Xc graph were distributed in the upper right before infiltration but in the lower left after infiltration, indicating that infiltration caused an accumulation of fluid or blood components in the epidermal and subcutaneous tissues, reducing resistance and reactance and thereby lowering the integrity of the cell membrane. Our findings suggest that bioelectrical impedance is an effective method for detecting infiltration in a noninvasive and quantitative manner.
Keywords: intravenous infiltration, impedance, parameters, resistance, reactance
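A monitoring rule built on these findings could flag infiltration when the relative 50 kHz resistance drops toward the infiltrated group's mean. The threshold below is illustrative only, chosen to sit between the two reported group means (0.94 without infiltration, 0.75 with); it is not a value proposed by the study.

```python
def classify_infiltration(r_initial, r_current, threshold=0.85):
    # Flag possible IV infiltration from the drop in 50 kHz resistance.
    # threshold=0.85 is an assumed cut-off between the reported group means.
    relative_r = r_current / r_initial
    return relative_r, relative_r < threshold

rel, infiltrated = classify_infiltration(r_initial=520.0, r_current=390.0)
print(f"relative R = {rel:.2f}, infiltration suspected: {infiltrated}")
```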
Procedia PDF Downloads 182
1945 Using Power Flow Analysis for Understanding UPQC’s Behaviors
Authors: O. Abdelkhalek, A. Naimi, M. Rami, M. N. Tandjaoui, A. Kechich
Abstract:
This paper deals with the analysis of active and reactive power flow inside the unified power quality conditioner (UPQC) in several cases. The UPQC is a combination of a shunt and a series active power filter (APF) and is one of the best solutions for mitigating voltage sag and swell problems on a distribution network. The analysis provides helpful information for understanding the interaction between the series filter, the shunt filter, the DC bus link, and the electrical network. The mathematical analysis is based on the active and reactive power flow through the shunt and series active power filters. The series APF absorbs or delivers active power to mitigate a voltage swell or sag, absorbing a small quantity of reactive power in both cases, whereas the shunt APF absorbs or releases active power to stabilize the storage capacitor's voltage and to correct the power factor. Voltage sags and swells are usually interpreted through the DC bus voltage curves; here these two phenomena are given a new interpretation based on the active and reactive power flow analysis inside the UPQC. To simplify the study, a linear load is assumed in the digital simulation. The simulation results confirm the analysis.
Keywords: UPQC, power flow analysis, shunt filter, series filter
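The bookkeeping behind such an analysis is the complex power S = V·I*, whose real and imaginary parts are the active and reactive flows at each filter terminal. A minimal sketch follows; the phasor magnitudes and angles are invented for illustration, not taken from the paper's simulation.

```python
import numpy as np

def complex_power(v_rms, i_rms):
    # S = V * conj(I): real part is active power P, imaginary part is Q
    s = v_rms * np.conj(i_rms)
    return s.real, s.imag

# Hypothetical fundamental-frequency phasors at the series APF terminals
v = 230.0 * np.exp(1j * np.deg2rad(0.0))    # injected voltage, V
i = 12.0 * np.exp(1j * np.deg2rad(-25.0))   # line current, A

p, q = complex_power(v, i)
print(f"P = {p:.1f} W, Q = {q:.1f} var")    # signs show absorb vs. deliver
```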
Procedia PDF Downloads 572
1944 Mathematical Modelling of a Low Tip Speed Ratio Wind Turbine for System Design Evaluation
Authors: Amir Jalalian-Khakshour, T. N. Croft
Abstract:
Vertical-axis wind turbine (VAWT) systems are becoming increasingly popular, as they have a number of advantages over traditional wind turbines, including reliability and ease of transportation and manufacturing. These attributes could make the technology useful in developing economies. The performance characteristics of a VAWT differ from those of a horizontal-axis wind turbine, which can be attributed to its low tip speed ratio operation. To unlock the potential of these VAWT systems, their operational behaviour in a number of system topologies and environmental conditions needs to be understood. In this study, a non-linear dynamic simulation method was developed in Matlab and validated against field data from a large-scale prototype with an 8-meter rotor diameter. The simulation method was then used to determine the performance characteristics of a number of control methods and system topologies. The motivation for this research was to develop a simulation method that accurately captures the operating behaviour while remaining computationally inexpensive. The model was used to evaluate performance through parametric studies and optimisation techniques. The study gave useful insights into the applications and energy-generation potential of this technology.
Keywords: power generation, renewable energy, rotordynamics, wind energy
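The core of such a dynamic model is a torque balance on the rotor, J dω/dt = Q_aero - Q_load, integrated in time until the rotor settles. The paper's model is in Matlab; the Python sketch below shows the same structure, with the inertia, the toy power-coefficient curve, and the load torque all invented for illustration rather than taken from the prototype.

```python
import numpy as np
from scipy.integrate import solve_ivp

J = 1500.0                      # rotor inertia, kg m^2 (assumed)
rho, A, R = 1.225, 50.0, 4.0    # air density, swept area, radius (assumed)

def cp(tsr):
    # Toy power-coefficient curve peaking near a low tip speed ratio
    return max(0.0, 0.3 * np.exp(-((tsr - 1.8) ** 2) / 0.8))

def rhs(t, y, wind=8.0, q_load=200.0):
    omega = y[0]
    tsr = omega * R / wind
    power = 0.5 * rho * A * cp(tsr) * wind ** 3
    q_aero = power / max(omega, 1e-3)       # torque = power / speed
    return [(q_aero - q_load) / J]

sol = solve_ivp(rhs, (0.0, 300.0), [0.5], max_step=0.5)
print(f"steady rotor speed ~ {sol.y[0, -1]:.2f} rad/s")
```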
Procedia PDF Downloads 304
1943 Unsteady Heat and Mass Transfer in MHD Flow of Nanofluids over Stretching Sheet with a Non Uniform Heat Source/Sink
Authors: Bandari Shankar, Yohannes Yirga
Abstract:
In this paper, the problem of heat and mass transfer in unsteady MHD boundary-layer flow of nanofluids over a stretching sheet with a non-uniform heat source/sink is considered. The unsteadiness in the flow and temperature is caused by the time-dependent stretching velocity and surface temperature. The unsteady boundary-layer equations are transformed into a system of non-linear ordinary differential equations and solved numerically using the Keller box method. The velocity, temperature, and concentration profiles were obtained and used to compute the skin-friction coefficient, local Nusselt number, and local Sherwood number for different values of the governing parameters, viz. the solid volume fraction parameter, unsteadiness parameter, magnetic field parameter, Schmidt number, and the space- and temperature-dependent parameters of the heat source/sink. A comparison of the numerical results of the present study with previously published data revealed excellent agreement.
Keywords: unsteady, heat and mass transfer, magnetohydrodynamics, nanofluid, non-uniform heat source/sink, stretching sheet
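The similarity-transformation step, reducing the boundary-layer PDEs to ODEs in one variable, can be illustrated on the classical Blasius problem f''' + ½ f f'' = 0, a much simpler stand-in for the paper's coupled nanofluid equations; a generic collocation solver is used below, whereas the paper uses the Keller box method.

```python
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    # y = (f, f', f''); Blasius similarity equation f''' = -0.5 f f''
    f, fp, fpp = y
    return np.vstack([fp, fpp, -0.5 * f * fpp])

def bc(y0, yinf):
    # f(0) = 0, f'(0) = 0 (no slip), f'(inf) -> 1 (free stream)
    return np.array([y0[0], y0[1], yinf[1] - 1.0])

eta = np.linspace(0.0, 10.0, 100)
y_init = np.zeros((3, eta.size))
y_init[1] = eta / eta[-1]            # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y_init)

# f''(0) sets the skin-friction coefficient; ~0.332 for Blasius flow
print(f"f''(0) = {sol.y[2, 0]:.4f}")
```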
Procedia PDF Downloads 275
1942 A Machine Learning Approach for Detecting and Locating Hardware Trojans
Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He
Abstract:
The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware trojan detection method for large-scale circuits. Because HTs are additional redundant circuits that introduce changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based hardware trojan detection method built on the physical characteristics of gate-level netlists. The method transforms hardware trojan detection into a machine-learning binary classification problem over physical characteristics, greatly improving detection speed. To address the imbalance between trojan and trojan-free circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate on benchmark circuits from Trust-Hub, and all achieved good results. In case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness in detecting variant HTs, we designed variant HTs from open-source HTs. The proposed method achieves robust detection accuracy with millisecond-level detection times for IC and FPGA design flows, and performs well on library variant HTs.
Keywords: hardware trojans, physical properties, machine learning, hardware security
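The rebalancing-plus-classification pipeline can be sketched directly with imbalanced-learn and scikit-learn. The features below are random stand-ins for the gate-level physical characteristics, and only the training split is resampled, which is the standard way to apply SMOTETomek.

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical gate-level features (e.g., fan-in, switching-activity and
# area proxies) with a heavy class imbalance between net classes.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 8))
y = (rng.random(2000) < 0.05).astype(int)   # ~5% minority class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance the training set only, then fit the classifier
X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```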
Procedia PDF Downloads 147
1941 Theoretical Method for Full Ab-Initio Calculation of Rhenium Carbide Compound
Abstract:
First-principles calculations are carried out to investigate the structural, electronic, and elastic properties of an ultraincompressible material, the noble-metal carbide rhenium carbide (ReC), in four phases: rocksalt (NaCl, B1), zinc blende (ZB, B3), tungsten carbide (WC, Bh), and nickel arsenide (NiAs, B8). Ground-state properties such as the equilibrium lattice constant, elastic constants, bulk modulus and its pressure derivative, and the hardness of ReC in these phases are systematically predicted from first principles. The calculated bulk modulus is comparable with that of diamond; in particular, for the B8-type rhenium carbide the incompressibility along the c axis is demonstrated to exceed the linear incompressibility of diamond. Our calculations confirm that ReC is stable in the nickel arsenide (B8) structure with a large bulk modulus B = 440 GPa, and that the tungsten carbide (WC) structure is more favourable than the B3 and B1 structures, ReC-WC being metastable. Furthermore, the bulk moduli in the zinc blende (B3), rocksalt (B1), tungsten carbide (WC), and nickel arsenide (B8) structures (294 GPa, 401 GPa, 415 GPa, and 447 GPa, respectively) indicate that ReC is a hard material, and it is a superhard compound with H(B8) = 36 GPa, compared with H(diamond) = 96 GPa and H(c-BN) = 63.10 GPa.
Keywords: DFT, FP-LMTO, mechanical properties, elasticity, high pressure, thermodynamic properties, hard material
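Bulk moduli like these are typically extracted by fitting total energies computed on a volume grid to the third-order Birch-Murnaghan equation of state. The sketch below does this with SciPy on synthetic energies; the volumes, energies, and fitted values are invented for illustration, chosen only so the resulting B0 lands in the ~450 GPa range discussed above.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(v, e0, v0, b0, b0p):
    # Third-order Birch-Murnaghan equation of state E(V)
    x = (v0 / v) ** (2.0 / 3.0)
    return e0 + 9.0 * v0 * b0 / 16.0 * (
        (x - 1.0) ** 3 * b0p + (x - 1.0) ** 2 * (6.0 - 4.0 * x))

# Hypothetical total energies (eV) on a volume grid (A^3) from a DFT scan
v = np.linspace(14.0, 20.0, 9)
e = birch_murnaghan(v, -20.0, 17.0, 2.8, 4.5) \
    + 0.001 * np.random.default_rng(1).normal(size=v.size)

popt, _ = curve_fit(birch_murnaghan, v, e, p0=[e.min(), v.mean(), 2.0, 4.0])
e0, v0, b0, b0p = popt
print(f"V0 = {v0:.2f} A^3, B0 = {b0 * 160.2177:.0f} GPa")  # eV/A^3 -> GPa
```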
Procedia PDF Downloads 441
1940 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling
Authors: Negar Riazifar, Nigel G. Stocks
Abstract:
This paper proposes strategies for level crossing (LC) sampling and reconstruction that provide high-fidelity signal reconstruction for speech signals; these strategies circumvent the problem of an exponentially increasing number of samples as the bit depth is increased and are hence highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction: including samples with short intervals does not improve the accuracy of the signal reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances numerical stability through linear interpolation of samples separated by large intervals. Interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
Keywords: level crossing sampling, numerical stability, speech processing, trigonometric polynomial
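Conventional LC sampling records a sample whenever the signal crosses one of a fixed set of amplitude levels, and reconstruction then interpolates between the recorded crossings. The sketch below shows this baseline (not the proposed RCLC variant) on a synthetic two-tone signal, with linear interpolation as in the paper's stabilized reconstruction; the level grid and test signal are assumptions for illustration.

```python
import numpy as np

def level_crossing_sample(t, x, levels):
    # Record a sample whenever the signal crosses one of the given levels
    ts, xs = [t[0]], [x[0]]
    for k in range(1, len(x)):
        for lv in levels:
            if (x[k - 1] - lv) * (x[k] - lv) < 0:   # sign change => crossing
                frac = (lv - x[k - 1]) / (x[k] - x[k - 1])
                ts.append(t[k - 1] + frac * (t[k] - t[k - 1]))
                xs.append(lv)
    order = np.argsort(ts)
    return np.asarray(ts)[order], np.asarray(xs)[order]

t = np.linspace(0.0, 1.0, 4000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
levels = np.linspace(-1.4, 1.4, 15)                 # ~4-bit level grid

ts, xs = level_crossing_sample(t, x, levels)
x_hat = np.interp(t, ts, xs)                        # linear interpolation
rms = np.sqrt(np.mean((x - x_hat) ** 2))
print(f"{len(ts)} samples, RMS error = {rms:.3f}")
```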
Procedia PDF Downloads 146
1939 Image Enhancement of Histological Slides by Using Nonlinear Transfer Function
Authors: D. Suman, B. Nikitha, J. Sarvani, V. Archana
Abstract:
Histological slides have provided clinical diagnostic information since ancient times. Even with the advent of high-resolution imaging cameras, the images tend to contain background noise, which makes analysis complex; images that appear blurred do not convey the intended information. In this study, histological slides are processed with an image enhancement method based on a nonlinear transfer function. The method processes raw color images acquired from a biological microscope, which are generally affected by background noise. The enhancement method was implemented on 50 histological slides of human tissue. Each histological image is converted into an HSV color image, and only the luminance (V) component is enhanced, because changes in the H and S components could alter the color balance between the HSV components. The HSV image is divided into smaller blocks for dynamic range compression using a linear transformation function: each pixel in a block is enhanced based on the contrast of the center pixel and its neighborhood. After the V component has been processed, the HSV image is transformed back into a color image. The study showed an improvement in image characteristics, bringing out the significant details of the histological images.
Keywords: HSV space, histology, enhancement, image
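A simplified version of this pipeline (convert to HSV, stretch the V channel block by block, convert back) can be sketched with OpenCV. The tile-wise linear stretch below is a coarse stand-in for the paper's neighborhood-contrast transfer function, and the file names are placeholders.

```python
import numpy as np
import cv2

def enhance_histology(bgr, block=64):
    # Block-wise linear stretch of the V channel; H and S left untouched
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[:, :, 2]
    out = v.copy()
    for r in range(0, v.shape[0], block):
        for c in range(0, v.shape[1], block):
            tile = v[r:r + block, c:c + block]
            lo, hi = tile.min(), tile.max()
            if hi > lo:                      # skip flat tiles (divide-by-zero)
                out[r:r + block, c:c + block] = (tile - lo) / (hi - lo) * 255.0
    hsv[:, :, 2] = out
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

img = cv2.imread("slide.png")                # placeholder slide image
assert img is not None, "image not found"
cv2.imwrite("slide_enhanced.png", enhance_histology(img))
```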
Procedia PDF Downloads 329
1938 Assessing the Financial Impact of Federal Benefit Program Enrollment on Low-income Households
Authors: Timothy Scheinert, Eliza Wright
Abstract:
Background: Link Health is a Boston-based non-profit leveraging in-person and digital platforms to promote health equity. Its primary aim is to financially support low-income individuals through enrollment in federal benefit programs. This study examines the monetary impact of enrollment in several benefit programs. Methodologies: Approximately 17,000 individuals have been screened for eligibility via digital outreach, community events, and in-person clinics. Enrollment and financial distributions are evaluated across programs, including the Affordable Connectivity Program (ACP), Lifeline, LIHEAP, Transitional Aid to Families with Dependent Children (TAFDC), and the Supplemental Nutrition Assistance Program (SNAP). Major Findings: A total of 1,895 individuals have successfully applied, collectively distributing an estimated $1,288,152.00 in aid. The largest contributors to this sum include: ACP: 1,149 enrollments, $413,640 distributed annually. Child Care Financial Assistance (CCFA): 15 enrollments, $240,000 distributed annually. Lifeline: 602 enrollments, $66,822 distributed annually. LIHEAP: 25 enrollments, $48,750 distributed annually. SNAP: 41 enrollments, $123,000 distributed annually. TAFDC: 21 enrollments, $341,760 distributed annually. Conclusions: These results highlight the role of targeted outreach and effective enrollment processes in promoting access to federal benefit programs. High enrollment rates in ACP and Lifeline demonstrate a considerable need for affordable broadband and internet services. Programs like CCFA and TAFDC, despite lower enrollment numbers, provide sizable support per individual. This analysis advocates for continued funding of federal benefit programs. Future efforts can be made to develop screening tools that identify eligibility for multiple programs and reduce the complexity of enrollment.
Keywords: benefits, childcare, connectivity, equity, nutrition
Procedia PDF Downloads 27
1937 Decision Making in Medicine and Treatment Strategies
Authors: Kamran Yazdanbakhsh, Somayeh Mahmoudi
Abstract:
Three reasons argue for the use of decision theory in medicine: 1. The growth and complexity of medical knowledge make it difficult to process treatment information effectively without resorting to sophisticated analytical methods, especially when it comes to detecting errors and identifying treatment opportunities in large databases. 2. There is wide geographic variability in medical practice. In a context where medical costs are borne, at least in part, by the patient, this variability raises doubts about the relevance of the choices made by physicians. The differences are generally attributed to differing estimates of the probability that a treatment will succeed and differing assessments of the outcomes of success or failure. Without explicit decision criteria, it is difficult to identify precisely the sources of these variations in treatment. 3. Beyond the principle of informed consent, patients need to be involved in decision-making; for this, the decision process should be explained and broken down. A decision problem is to select the best option among a set of choices. The question is what is meant by 'best option', that is, which criteria guide the choice; the purpose of decision theory is to answer this question. The systematic use of decision models allows us to better understand differences in medical practice and facilitates the search for consensus. Three types of situations arise: certain, risky, and uncertain. 1. In certain situations, the consequence of each decision is certain. 2. In risky situations, each decision can have several consequences, and the probability of each consequence is known. 3. In uncertain situations, each decision can have several consequences whose probabilities are not known. Our aim in this article is to show how decision theory can usefully be mobilized to meet the needs of physicians. Decision theory can make decisions more transparent: first, by systematically clarifying the data considered in the problem, and second, by positing a few basic principles that should guide the choice. Once the problem is clarified, decision theory provides operational tools to represent the available information and determine patient preferences, and thus to assist the patient and doctor in their choices.
Keywords: decision making, medicine, treatment strategies, patient
Procedia PDF Downloads 579
1936 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat to multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identifying fake news across multiple domains. To address this, approaches based on domain labels have been proposed: by assigning news items to their specific domains in advance, domain-specific classifiers can judge fake news more accurately. However, these approaches disregard the fact that news items can pertain to multiple domains, resulting in a significant loss of valuable information, and they require all training datasets to be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to retain the multi-domain knowledge of the text effectively, a low-dimensional domain-embedding vector is generated for each news item. Subsequently, a feature extraction module that uses the unsupervisedly discovered domain embeddings extracts comprehensive features of the news. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests were conducted on existing, widely used datasets, and the experimental results demonstrate that the method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still transfer domain knowledge effectively, which reduces the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
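One simple way to realize an unsupervised, soft multi-domain embedding, offered here as a stand-in since the abstract does not specify its embedding module, is to factor a TF-IDF matrix with NMF and use the topic weights as the low-dimensional domain vector for each news item.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Toy corpus; real inputs would be full news articles without domain labels
news = [
    "vaccine trial shows strong immune response in patients",
    "central bank raises interest rates to curb inflation",
    "senator unveils new election campaign platform",
    "hospital reports flu outbreak among elderly patients",
]

# Unsupervised "domain" embedding: topic weights from NMF over TF-IDF
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(news)
domain_embeddings = NMF(n_components=3, random_state=0).fit_transform(X)

# Each row is a soft, multi-domain representation of one news item,
# ready to be concatenated with text features before classification
print(domain_embeddings.round(3))
```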
Procedia PDF Downloads 97
1935 Modelling Water Usage for Farming
Authors: Ozgu Turgut
Abstract:
Water scarcity is a problem for many regions and requires immediate action; solutions cannot be postponed for long. It is known that farming consumes a significant portion of usable water. In recent years, efforts to transition from surface irrigation to drip or sprinkler systems have started to pay off, but it is also known that this transition does not necessarily translate into increased capacity for other water-consumption channels such as city water or power generation. In order to control and allocate the water resource more purposefully, new irrigation systems have to be deployed with monitoring abilities that can limit the usage capacity of each farm. In this study, a decision support model is proposed that relies on bi-objective stochastic linear optimization and takes crop yield and price volatility into account. The model generates annual planting plans as well as water usage limits for each farmer in the region, while accounting for the total value (i.e., profit) of the overall harvest. The mathematical model is solved to optimality using the L-shaped method. The decision support model can be especially useful for regional administrations in planning the next year's planting and the water-related incomes and expenses. That is why not only a single optimum but also a set of representative solutions from the Pareto set is generated with the proposed approach.
Keywords: decision support, farming, water, tactical planning, optimization, stochastic, Pareto
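Stripped of the L-shaped decomposition and the second objective, the core of such a model is a scenario-weighted planting LP. The sketch below maximizes expected profit for two crops under land and water limits with scipy.optimize.linprog; all coefficients are invented for illustration and the planting decision is made before the scenario is revealed.

```python
import numpy as np
from scipy.optimize import linprog

# Two crops, three equally likely price/yield scenarios (illustrative)
profit = np.array([[400.0, 300.0],    # profit per hectare, scenario 1
                   [250.0, 350.0],    # scenario 2
                   [100.0, 200.0]])   # scenario 3
p = np.array([1/3, 1/3, 1/3])
expected = p @ profit                 # expected profit per hectare per crop

water = np.array([5.0, 2.0])          # megalitres per hectare
land_limit, water_limit = 100.0, 300.0

# Maximize expected profit (linprog minimizes, hence the sign flip)
res = linprog(c=-expected,
              A_ub=np.vstack([[1.0, 1.0], water]),
              b_ub=[land_limit, water_limit],
              bounds=[(0, None), (0, None)])
print(f"hectares per crop = {res.x.round(1)}, expected profit = {-res.fun:.0f}")
```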
Procedia PDF Downloads 74
1934 Efficient Compact Micro Dielectric Barrier Discharge (DBD) Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems
Authors: D. Kuvshinov, A. Siswanto, J. Lozano-Parada, W. Zimmerman
Abstract:
Ozone is well known as a powerful oxidant with fast reaction rates. Ozone-based processes leave no by-products, as unreacted ozone reverts to the original oxygen molecule; the application of ozone is therefore widely accepted as one of the main directions for developing sustainable, clean technologies. A number of technologies require ozone to be delivered to specific points of a production network or reactor construction. Owing to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of mini/micro-scale ozone generators that can be incorporated directly into production units. Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and on indigo decomposition at different operating conditions are presented. At selected operating conditions, with a residence time of 0.25 s, the ozone generation process is not limited by reaction rate, and the amount of ozone produced is a function of the power applied. The MROG is capable of producing ozone at voltages starting from 3.5 kV, reaching an ozone concentration of 5.28E-6 mol/L at 5 kV, in line with the numerical investigation of the MROG. Compared to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure. The MROG construction makes it applicable to submerged and dry systems; with its robust, compact design, the MROG can be incorporated as a unit in production lines of high complexity.
Keywords: dielectric barrier discharge (DBD), micro reactor, ozone, plasma
Procedia PDF Downloads 338
1933 Interrogating Bishwas: Reimagining a Christian Neighbourhood in Kolkata, India
Authors: Abhijit Dasgupta
Abstract:
This paper explores the everyday lives of Christians residing in a Bengali Christian neighbourhood in Kolkata, termed here the larger Christian para (para meaning neighbourhood in Bengali). Through ethnography and a reading of secondary sources, the paper discerns how Christians across denominations (Protestants, Catholics, and Pentecostals) invoke the role of bishwas (faith and belief) in their interpersonal neighbourhood relations. The paper attempts to capture the role of bishwas in producing, transforming, and revising the meaning of 'neighbourhood' and 'neighbours', and puts forward the argument that the neighbourhood is a theological product. By interrogating and interpreting bishwas through everyday theological discussions and reflections, the paper examines the ways in which everyday theology becomes an essential source of power and knowledge for Bengali Christians in reimagining their neighbourhood in contrast with the nearby Hindu neighbourhoods. Borrowing from the literature on everyday theology, faith, and belief, the paper reads and analyses various interpretations of theological knowledge across denominations to probe the prominence of bishwas within the Christian community and its role in creating a difference in their place of dwelling. The paper argues that the meaning of the neighbourhood is revisited through prayers, sermons, and biblical verses. At the same time, divisions and fissures appear between Protestants and Catholics, and also between native Bengali Protestants and non-native Protestant pastors, which informs us about the complexity of theology in constituting everyday life. The paper thus addresses theology's role in creating an ethical Christian neighbourhood amidst the everyday tensions and hostilities of diverse religious persuasions, while also looking into the processes through which multiple forms of theological knowledge lead to schism and interdenominational hostilities. By attempting to answer these questions, the paper brings out Christians' negotiation with the neighbourhood.
Keywords: anthropology, bishwas, christianity, neighbourhood, theology
Procedia PDF Downloads 87
1932 Characterization of Chest Pain in Patients Consulting to the Emergency Department of a Health Institution High Level of Complexity during 2014-2015, Medellin, Colombia
Authors: Jorge Iván Bañol-Betancur, Lina María Martínez-Sánchez, María de los Ángeles Rodríguez-Gázquez, Estefanía Bahamonde-Olaya, Ana María Gutiérrez-Tamayo, Laura Isabel Jaramillo-Jaramillo, Camilo Ruiz-Mejía, Natalia Morales-Quintero
Abstract:
Acute chest pain is a distressing sensation between the diaphragm and the base of the neck, and it represents a diagnostic challenge for any physician in the emergency department. Objective: To establish the main clinical and epidemiological characteristics of patients presenting with chest pain to the emergency department of a private clinic in the city of Medellin during 2014-2015. Methods: Cross-sectional retrospective observational study. The population and sample were patients who consulted for chest pain in the emergency department and met the eligibility criteria. The information was analyzed in SPSS v21; qualitative variables were described through relative frequencies, and quantitative variables through means and standard deviations or medians, according to their distribution in the study population. Results: A total of 231 patients were evaluated; the mean age was 49.5 ± 19.9 years, and 56.7% were female. The most frequent pathological antecedents were hypertension (35.5%), diabetes (10.8%), dyslipidemia (10.4%), and coronary disease (5.2%). Regarding pain features, in 40.3% of the patients the pain began abruptly, in 38.2% it had a precordial location, in 20% of cases physical activity acted as a trigger, and in 60.6% it was oppressive. Costochondritis was the most common cause of chest pain among patients with an established etiologic diagnosis, representing 18.2%. Conclusions: Although the reported features of the pain coincide with the clinical presentation of an acute coronary syndrome, the most common cause of chest pain in the study population was instead costochondritis, indicating that it is a differential diagnosis in the approach to patients with acute chest pain.
Keywords: acute coronary syndrome, chest pain, epidemiology, osteochondritis
Procedia PDF Downloads 343
1931 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational alternatives. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, a complexity illustrated by the Levinthal paradox. An alternative is the prediction of intermediary structures, such as the protein's secondary structure. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for this task. Recently published methods that use this technique have, in general, achieved a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for secondary structure prediction is 88%. To achieve better results, prediction methods based on support vector machines have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets and validation techniques, as well as other variables, can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method follows the same training and testing process used by Huang to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the statistical results of the two methods can be compared directly.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
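A like-for-like comparison of this kind, with identical folds for both models, can be set up in a few lines with scikit-learn. The windowed features below are random placeholders for an actual CB513 encoding; only the evaluation scaffold (shared three-fold cross-validation for ANN and SVM) is the point.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, KFold
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in data: sliding-window features over residues with H/E/C labels
# (0/1/2); real experiments would encode CB513 windows instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 17 * 20))     # 17-residue window, 20-dim encoding
y = rng.integers(0, 3, size=600)

cv = KFold(n_splits=3, shuffle=True, random_state=0)   # shared 3-fold CV
ann = MLPClassifier(hidden_layer_sizes=(75,), max_iter=500, random_state=0)
svm = SVC(kernel="rbf", C=1.0, gamma="scale")

for name, model in [("ANN", ann), ("SVM", svm)]:
    q3 = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: Q3 = {q3.mean():.3f} +/- {q3.std():.3f}")
```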
Procedia PDF Downloads 621
1930 Comparison of Support Vector Machines and Artificial Neural Network Classifiers in Characterizing Threatened Tree Species Using Eight Bands of WorldView-2 Imagery in Dukuduku Landscape, South Africa
Authors: Galal Omer, Onisimo Mutanga, Elfatih M. Abdel-Rahman, Elhadi Adam
Abstract:
Threatened tree species (TTS) play a significant role in ecosystem functioning and services, land use dynamics, and other socio-economic aspects, including ecological, economic, livelihood, security-based, and well-being benefits. The development of techniques for mapping and monitoring TTS is thus critical for understanding the functioning of ecosystems. The advent of advanced imaging systems and supervised learning algorithms has provided an opportunity to classify TTS over fragmenting landscapes. Recently, vegetation maps have been produced using advanced imaging systems such as WorldView-2 (WV-2) and robust classification algorithms such as support vector machines (SVM) and artificial neural networks (ANN). However, delineation of TTS in a fragmenting landscape using high-resolution imagery has largely remained elusive due to the complexity of the species' structure and distribution. Therefore, the objective of the current study was to examine the utility of advanced WV-2 data for mapping TTS in the fragmenting Dukuduku indigenous forest of South Africa using the SVM and ANN classification algorithms. The results showed the robustness of the two machine learning algorithms, with an overall accuracy (OA) of 77.00% (total disagreement = 23.00%) for SVM and 75.00% (total disagreement = 25.00%) for ANN using all eight bands of WV-2 (8B). This study concludes that the SVM and ANN classification algorithms with WV-2 8B have the potential to classify TTS in the Dukuduku indigenous forest. The study offers relatively accurate information that is important for forest managers making informed decisions about management and conservation protocols for TTS.
Keywords: artificial neural network, threatened tree species, indigenous forest, support vector machines
Procedia PDF Downloads 515
1929 Meet Automotive Software Safety and Security Standards Expectations More Quickly
Authors: Jean-François Pouilly
Abstract:
This study addresses the growing complexity of embedded systems and the critical need for secure, reliable software. Traditional cybersecurity testing methods, often conducted late in the development cycle, struggle to keep pace. This talk explores how formal methods, integrated with advanced analysis tools, empower C/C++ developers to: 1) proactively address vulnerabilities and bugs, using formal methods and abstract interpretation techniques to identify potential weaknesses early in the development process and thereby reduce the reliance on penetration and fuzz testing in later stages; 2) streamline development by focusing on the bugs that matter, with close to no false positives; by catching flaws earlier, the need for rework and retesting is minimized, leading to faster development cycles, improved efficiency, and cost savings; 3) enhance software dependability, since combining static analysis by abstract interpretation, with full context sensitivity and hardware memory awareness, allows a more comprehensive understanding of potential vulnerabilities, leading to more dependable and secure software. This approach aligns with industry best practices (ISO 26262 and ISO 21434) and empowers C/C++ developers to deliver robust, secure embedded systems that meet the demands of today's and tomorrow's applications. We illustrate the approach with the TrustInSoft analyzer, showing how it accelerates verification for complex cases, reduces user fatigue, and improves developer efficiency, cost-effectiveness, and software cybersecurity. In summary, integrating formal methods and sound analyzers enhances software reliability and cybersecurity, streamlining development in an increasingly complex environment.
Keywords: safety, cybersecurity, ISO 26262, ISO 21434, formal methods
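Abstract interpretation, the technique named here, evaluates a program over an abstract domain instead of concrete values. The self-contained toy example below uses an interval domain to prove that an index expression stays within a buffer bound for every possible input, which is the kind of guarantee such analyzers provide; a real analyzer's domains are far richer than this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    # Interval abstract domain: every concrete value lies in [lo, hi]
    lo: int
    hi: int

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        prods = [a * b for a in (self.lo, self.hi)
                 for b in (other.lo, other.hi)]
        return Interval(min(prods), max(prods))

# Abstractly evaluate idx = base + offset * 4 over all possible inputs
base = Interval(0, 3)
offset = Interval(0, 15)
idx = base + offset * Interval(4, 4)

BUF_LEN = 64
print(idx)                    # Interval(lo=0, hi=63)
print(idx.hi < BUF_LEN)       # True: no out-of-bounds access, proved
```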
Procedia PDF Downloads 19
1928 Narrative Constructs and Environmental Engagement: A Textual Analysis of Climate Fiction’s Role in Shaping Sustainability Consciousness
Authors: Dean J. Hill
Abstract:
This paper undertakes an in-depth textual analysis of the climate fiction (cli-fi) genre, examining how writing in the genre expresses and facilitates the articulation of environmental consciousness through narrative form. The paper begins by situating cli-fi within the literary continuum of ecological narratives and identifying the unique textual characteristics and thematic preoccupations of the genre. It then shows how cli-fi transforms the esoteric nature of climate science into credible narrative forms through language use, metaphorical constructs, and narrative framing, and examines how descriptive and figurative language in depictions of nature and disaster makes climate change vivid and emotionally resonant. The work also points out the dialogic nature of cli-fi, whereby characters and narrators experience inner disputes over the ethical dilemma of environmental destruction, demanding that readers challenge and re-evaluate their standpoints on sustainability and ecological responsibility. The paper proceeds to analyse narrative voice and its role in eliciting empathy and reader involvement with the ecological material; by looking at how different narratorial perspectives shape the reader's emotional and cognitive reaction to the text, the study demonstrates the power of perspective in developing intimacy with the genre's dominating concerns. Finally, the emotional arc of cli-fi narratives, running through themes of loss, hope, and resilience, is analysed in relation to how these elements marshal public feeling and discourse into action on climate change. The textual complexity of cli-fi, then, not only shows the hard edge of the reality of climate change but also influences public perception and behaviour toward a more sustainable future.
Keywords: cli-fi genre, ecological narratives, emotional arc, narrative voice, public perception
Procedia PDF Downloads 31
1927 Thyroid Hormones and Thyrotropin Status in Nepalese Postmenopausal Women
Authors: S. A. Khan, B. Mishra, O. Sherchan
Abstract:
Background and Aims: Thyroid disorder is the most common endocrine disorder after diabetes mellitus. Females are more vulnerable to the disease, and old age is an important risk factor. This study was undertaken to investigate the burden of thyroid disorder in Nepalese postmenopausal women. Methods: In the present cross-sectional study, we included 271 postmenopausal women. Three ml of blood was collected following standard protocol after obtaining written consent. Serum was separated and analyzed for free T3, free T4, and thyroid stimulating hormone (TSH) by the chemiluminescence immunoassay (CLIA) method on a Snibe Maglumi 1000 analyzer. The data obtained were analyzed in SPSS version 21, with P < 0.05 set as statistically significant at a 95% confidence interval (CI). Results: The majority of the participants belonged to the Janjati ethnicity (46.5%), followed by Brahmin/Chhetri (41.7%), residing in urban or suburban localities. Most were non-vegetarian, non-smokers, and non-alcoholic. Subjects were classified as hyperthyroid (TSH < 0.3 uIU/ml), hypothyroid (TSH > 4.5 uIU/ml), or euthyroid (TSH = 0.3-4.5 uIU/ml) based on the TSH value. We found 10.3% hyperthyroid and 29.2% hypothyroid cases. TSH was significantly correlated with T3 (r = -0.244; p < 0.001), T4 (r = -0.398; p < 0.001), age (r = -0.138; p = 0.023), and BMI (r = 0.123; p = 0.043). A multiple linear regression model for TSH revealed that only T3 and T4 were significantly associated with TSH (p < 0.001; p = 0.001). Conclusion: Nearly 39.5% of the postmenopausal women had a thyroid disorder. Postmenopausal women are vulnerable to thyroid disorders and therefore require regular thyroid monitoring.
Keywords: thyroid stimulating hormone, TSH, T3, T4, thyroid disorder
Procedia PDF Downloads 131
1926 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Prominent scholars and a substantial practitioner-oriented literature on innovation orientation have shown positive effects of innovation on firm performance. A myriad of factors influence and enhance innovation, but the literature finds that new product innovations account for an average of 14 percent of sales revenue across firms. If one thing has changed in innovation management during the last decade, it is the growing reliance on external partners. As a consequence, a new task for purchasing arises: firms need to understand which suppliers actually have high potential to contribute to the firm's innovativeness and which do not. The purchasing function in an organization is extremely important, as it handles on average 50% or more of a firm's expenditures. In the nineties, the purchasing department was largely seen as a transaction-oriented, clerical function, but today purchasing integration provides a formal interface mechanism between purchasing and the other functions it serves within the company. The purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that one partner will take advantage of the other party), technological risks stemming from the complexity of products, manufacturing processes, and incoming materials, and finally market risks, which ultimately determine the value of the innovation. These risks are investigated in this work, since the literature finds that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. Most research on the organization of the purchasing function has been conducted through case studies of innovative firms. This work sets out to confirm or discard results found in the case-study literature. A large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey, served as the basis for the analysis.
Keywords: purchasing function organization, innovation, technological risk, GMRG 4 survey
Procedia PDF Downloads 482
1925 Structural Optimization of Shell and Arched Structures
Authors: Mitchell Gohnert, Ryan Bradley
Abstract:
This paper reviews some fundamental concepts of structural optimization, which are based on the type of materials used in construction and the shape of the structure. The first step in structural optimization is to break down all internal forces in a structure into fundamental stresses, namely tensions and compressions. Knowing the stress patterns directs our selection of structural shapes and the most appropriate type of construction material. In selecting materials, it is essential to understand that all construction materials have flaws, or micro-cracks, which reduce the capacity of the material, especially when it is subjected to tension. Because of these defects, many construction materials perform significantly better under compressive forces. Structures are also more efficient if bending moments are eliminated: bending produces high peak stresses at each face of the member, and therefore substantially more material is required to resist it, as the worked example below illustrates. The shape of the structure also has a profound effect on stress levels; stress may be reduced dramatically simply by changing the shape. Catenary, triangular, and linear shapes are the fundamental structural forms for achieving optimal stress flow: if the natural flow of stress matches the shape of the structure, the most optimal shape has been found.
Keywords: arches, economy of stresses, material strength, optimization, shells
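To make the bending penalty concrete, compare the peak stress from the same 10 kN load carried axially (sigma = N/A) and in bending (sigma = M c / I) on one illustrative cross-section; all numbers below are invented purely for the comparison.

```python
# Same 10 kN load carried axially vs. as a mid-span point load in bending,
# on a 50 mm x 50 mm square section over a 1 m simply supported span.
F = 10_000.0            # N
b = h = 0.05            # m
A = b * h               # cross-sectional area
I = b * h ** 3 / 12.0   # second moment of area
L = 1.0                 # m

sigma_axial = F / A                     # uniform stress
M = F * L / 4.0                         # peak moment at mid-span
sigma_bending = M * (h / 2.0) / I       # peak fibre stress, M*c/I

print(f"axial:   {sigma_axial / 1e6:.1f} MPa")
print(f"bending: {sigma_bending / 1e6:.1f} MPa")  # ~30x higher peak stress
```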
Procedia PDF Downloads 116
1924 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), studied across types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power; importantly, the same subset is used across all training and testing for each model, so that performance on unseen data can be compared across all models. The best CNN (AlexNet), with the appropriate loss function and optimizer, achieves a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
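A sweep over loss/optimizer pairs of the kind described can be scripted with Keras, sketched below on a small stand-in CNN (the paper's actual networks are AlexNet, VGGNet, and ResNet). Only built-in losses are used: center loss is not a Keras built-in and would need a custom layer, and hinge loss expects -1/1 labels rather than 0/1, so label handling would differ per loss. The training datasets are placeholders.

```python
import tensorflow as tf

def build_cnn(input_shape=(96, 96, 1)):
    # Small CNN stand-in for the AlexNet/VGGNet/ResNet variants in the paper
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # live vs. spoof
    ])

losses = ["binary_crossentropy", "hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

models = {}
for loss in losses:
    for opt in optimizers:
        model = build_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        # model.fit(train_ds, validation_data=val_ds, epochs=10)  # placeholder data
        models[(loss, opt)] = model  # later: evaluate on held-out spoof data
```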
Procedia PDF Downloads 136
1923 Robustness Analysis of the Carbon and Nitrogen Co-Metabolism Model of Mucor mucedo
Authors: Nahid Banihashemi
Abstract:
An important emerging area of the life sciences is systems biology, which involves understanding the integrated behavior of large numbers of components interacting via non-linear reaction terms. A centrally important problem in this area is understanding the co-metabolism of protein and carbohydrate, as it has been clearly demonstrated that the ratio of these metabolites in the diet is a major determinant of obesity and related chronic disease. In this regard, we have considered a systems biology model for the co-metabolism of carbon and nitrogen in colonies of the fungus Mucor mucedo. Oscillations are an important diagnostic of the underlying dynamical processes of this model, and the maintenance of specific patterns of oscillation, together with its relation to the robustness of the system, is the central issue targeted in this paper. Parametric sensitivity analysis is used as the theoretical approach for analysing the robustness of the model. As a result, the parameters of the model that produce the largest sensitivities have been identified, and the largest changes that can be made to each parameter without losing the oscillations in biomass production have been computed. The results are obtained from an implementation of parametric sensitivity analysis in Matlab.
Keywords: systems biology, parametric sensitivity analysis, robustness, carbon and nitrogen co-metabolism, Mucor mucedo
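The sensitivity computation itself is straightforward: perturb one parameter, re-integrate the ODEs, and difference a feature of the oscillation. The sketch below (in Python; the paper's implementation is in Matlab) uses a van der Pol oscillator as a stand-in for the carbon/nitrogen model, whose equations are not given here; the finite-difference recipe carries over unchanged.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, mu):
    # Van der Pol oscillator: a generic self-sustained oscillation stand-in
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

def amplitude(mu):
    sol = solve_ivp(rhs, (0, 100), [1.0, 0.0], args=(mu,),
                    t_eval=np.linspace(50, 100, 2000))  # skip the transient
    return sol.y[0].max()                                # oscillation amplitude

mu0, h = 1.0, 1e-3
# Normalized parametric sensitivity (mu/A) * dA/dmu by central differences
dA = (amplitude(mu0 + h) - amplitude(mu0 - h)) / (2 * h)
print(f"normalized sensitivity = {mu0 * dA / amplitude(mu0):.3f}")
```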
Procedia PDF Downloads 328