Search results for: robust scheduling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1762

562 Estimation of Atmospheric Parameters for Weather Study and Forecast over Equatorial Regions Using the Ground-Based Global Positioning System

Authors: Asmamaw Yehun, Tsegaye Kassa, Addisu Hunegnaw, Martin Vermeer

Abstract:

There are various models for estimating neutral atmospheric parameter values, such as in-situ measurements and reanalysis datasets from numerical models. Accurately estimated atmospheric parameters are useful for weather forecasting, climate modeling and the monitoring of climate change. Recently, Global Navigation Satellite System (GNSS) measurements have been applied to atmospheric sounding due to their robust data quality and wide horizontal and vertical coverage. Global Positioning System (GPS) solutions that include tropospheric parameters constitute a reliable set of data to be assimilated into climate models. The objective of this paper is to estimate the neutral atmospheric parameters, namely the Wet Zenith Delay (WZD), Precipitable Water Vapour (PWV) and Total Zenith Delay (TZD), using observational data from 2012 to 2015 at six selected GPS stations in the equatorial regions, more precisely the Ethiopian GPS stations. Based on the historical GPS-derived estimates of PWV, we forecasted the PWV from 2015 to 2030. During data processing and analysis, we applied the GAMIT-GLOBK software packages to estimate the atmospheric parameters. We found that the annual averaged PWV ranges from a minimum of 9.72 mm for the IISC station to a maximum of 50.37 mm for the BJCO station, and the annual averaged WZD from a minimum of 6 cm for IISC to a maximum of 31 cm for the BDMT station. Over the long series of observations (2012 to 2015), we also found trends and cyclic patterns in WZD, PWV and TZD for all stations.
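As a rough illustration of the delay-to-water-vapour step described above, the sketch below converts a zenith wet delay to PWV using the standard dimensionless factor (about 0.15). The refractivity constants and the assumed mean atmospheric temperature are textbook values, not outputs of the study's GAMIT-GLOBK processing.

```python
# Hedged sketch: ZWD -> PWV conversion with standard refractivity constants.
# Tm (weighted mean temperature) is an assumed value, not station-derived.
RHO_W = 1000.0    # density of liquid water, kg/m^3
R_V = 461.5       # specific gas constant of water vapour, J/(kg K)
K2_PRIME = 0.221  # refractivity constant k2', K/Pa
K3 = 3739.0       # refractivity constant k3, K^2/Pa

def pi_factor(tm_kelvin):
    """Dimensionless ZWD-to-PWV conversion factor (roughly 0.15)."""
    return 1.0e6 / (RHO_W * R_V * (K3 / tm_kelvin + K2_PRIME))

def zwd_to_pwv_mm(zwd_m, tm_kelvin=270.0):
    """Convert a zenith wet delay in metres to PWV in millimetres."""
    return pi_factor(tm_kelvin) * zwd_m * 1000.0

# The paper's maximum annual WZD of 31 cm maps to roughly 48 mm of PWV,
# consistent in magnitude with the reported 50.37 mm maximum PWV.
pwv_bdmt = zwd_to_pwv_mm(0.31)
```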

Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour

Procedia PDF Downloads 36
561 Existing International Cooperation Mechanisms and Proposals to Enhance Their Effectiveness for Marine-Based Geoengineering Governance

Authors: Aylin Mohammadalipour Tofighi

Abstract:

Marine-based geoengineering methods, proposed to mitigate climate change, operate primarily through two mechanisms: reducing atmospheric carbon dioxide levels and diminishing solar absorption by the oceans. While these approaches promise beneficial outcomes, they are fraught with environmental, legal, ethical, and political challenges, necessitating robust international governance. This paper underscores the critical role of international cooperation within the governance framework, offering a focused analysis of existing international environmental mechanisms applicable to marine-based geoengineering governance. It evaluates the efficacy and limitations of current international legal structures, including treaties and organizations, in managing marine-based geoengineering, noting significant gaps such as the absence of specific regulations, dedicated international entities, and explicit governance mechanisms such as monitoring. To rectify these problems, the paper advocates for concrete steps to bolster international cooperation. These include the formulation of dedicated marine-based geoengineering guidelines within international agreements, the establishment of specialized supervisory entities, and the promotion of transparent, global consensus-building. These recommendations aim to foster governance that is environmentally sustainable, ethically sound, and politically feasible, thereby enhancing knowledge exchange, spurring innovation, and advancing the development of marine-based geoengineering approaches. This study emphasizes the importance of collaborative approaches in managing the complexities of marine-based geoengineering, contributing significantly to the discourse on international environmental governance in the face of rapid climate and technological changes.

Keywords: climate change, environmental law, international cooperation, international governance, international law, marine-based geoengineering, marine law, regulatory frameworks

Procedia PDF Downloads 42
560 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biometric feature of the human body and has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, introducing a simple, efficient and fast method to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against variations in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located; it is then remapped to a rectangular area of 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid pixels as noise points. In order to accommodate feature localization (variation), the rectangular iris image is partitioned into N overlapped sub-images (blocks); from each block, a set of average directional gradient density values is calculated for use as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
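A minimal numpy sketch of the matching stage just described: per-block low-order norms of gradient components stacked into a feature vector, with template matching under the Euclidean metric. The block width, overlap, and norm orders here are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def gradient_norm_features(block, orders=(1, 2)):
    """Low-order norms of the horizontal and vertical gradient components."""
    gy, gx = np.gradient(block.astype(float))
    return np.array([np.mean(np.abs(g) ** p) ** (1.0 / p)
                     for g in (gx, gy) for p in orders])

def feature_vector(iris_rect, block_w=60):
    """Stack block features from the remapped 360x60 strip (50% overlap)."""
    h, w = iris_rect.shape
    starts = range(0, w - block_w + 1, block_w // 2)
    return np.concatenate([gradient_norm_features(iris_rect[:, s:s + block_w])
                           for s in starts])

def match(query, templates):
    """Index of the nearest stored template under the Euclidean metric."""
    return int(np.argmin([np.linalg.norm(query - t) for t in templates]))
```

With a 360x60 strip and 50% overlap this yields 11 blocks of 4 features each, so matching reduces to a 44-dimensional nearest-neighbour search.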

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 307
559 The Relationship between Renewable Energy, Real Income, Tourism and Air Pollution

Authors: Eyup Dogan

Abstract:

One criticism of the energy-growth-environment literature, to the best of our knowledge, is that only a few studies analyze the influence of tourism on CO₂ emissions, even though the tourism sector is closely related to the environment. Another criticism concerns the selection of methodology: panel estimation techniques that fail to consider both heterogeneity and cross-sectional dependence across countries can cause forecasting errors. To fill these gaps in the literature, this study analyzes the impacts of real GDP, renewable energy and tourism on the level of carbon dioxide (CO₂) emissions for the top 10 most-visited countries around the world. The study focuses on these countries because they have received about half of worldwide tourist arrivals in recent years and rank among the top countries in the Renewable Energy Country Attractiveness Index (RECAI). Using Pesaran's CD test and the average growth rates of the variables for each country, we detect the presence of cross-sectional dependence and heterogeneity. Hence, this study uses second-generation econometric techniques that are robust to these issues: the cross-sectionally augmented Dickey-Fuller (CADF) and cross-sectionally augmented IPS (CIPS) unit root tests, the LM bootstrap cointegration test, and the DOLS and FMOLS estimators. The reported results are therefore accurate and reliable. It is found that renewable energy mitigates pollution, whereas real GDP and tourism contribute to carbon emissions. Thus, regulatory policies are necessary to increase awareness of sustainable tourism. In addition, the use of renewable energy and the adoption of clean technologies in the tourism sector, as well as in producing goods and services, play significant roles in reducing the level of emissions.
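To make the methodological point concrete, here is a sketch of the Pesaran CD statistic used to screen for cross-sectional dependence. The panel below is synthetic (a shared shock across units), whereas the paper computes the statistic on regression residuals for the ten countries.

```python
import numpy as np

def pesaran_cd(resid):
    """Pesaran CD statistic for a (T x N) panel; ~N(0, 1) under independence."""
    t_obs, n = resid.shape
    corr = np.corrcoef(resid, rowvar=False)   # N x N pairwise correlations
    iu = np.triu_indices(n, k=1)              # all pairs i < j
    return np.sqrt(2.0 * t_obs / (n * (n - 1))) * corr[iu].sum()

rng = np.random.default_rng(1)
common = rng.normal(size=(200, 1))                  # shared shock across units
dependent = common + 0.5 * rng.normal(size=(200, 10))
cd_stat = pesaran_cd(dependent)                     # far above the 1.96 cutoff
```

A large CD value, as here, is what motivates switching from first-generation panel tests to the CADF/CIPS family.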

Keywords: air pollution, tourism, renewable energy, income, panel data

Procedia PDF Downloads 163
558 Low-Cost Parking Lot Mapping and Localization for Home Zone Parking Pilot

Authors: Hongbo Zhang, Xinlu Tang, Jiangwei Li, Chi Yan

Abstract:

Home zone parking pilot (HPP) is a fast-growing segment of low-speed autonomous driving applications. It requires the car to automatically cruise around a parking lot and park itself within a range of up to 100 meters inside a recurrent home/office parking lot, which calls for a precise parking lot mapping and localization solution. Although Lidar is ideal for SLAM, car OEMs favor a low-cost, fish-eye camera based visual SLAM approach. Recent approaches have employed segmentation models to extract semantic features and improve mapping accuracy, but these AI models are memory-hungry and computationally expensive, making them difficult to deploy on embedded ADAS systems. To address this issue, we propose a new method that utilizes object detection models to extract robust and accurate parking lot features. The proposed method reduces computational cost while maintaining high accuracy. Once combined with the vehicle's wheel-pulse information, the system can construct maps and locate the vehicle in real time. This article discusses in detail (1) fish-eye based Around View Monitoring (AVM) with transparent chassis images as the inputs, (2) an Object Detection (OD) based feature point extraction algorithm to generate a point cloud, (3) a low-computation parking lot mapping algorithm, and (4) the real-time localization algorithm. Finally, we demonstrate experimental results with an embedded ADAS system installed on a real car in an underground parking lot.

Keywords: ADAS, home zone parking pilot, object detection, visual SLAM

Procedia PDF Downloads 42
557 Underrepresentation of Women in Management Information Systems: Gender Differences in Key Environmental Barriers

Authors: Asli Yagmur Akbulut

Abstract:

Despite a robust and growing job market and lucrative salaries, there is a global shortage of Information Technology (IT) professionals. To make matters worse, women continue to be underrepresented in the IT workforce and among IT degree holders. In today's knowledge-based economy and society, it is extremely important to increase the presence of women in the IT field. In order to do so, it is necessary to reduce entry barriers and attract more women to pursue degrees in various IT fields, including the field of Management Information Systems (MIS). Even though MIS is considered to have a more feminine nature, women still tend to avoid majoring in this field. Unfortunately, there is a lack of research that investigates the specific factors that may deter women from pursuing a degree in MIS. To address this research gap, this study examined a set of key environmental barriers that might prevent women from pursuing an MIS degree and explored whether there were any gender differences between female and male students in terms of these key barriers. Based on a survey of 280 students enrolled in an introductory level MIS course, the study empirically confirmed that there were significant differences between male and female students in terms of the key contextual barriers perceived. Female students demonstrated major concerns about gender discrimination related barriers, whereas male students were more concerned about negative social influences. Both male and female students were equally concerned about not being able to fit in well with other MIS majors. The findings have important implications for MIS programs, as the information gained can be used to design and implement specific intervention strategies to overcome the barriers and attract larger pools of women to the MIS discipline. The paper concludes with a discussion of the findings, implications, and future research directions.

Keywords: gender differences, MIS major, underrepresentation, women in IT

Procedia PDF Downloads 234
556 Cybersecurity Challenges in the Era of Open Banking

Authors: Krish Batra

Abstract:

The advent of open banking has revolutionized the financial services industry by fostering innovation, enhancing customer experience, and promoting competition. However, this paradigm shift towards more open and interconnected banking ecosystems has introduced complex cybersecurity challenges. This research paper delves into the multifaceted cybersecurity landscape of open banking, highlighting the vulnerabilities and threats inherent in sharing financial data across a network of banks and third-party providers. Through a detailed analysis of recent data breaches, phishing attacks, and other cyber incidents, the paper assesses the current state of cybersecurity within the open banking framework. It examines the effectiveness of existing security measures, such as encryption, API security protocols, and authentication mechanisms, in protecting sensitive financial information. Furthermore, the paper explores the regulatory response to these challenges, including the implementation of standards such as PSD2 in Europe and similar initiatives globally. By identifying gaps in current cybersecurity practices, the research aims to propose a set of robust, forward-looking strategies that can enhance the security and resilience of open banking systems. This includes recommendations for banks, third-party providers, regulators, and consumers on how to mitigate risks and ensure a secure open banking environment. The ultimate goal is to provide stakeholders with a comprehensive understanding of the cybersecurity implications of open banking and to outline actionable steps for safeguarding the financial ecosystem in an increasingly interconnected world.

Keywords: open banking, financial services industry, cybersecurity challenges, data breaches, phishing attacks, encryption, API security protocols, authentication mechanisms, regulatory response, PSD2, cybersecurity practices

Procedia PDF Downloads 29
555 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning

Authors: Nicholas V. Scott, Jack McCarthy

Abstract:

Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light-scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction: by allowing only certain modes of the electromagnetic field to be captured, they provide strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results, based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery, suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.

Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization

Procedia PDF Downloads 115
554 Optimizing Data Integration and Management Strategies for Upstream Oil and Gas Operations

Authors: Deepak Singh, Rail Kuliev

Abstract:

This paper highlights the critical importance of optimizing data integration and management strategies in the upstream oil and gas industry. With its complex and dynamic nature generating vast volumes of data, the industry depends on efficient data integration and management for informed decision-making, cost reduction, and maximizing operational performance. Challenges such as data silos, heterogeneity, real-time data management, and data quality issues are addressed, prompting the proposal of several strategies. These strategies include implementing a centralized data repository, adopting industry-wide data standards, employing master data management (MDM), utilizing real-time data integration technologies, and ensuring data quality assurance. Training and developing the workforce by "reskilling and upskilling" employees and establishing robust data management training programs play an essential and integral part in this strategy. The article also emphasizes the significance of data governance and best practices, as well as the role of technological advancements such as big data analytics, cloud computing, the Internet of Things (IoT), artificial intelligence (AI) and machine learning (ML). To illustrate the practicality of these strategies, real-world case studies are presented, showcasing successful implementations that improve operational efficiency and decision-making. By embracing the proposed optimization strategies, leveraging technological advancements, and adhering to best practices, upstream oil and gas companies can harness the full potential of data-driven decision-making, ultimately achieving increased profitability and a competitive edge in the ever-evolving industry.

Keywords: master data management, IoT, AI&ML, cloud computing, data optimization

Procedia PDF Downloads 42
553 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines

Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso

Abstract:

Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such utilization, with applications in disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, applying the theoretical concepts of data analysis to a machine learning algorithm to minimize the constraints of feature extraction. Since separating one class from another requires distinct regions in a multi-dimensional feature space, non-trivial distribution-fitting computations were implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier results. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. The methodology offers several advantages in terms of simplicity, applicability, and process transferability. The algorithm was tested at random locations across Misamis Oriental province in the Philippines, demonstrating robust performance with overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital input for decision makers, urban planners and even the commercial sector in various assessment processes.
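The hyperplane-learning step can be sketched with a numpy-only linear SVM trained by Pegasos-style sub-gradient descent. The two object features and the two classes (roof vs. road) below are synthetic stand-ins for the paper's hybrid LiDAR/optical features, so this is an illustration of the technique, not the study's pipeline.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient descent; y in {-1, +1}, bias column appended."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Xb)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (Xb[i] @ w)
            w = (1 - eta * lam) * w        # regularization shrink
            if margin < 1:                 # hinge-loss sub-gradient step
                w = w + eta * y[i] * Xb[i]
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Synthetic segmented objects: columns ~ [height above ground (m), elongation]
rng = np.random.default_rng(7)
roofs = rng.normal([5.0, 0.4], 0.6, size=(60, 2))
roads = rng.normal([0.3, 0.9], 0.6, size=(60, 2))
X = np.vstack([roofs, roads])
y = np.array([1] * 60 + [-1] * 60)
w = train_linear_svm(X, y)
accuracy = float(np.mean(predict(X, w) == y))
```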

Keywords: feature extraction, machine learning, OBIA, remote sensing

Procedia PDF Downloads 338
552 In Situ Volume Imaging of Cleared Mice Seminiferous Tubules Opens New Window to Study Spermatogenic Process in 3D

Authors: Lukas Ded

Abstract:

Studying tissue structure and histogenesis in their natural 3D context is a challenging but highly beneficial process. Contrary to the classical approach of physical tissue sectioning and subsequent imaging, it enables studying the relationships of individual cellular and histological structures in their native context. Recent developments in tissue clearing approaches and microscopic volume imaging/data processing enable the application of these methods in the areas of developmental and reproductive biology as well. Here, using the CLARITY tissue procedure and 3D confocal volume imaging, we optimized a protocol for clearing, staining and imaging mouse seminiferous tubules isolated from the testes without a cardiac perfusion procedure. Our approach enables high-magnification, fine-resolution axial imaging of the whole diameter of the seminiferous tubules, with practically unlimited lateral imaging length. Hence, large continuous pieces of a seminiferous tubule can be scanned and digitally reconstructed to study the seminiferous stages of a single tubule using nuclear dyes. Furthermore, antibodies and various molecular dyes can be applied for molecular labeling of individual cellular and subcellular structures, and the resulting 3D images can greatly increase our understanding of the spatiotemporal aspects of seminiferous tubule development and sperm ultrastructure formation. Finally, our newly developed algorithms for 3D data processing enable massively parallel processing of a large number of individual cell and tissue fluorescent signatures and the building of robust spermatogenic models under physiological and pathological conditions.

Keywords: CLARITY, spermatogenesis, testis, tissue clearing, volume imaging

Procedia PDF Downloads 112
551 Explaining the Role of Iran Health System in Polypharmacy among the Elderly

Authors: Mohsen Shati, Seyede Salehe Mortazavi, Seyed Kazem Malakouti, Hamidreza Khanke, Fazlollah Ahmadi

Abstract:

Taking unnecessary or excessive medication, or using drugs with no indication (polypharmacy), by people of all ages, and especially the elderly, is associated with increased adverse drug reactions (ADR), medical errors, hospitalization and escalating costs. It may be facilitated or impeded by the healthcare system. In this study, we describe the role of the health system in the practice of polypharmacy among the Iranian elderly. In this inductive qualitative content analysis using the Graneheim and Lundman method, purposeful sample selection was carried out until saturation. Participants were selected from doctors, pharmacists, policy-makers and the elderly; a total of 25 persons (9 men and 16 women) participated in the study. Data analysis, after incorporating codes with similar characteristics, revealed 14 subcategories and six main categories: the referral system, physicians' accessibility, health data management, the drug market, law enforcement, and social protection. Some conditions of the healthcare system have given rise to polypharmacy in the elderly. In the absence of a comprehensive specialty and subspecialty referral system, patients may go to any physician's office and may well be confused by numerous doctors' prescriptions. The lack of electronic patient records, failure to comply with laws, and the lack of robust enforcement of existing laws and of close surveillance are among the contributing factors. Inadequate insurance and supportive services are also evident. Age-specific care provision has not yet been institutionalized, while the inadequate specialist workforce plays a major role. Thus, one may not ignore the health system as a contributing factor when designing effective interventions to fix the problem.

Keywords: elderly, polypharmacy, health system, qualitative study

Procedia PDF Downloads 133
550 Application of Artificial Intelligence to Schedule Operability of Waterfront Facilities in Macro Tide Dominated Wide Estuarine Harbour

Authors: A. Basu, A. A. Purohit, M. M. Vaidya, M. D. Kudale

Abstract:

Mumbai has traditionally been the epicenter of India's trade and commerce, and the existing major ports situated in the Thane estuary, Mumbai Port and Jawaharlal Nehru Port (JN), are also developing their waterfront facilities. Various developments in this region over the past decades have changed the tidal flux entering and leaving the estuary. The intake at Pir-Pau faces a shortage of water due to the advancement of the shoreline, while the jetty near Ulwe faces ship-scheduling problems due to shallower depths between JN Port and Ulwe Bunder. Solving these problems requires information about tide levels over a long duration from field measurements. However, field measurement is a tedious and costly affair; instead, artificial intelligence was applied to predict water levels, training networks on measured tide data for one lunar tidal cycle. Two-layer feed-forward Artificial Neural Networks (ANN) with the back-propagation training algorithms Gradient Descent (GD) and Levenberg-Marquardt (LM) were used to predict yearly tide levels at the waterfront structures at Ulwe Bunder and Pir-Pau. The tide data collected at Apollo Bunder, Ulwe, and Vashi over one lunar tidal cycle (2013) were used to train, validate and test the neural networks. These trained networks, having high correlation coefficients (R = 0.998), were used to predict the tide at Ulwe and Vashi for verification against the measured tide for the years 2000 and 2013. The results indicate that the tide levels predicted by the ANN give reasonably accurate estimates of the tide. Hence, the trained network was used to predict the yearly tide data (2015) for Ulwe. Subsequently, the yearly tide data (2015) at Pir-Pau were predicted using a neural network trained on the measured tide data (2000) of Apollo and Pir-Pau.
The analysis of the measured data and the study reveal that: the measured tidal data at Pir-Pau, Vashi and Ulwe indicate a maximum tide amplification of about 10-20 cm with a phase lag of 10-20 minutes with reference to the tide at Apollo Bunder (Mumbai); the LM training algorithm is faster than GD, and the performance of the network increases with the number of neurons in the hidden layer; and the tide levels predicted by the ANN at Pir-Pau and Ulwe provide valuable information about the occurrence of high and low water levels for planning the pumping operation at Pir-Pau and improving the ship schedule at Ulwe.
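The network described above can be sketched in a few lines of numpy: one hidden tanh layer trained by plain gradient-descent back-propagation, i.e. the GD variant named in the abstract (Levenberg-Marquardt needs a Jacobian-based update and is omitted here). The "tide" is a synthetic semidiurnal-style signal standing in for the measured Apollo Bunder/Ulwe/Vashi series.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 4 * np.pi, 400)
tide = 2.5 * np.sin(t) + 0.5 * np.sin(2 * t)        # synthetic tide, metres
X = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
y = ((tide - tide.mean()) / tide.std())[:, None]    # standardize for stable training

# two-layer feed-forward net: tanh hidden layer + linear output
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                        # forward pass
    err = (h @ W2 + b2) - y                         # output error
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)     # back-propagate...
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1                  # ...and descend
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(X @ W1 + b1) @ W2 + b2
corr = float(np.corrcoef(pred.ravel(), y.ravel())[0, 1])  # cf. the paper's R = 0.998
```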

Keywords: artificial neural network, back-propagation, tide data, training algorithm

Procedia PDF Downloads 453
549 RNAseq Reveals Hypervirulence-Specific Host Responses to M. tuberculosis Infection

Authors: Gina Leisching, Ray-Dean Pietersen, Carel Van Heerden, Paul Van Helden, Ian Wiid, Bienyameen Baker

Abstract:

The distinguishing factors that characterize the host response to infection with virulent Mycobacterium tuberculosis (M.tb) remain largely confounding. We present an infection study with two genetically closely related M.tb strains that have vastly different pathogenic characteristics. The early host response to infection with these detergent-free cultured strains was analyzed through RNAseq in an attempt to uncover the subtleties that may ultimately contribute to the virulent phenotype. Murine bone marrow-derived macrophages (BMDMs) were infected with either a hyper- (R5527) or hypovirulent (R1507) Beijing M. tuberculosis clinical isolate. RNAseq revealed 69 differentially expressed host genes in BMDMs when comparing these two transcriptomes. Pathway analysis revealed activation of the stress-induced and growth-inhibitory Gadd45 signaling pathway in hypervirulent-infected BMDMs. Upstream regulators of interferon activation such as IRF3 and IRF7 were predicted to be upregulated in hypovirulent-infected BMDMs. Additional analysis of the host immune response through ELISA and qPCR included the use of human THP-1 macrophages, where a robust proinflammatory response was observed after infection with the hypervirulent strain. RNAseq revealed two early-response genes (IER3 and SAA3) and two host-defence genes (OASL1 and SLPI) that were significantly upregulated by the hypervirulent strain. The roles of these genes under M.tb infection conditions are largely unknown, but here we validate their presence using qPCR and Western blot. Further analysis of their biological role under infection with virulent M.tb is required.

Keywords: host-response, Mycobacterium tuberculosis, RNAseq, virulence

Procedia PDF Downloads 193
548 Non Enzymatic Electrochemical Sensing of Glucose Using Manganese Doped Nickel Oxide Nanoparticles Decorated Carbon Nanotubes

Authors: Anju Joshi, C. N. Tharamani

Abstract:

Diabetes is one of the leading causes of death at present and remains an important concern, as the prevalence of the disease is increasing at an alarming rate. It is therefore crucial to measure glucose levels accurately in order to develop efficient therapeutics for diabetes. Thanks to the availability of convenient and compact self-testing, continuous monitoring of glucose is feasible nowadays. Enzyme-based electrochemical sensing of glucose is quite popular because of its high selectivity, but it suffers from drawbacks such as complicated purification and immobilization procedures, denaturation, high cost, and low sensitivity due to indirect electron transfer. Hence, designing a robust enzyme-free platform using transition metal oxides is crucial for the efficient and sensitive determination of glucose. In the present work, manganese-doped nickel oxide nanoparticles (Mn-NiO) were synthesized onto the surface of multiwalled carbon nanotubes using a simple microwave-assisted approach for non-enzymatic electrochemical sensing of glucose. The morphology and structure of the synthesized nanostructures were characterized using scanning electron microscopy (SEM) and X-ray diffraction (XRD). We demonstrate that the synthesized nanostructures show enormous potential for the electrocatalytic oxidation of glucose with high sensitivity and selectivity. Cyclic voltammetry and square wave voltammetry studies suggest superior sensitivity and selectivity of the Mn-NiO decorated carbon nanotubes towards the non-enzymatic determination of glucose. A linear response between peak current and glucose concentration was found in the range of 0.01 μM-10000 μM, which suggests the potential efficacy of Mn-NiO decorated carbon nanotubes for the sensitive determination of glucose.
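The reported linear current-concentration response implies a straight-line calibration; the sketch below fits and inverts one such line over the stated 0.01-10000 μM range. The sensitivity value and the simulated currents are hypothetical stand-ins, not measured sensor data.

```python
import numpy as np

conc_uM = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0, 10000.0])
rng = np.random.default_rng(5)
SENSITIVITY = 0.042                    # hypothetical sensitivity, uA per uM
current_uA = SENSITIVITY * conc_uM + rng.normal(0, 0.5, conc_uM.size)

slope, intercept = np.polyfit(conc_uM, current_uA, 1)   # calibration line

def current_to_conc(i_uA):
    """Invert the calibration line to read out an unknown sample (uM)."""
    return (i_uA - intercept) / slope
```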

Keywords: diabetes, glucose, Mn-NiO decorated carbon nanotubes, non-enzymatic

Procedia PDF Downloads 208
547 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images

Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek

Abstract:

Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging: it shares common characteristics with other ultrasound modalities while additionally offering three orthogonal planes (axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in several applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. Rib detection is in fact considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or across a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation was performed on a total of 22 cases. Across all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on the rib borders are 15.1188 mm and 14.7184 mm, respectively.
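The rib-detection criterion in the third step can be illustrated directly: for a bright tube-like structure, two eigenvalues of the 3D Hessian are strongly negative (across the tube) and one is near zero (along its axis). The volume below is a synthetic Gaussian tube, not ABUS data, and the thresholds are illustrative.

```python
import numpy as np

def hessian_eigenvalues(vol):
    """Per-voxel eigenvalues (ascending) of the 3D Hessian of a volume."""
    grads = np.gradient(vol)                 # first derivatives along z, y, x
    H = np.empty(vol.shape + (3, 3))
    for a, g in enumerate(grads):
        second = np.gradient(g)              # second derivatives of component a
        for b in range(3):
            H[..., a, b] = second[b]
    return np.linalg.eigvalsh(H)

# Synthetic bright tube along the z-axis with a Gaussian cross-section.
z, y, x = np.mgrid[0:20, 0:20, 0:20]
vol = np.exp(-((y - 10.0) ** 2 + (x - 10.0) ** 2) / 8.0)
ev = hessian_eigenvalues(vol)[10, 10, 10]    # a voxel on the tube axis
# two strongly negative eigenvalues, one near zero: the tube-like signature
tube_like = bool(ev[0] < -0.1 and ev[1] < -0.1 and abs(ev[2]) < 0.01)
```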

Keywords: automated 3D breast ultrasound, eigenvalues of Hessian matrix, nipple detection, rib detection

Procedia PDF Downloads 306
546 Uncertainty and Volatility in Middle East and North Africa Stock Market during the Arab Spring

Authors: Ameen Alshugaa, Abul Mansur Masih

Abstract:

This paper sheds light on the economic impacts of the political uncertainty caused by the civil uprisings that swept the Arab world and have become collectively known as the Arab Spring. Measuring documented effects of political uncertainty on regional stock market indices, we examine the impact of the Arab Spring on the volatility of stock markets in eight countries in the Middle East and North Africa (MENA) region: Egypt, Lebanon, Jordan, the United Arab Emirates, Qatar, Bahrain, Oman, and Kuwait. This analysis also permits testing for financial contagion among equity markets in the MENA region during the Arab Spring. To capture the time-varying and multi-horizon nature of the evidence of volatility and contagion in the eight MENA stock markets, we apply two robust methodologies to consecutive data from November 2008 to March 2014: MGARCH-DCC and continuous wavelet transforms (CWT). Our results indicate two key findings. First, the discrepancies between the volatile stock markets of countries directly impacted by the Arab Spring and those of countries that were not directly impacted indicate that international investors may still enjoy portfolio diversification and investment in MENA markets. Second, the lack of financial contagion during the Arab Spring suggests that there is little evidence of cointegration among MENA markets. Providing a general analysis of the economic situation and the investment climate in the MENA region during and after the Arab Spring, this study bears significant importance for policy makers, local and international investors, and market regulators.
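
A full MGARCH-DCC fit needs a dedicated econometrics package, but the object it models, a time-varying correlation between return series, can be illustrated with a simple rolling-window correlation. The snippet below does this on two synthetic return series sharing a common factor; the series, factor loading, and 60-day window are illustrative assumptions, not the paper's data or model.

```python
import numpy as np

# Two synthetic daily-return series driven by a common regional factor.
rng = np.random.default_rng(0)
n = 500
common = rng.normal(size=n)
r1 = common + rng.normal(scale=0.5, size=n)   # "market A" returns
r2 = common + rng.normal(scale=0.5, size=n)   # "market B" returns

def rolling_corr(a, b, window):
    """Correlation of a and b over a sliding window, the crude analogue of
    the conditional correlation a DCC model would estimate."""
    out = np.empty(len(a) - window + 1)
    for i in range(len(out)):
        out[i] = np.corrcoef(a[i:i + window], b[i:i + window])[0, 1]
    return out

corr = rolling_corr(r1, r2, window=60)
print(corr.mean())
```

A sustained jump in such a correlation series during a crisis window is the kind of pattern a contagion test formalizes.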

Keywords: portfolio diversification, MENA region, stock market indices, MGARCH-DCC, wavelet analysis, CWT

Procedia PDF Downloads 269
545 Mindfulness and Mental Resilience Training for Pilots: Enhancing Cognitive Performance and Stress Management

Authors: Nargiza Nuralieva

Abstract:

The study assesses the influence of mindfulness and mental resilience training on the cognitive performance and stress management of pilots. Employing a meticulous literature search across databases such as Medline and Google Scholar, the study used specific keywords to target a wide array of studies. Inclusion criteria were stringent, focusing on peer-reviewed studies in English that utilized designs such as randomized controlled trials, with a specific interest in interventions related to mindfulness or mental resilience training for pilots and in measured outcomes pertaining to cognitive performance and stress management. The initial literature search identified a pool of 123 articles, with screening by title and abstract excluding 77. The remaining 54 articles underwent a more rigorous full-text screening, leading to the exclusion of 41. Additionally, five studies were selected from the World Health Organization's clinical trials database. A total of 11 articles from meta-analyses were retained for examination, underscoring the study's dedication to a meticulous and robust inclusion process. The interventions varied widely, incorporating mixed approaches as well as Cognitive Behavioral Therapy (CBT)-based and mindfulness-based techniques. The analysis uncovered positive effects across these interventions: mixed interventions demonstrated a standardized mean difference (SMD) of 0.54, CBT-based interventions showed an SMD of 0.29, and mindfulness-based interventions exhibited an SMD of 0.43. Long-term effects at a 6-month follow-up suggested sustained impacts for both mindfulness-based (SMD: 0.63) and CBT-based interventions (SMD: 0.73), albeit with notable heterogeneity.
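
The SMD values reported above are standardized mean differences, i.e. Cohen's d with a pooled standard deviation. As a minimal sketch, the function below computes it from two groups of scores; the sample data are invented, not values from the reviewed trials.

```python
import numpy as np

def smd(treatment, control):
    """Standardized mean difference (Cohen's d with pooled SD)."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    nt, nc = len(t), len(c)
    pooled_sd = np.sqrt(((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1))
                        / (nt + nc - 2))
    return (t.mean() - c.mean()) / pooled_sd

# Illustrative post-intervention scores for two small groups:
print(round(smd([14, 15, 16, 17, 18], [12, 13, 14, 15, 16]), 3))  # → 1.265
```

Meta-analyses would additionally apply a small-sample correction (Hedges' g) and weight studies by inverse variance.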

Keywords: mindfulness, mental resilience, pilots, cognitive performance, stress management

Procedia PDF Downloads 29
544 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, there are few missile aerodynamic parameter identification studies in the literature, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is greater than that of fixed-wing aircraft. In addition to these challenges, identifying aerodynamic parameters at high wind angles with classical estimation techniques brings another difficulty, because most estimation techniques employ polynomials or splines to model the behavior of the aerodynamics. For missiles with a large variation of aerodynamic parameters with respect to the flight variables, the order of such a model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, the parameters will be estimated from the flight variables and measurements. Finally, the prediction accuracy will be investigated.
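
The core idea, a neural network as a nonlinear function approximator for an aerodynamic coefficient, can be sketched as below. The "pitching-moment" surface Cm(alpha, Mach) is invented for illustration, and a one-hidden-layer numpy MLP trained by full-batch gradient descent stands in for whatever architecture the study adopts.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = rng.uniform(-0.3, 0.3, size=(400, 1))   # angle of attack, rad (assumed)
mach = rng.uniform(1.2, 3.0, size=(400, 1))     # supersonic Mach range (assumed)
y = -2.0 * alpha + 0.3 * alpha ** 3 * mach      # hypothetical coefficient surface
X = np.hstack([alpha, mach])
X = (X - X.mean(axis=0)) / X.std(axis=0)        # normalize inputs

H = 16                                          # hidden units
W1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)
lr = 0.02

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

loss0 = np.mean((forward(X)[1] - y) ** 2)       # loss before training
for _ in range(4000):
    h, pred = forward(X)
    g = 2 * (pred - y) / len(y)                 # dL/dpred for mean squared error
    gh = (g @ W2.T) * (1 - h ** 2)              # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
loss = np.mean((forward(X)[1] - y) ** 2)
print(loss0, loss)
```

Unlike a polynomial or spline fit, the network's capacity grows by adding hidden units rather than raising the model order, which is the advantage the abstract points to.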

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 250
543 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates to the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or specific absorption rate (SAR) reasons. It is therefore important to give MRI technologists proper guidelines so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for quality control (QC) in order to guarantee that the primary objectives of MRI are met. Visual evaluation of quality depends on the operator/reviewer and may vary among operators, as well as for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To address this problem, we propose a robust, open-source, automated MRI image quality control tool. We designed and developed an automatic analysis tool measuring MRI IQ metrics such as signal-to-noise ratio (SNR), SNR uniformity (SNRU), visual information fidelity (VIF), feature similarity (FSIM), gray-level co-occurrence matrix (GLCM) features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution, and it provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
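
Two of the simpler metrics in that list can be sketched directly from ROI statistics on a phantom slice. The synthetic image, ROI positions, and the NEMA-style percent-uniformity formula below are illustrative choices, not necessarily the tool's exact definitions.

```python
import numpy as np

# Synthetic phantom slice: uniform signal at 100 with Gaussian noise, plus a
# background (air) corner used as the noise ROI.
rng = np.random.default_rng(2)
img = np.full((128, 128), 100.0)
img += rng.normal(scale=2.0, size=img.shape)
img[0:20, 0:20] = rng.normal(scale=2.0, size=(20, 20))   # background ROI

signal_roi = img[50:80, 50:80]
noise_roi = img[0:20, 0:20]

# SNR as mean signal over background noise standard deviation.
snr = signal_roi.mean() / noise_roi.std()
# Percent integral uniformity (NEMA-style) over the signal ROI.
uniformity = 100 * (1 - (signal_roi.max() - signal_roi.min())
                    / (signal_roi.max() + signal_roi.min()))
print(round(snr, 1), round(uniformity, 1))
```

A real QC tool would segment the phantom automatically and filter the ROI before taking extrema, since single hot pixels dominate a max/min-based uniformity measure.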

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 135
542 Drop Impact Study on Flexible Superhydrophobic Surface Containing Micro-Nano Hierarchical Structures

Authors: Abinash Tripathy, Girish Muralidharan, Amitava Pramanik, Prosenjit Sen

Abstract:

Superhydrophobic surfaces are abundant in nature. Several surfaces, such as the wings of the butterfly, the legs of the water strider, the feet of the gecko, and the lotus leaf, show extreme water-repellent behaviour. Self-cleaning, stain-free fabrics, spill-resistant protective wear, and drag reduction in microfluidic devices are a few applications of superhydrophobic surfaces. In order to design robust superhydrophobic surfaces, it is important to understand the interaction of water with superhydrophobic surface textures. In this work, we report a simple coating method for creating a large-scale flexible superhydrophobic paper surface. The surface consists of multiple layers of silanized zirconia microparticles decorated with zirconia nanoparticles. Water contact angles as high as 159 ± 1° and contact angle hysteresis of less than 8° were observed. Drop impact studies on the superhydrophobic paper surface were carried out by impinging water droplets and capturing their dynamics through high-speed imaging. During drop impact, the Weber number was varied from 20 to 80 by altering the impact velocity of the drop, and parameters such as contact time and normalized spread diameter were obtained. In contrast to earlier literature reports, we observed the contact time to depend on the impact velocity on the superhydrophobic surface. The total contact time was split into two components, spread time and recoil time. The recoil time was found to depend on the impact velocity, while the spread time on the surface did not show much variation with it. Further, the normalized spreading parameter was found to increase with impact velocity.
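
The Weber number used above to characterize the impact regime compares inertia to surface tension: We = ρ v² D / σ. The helper below evaluates it for a millimetre-scale water drop; the property values are textbook numbers for water, not the paper's measurements.

```python
def weber(rho, v, diameter, sigma):
    """Weber number for a drop of density rho (kg/m^3), impact speed v (m/s),
    diameter (m), and surface tension sigma (N/m)."""
    return rho * v ** 2 * diameter / sigma

# A 2 mm water drop (rho ~ 1000 kg/m^3, sigma ~ 0.072 N/m) impacting at 1.2 m/s:
print(weber(rho=1000.0, v=1.2, diameter=2e-3, sigma=0.072))  # ≈ 40
```

Sweeping v from about 0.85 to 1.7 m/s for this drop spans the We = 20 to 80 range studied in the abstract.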

Keywords: contact angle, contact angle hysteresis, contact time, superhydrophobic

Procedia PDF Downloads 396
541 Associations and Interactions of Delivery Mode and Antibiotic Exposure with Infant Cortisol Level: A Correlational Study

Authors: Samarpreet Singh, Gerald Giesbrecht

Abstract:

Both C-section delivery and antibiotic exposure are linked to gut microbiota imbalance in infants, and such disturbance is in turn associated with hypothalamic-pituitary-adrenal (HPA) axis function. However, the literature contains contradictory evidence on the association between C-sections and the HPA axis. This study therefore tests whether mode of delivery and antibiotic exposure are associated with HPA-axis function, and whether exposure to both interacts with it. It was hypothesized that associations and interactions would be observed. Secondary data analysis was used for this correlational study. Data on mode of delivery and antibiotic exposure were documented from hospital records or self-report questionnaires. Cortisol levels (area under the curve with respect to increase (AUCi) and area under the curve with respect to ground (AUCg)) were based on saliva collected from three-month-old infants during a lab visit, after a blood draw. One-way and between-subject ANOVA analyses were run on the data. No significant association between delivery mode and infant cortisol level (AUCi or AUCg) was found, p > .05. The infant's AUCg was significantly higher only if there was antibiotic exposure at delivery (p = .001) or if the mother was exposed during pregnancy (p < .05). At three months, infants born by C-section and exposed to antibiotics had higher AUCi than those born vaginally, p < .02. These results imply that antibiotic exposure before three months of age is associated with an infant's stress response, and the association may be stronger when antibiotic exposure follows a C-section birth. However, more robust, causal evidence is needed in future studies, given the statistically weak sample sizes of some groups. Nevertheless, the results of this study still highlight the unintended consequences of antibiotic exposure during delivery and pregnancy.
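
The two cortisol summaries, AUCg and AUCi, are conventionally computed with trapezoid formulas (following Pruessner et al.): AUCg is the raw area under the sampled curve, and AUCi subtracts the rectangle defined by the baseline value. The sampling times and cortisol values below are invented for illustration.

```python
import numpy as np

def cortisol_auc(times_min, values):
    """Return (AUCg, AUCi) for cortisol samples at the given times (minutes)."""
    t = np.asarray(times_min, float)
    v = np.asarray(values, float)
    auc_g = np.sum((v[1:] + v[:-1]) / 2 * np.diff(t))  # trapezoid rule
    auc_i = auc_g - v[0] * (t[-1] - t[0])              # area above baseline
    return auc_g, auc_i

# Illustrative samples at 0, 15, and 30 minutes (nmol/L):
auc_g, auc_i = cortisol_auc([0, 15, 30], [2.0, 4.0, 6.0])
print(auc_g, auc_i)  # 120.0 60.0
```

AUCg thus captures overall cortisol output, while AUCi isolates the reactivity component, which is why the two can show different group effects.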

Keywords: HPA-axis, antibiotics, c-section, gut-microbiota, development, stress

Procedia PDF Downloads 46
540 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. The artificial neural network (ANN), a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern, due to the numerous parameters collected from the treatment process, whose number keeps increasing with the development of electronic sensors. Various studies have used different clustering methods to classify the most related and effective input variables. This issue has nevertheless been overlooked in selecting dominant input variables among wastewater treatment parameters, a selection that could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant, with biochemical oxygen demand (BOD) as the target water quality parameter. Model A used principal component analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. When the results of model B were compared with those of model A, the optimal ANN structure showed up to a 15% increment in determination coefficient (DC). Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
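
The PCA step used for input selection in model A can be sketched as: centre the candidate input matrix, take its SVD, and keep the leading components by explained variance. The 5-variable synthetic dataset below, driven by two hidden factors, stands in for the plant's sensor series.

```python
import numpy as np

# Synthetic candidate inputs: 5 observed variables driven by 2 latent factors.
rng = np.random.default_rng(3)
latent = rng.normal(size=(200, 2))             # two true driving factors
mix = rng.normal(size=(2, 5))                  # how factors appear in sensors
X = latent @ mix + 0.05 * rng.normal(size=(200, 5))

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)            # variance ratio per component
scores = Xc @ Vt.T                             # principal-component scores
print(np.round(explained, 3))
```

Because two factors generate the data, the first two components should carry nearly all the variance; the MI-based alternative would instead rank the raw variables by their (possibly nonlinear) dependence on BOD.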

Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 104
539 The Planner's Pentangle: A Proposal for a 21st-Century Model of Planning for Sustainable Development

Authors: Sonia Hirt

Abstract:

The Planner's Triangle, an oft-cited model that visually defined planning as the search for a sustainability that balances the three basic priorities of equity, economy, and environment, has influenced planning theory and practice for a quarter of a century. In this essay, we argue that the triangle requires updating and expansion. Even if planners keep sustainability as their core aspiration at the center of this imaginary geometry, the triangle's vertices have to be rethought, and planners should move on to a 21st-century concept. We propose a Planner's Pentangle with five basic priorities as the vertices of a new conceptual polygon: Wellbeing, Equity, Economy, Environment, and Esthetics (WE⁴). The WE⁴ concept more accurately and fully represents planning's history. This is especially true in the United States, where public art and public health played pivotal roles in the establishment of the profession in the late 19th and early 20th centuries. It also more accurately represents planning's future, as both health/wellness and aesthetic concerns are becoming increasingly important in the 21st century. The pentangle can become an effective tool for understanding and visualizing planning's history and present. Planning has a long history of representing urban presents and futures as conceptual models in visual form, and such models can play an important role in understanding and shaping practice. For over two decades, one such model, the Planner's Triangle, stood apart as the expression of planning's pursuit of sustainability. But if the model is outdated and insufficiently robust, it can diminish our understanding of planning practice, as well as the appreciation of the profession among non-planners. Thus, we argue for a new conceptual model of what planners do.

Keywords: sustainable development, planning for sustainable development, planner's triangle, planner's pentangle, planning and health, planning and art, planning history

Procedia PDF Downloads 115
538 Innovation of a New Plant Tissue Culture Medium for Large Scale Plantlet Production in Potato (Solanum tuberosum L.)

Authors: Ekramul Hoque, Zinat Ara Eakut Zarin, Ershad Ali

Abstract:

The growth and development of explants are governed by the nutrient medium. Ammonium nitrate (NH4NO3) is a major salt of stock solution 1 in the preparation of tissue culture media. However, it has serious drawbacks for society: it can be used in the preparation of explosives and other destructive activities, and it is therefore completely banned in our country. A new chemical was identified as a substitute for ammonium nitrate, and the concentrations of the other major- and minor-salt ingredients were modified from the MS medium, so that the formulation of the new medium differs entirely from the MS nutrient composition. The widely used MS medium composition served as the first check treatment, and MS powder (Duchefa Biochemie, The Netherlands) as the second. The experiments were carried out at the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh, with two potato varieties, Diamant and Asterix, as experimental materials. The regeneration potential of potato on the new medium was the best compared with the two check treatments: node number, leaf number, shoot length, and root length were all highest on the new medium, and the plantlets were healthier, more robust, and stronger than those regenerated from the check treatments. Three subsequent subcultures were made in the new medium to observe the growth pattern of the plantlets, and the new medium again showed the best performance for all parameters studied. The regenerated plantlets produced good-quality minitubers under field conditions. Hence, it is concluded that a new plant tissue culture medium has been developed at the Department of Biotechnology, Sher-e-Bangla Agricultural University, Dhaka, Bangladesh, under the leadership of Professor Dr. Md. Ekramul Hoque.

Keywords: new medium, potato, regeneration, ammonium nitrate

Procedia PDF Downloads 60
537 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created: the Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a convolutional neural network (CNN) and a visual attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for Canny edge features was adopted, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%, and performance was better with Canny edge features applied than with a model that solely used the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
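
The "extra edge channel" idea can be sketched without an image-processing dependency: build a gradient-magnitude edge map (a crude stand-in for Canny, which adds Gaussian smoothing, non-maximum suppression, and hysteresis) and stack it with the grayscale image as a two-channel network input. The synthetic glyph and threshold below are illustrative.

```python
import numpy as np

# A filled square standing in for a printed glyph on a 32x32 grayscale image.
img = np.zeros((32, 32), dtype=float)
img[8:24, 8:24] = 1.0

# Gradient-magnitude edge map via finite differences (crude Canny stand-in).
gy, gx = np.gradient(img)
edges = np.sqrt(gx ** 2 + gy ** 2)
edges = (edges > 0.25).astype(float)   # binarization threshold is an assumption

# Stack grayscale + edge map into an (H, W, 2) network input.
net_input = np.stack([img, edges], axis=-1)
print(net_input.shape)
```

The network then sees stroke boundaries explicitly, which is what makes the edge channel robust to lighting variation in the grayscale channel.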

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 28
536 Physical Modeling of Woodwind Ancient Greek Musical Instruments: The Case of Plagiaulos

Authors: Dimitra Marini, Konstantinos Bakogiannis, Spyros Polychronopoulos, Georgios Kouroupetroglou

Abstract:

Archaeomusicology cannot depend entirely on the study of excavated ancient musical instruments, as their condition is most of the time not ideal (i.e., missing or eroded parts), and because of the concern of damaging the originals during experiments. To overcome these obstacles, researchers build replicas. This technique is still the most popular one, although it is rather expensive and time-consuming. Throughout the last decades, the development of physical modeling techniques has provided tools that enable the study of musical instruments through digitally simulated models. This is not only more cost- and time-efficient but also provides additional flexibility, as the user can easily modify parameters such as geometrical features and materials. This paper thoroughly describes the steps to create a physical model of a woodwind ancient Greek instrument, the Plagiaulos. This instrument can be considered an ancestor of the modern flute due to their common geometry and air-jet excitation mechanism. The Plagiaulos comprises a single resonator with an open end and a number of tone holes; combinations of closed and open tone holes produce the pitch variations. In this work, the effects of all the instrument's components are described by means of physics and then simulated based on digital waveguides. The synthesized sound of the proposed model complies with the theory, highlighting its validity. Further, the synthesized sound of the model simulating the Plagiaulos of Koile (2nd century BCE) was compared with that of its replica built in our laboratory following the scientific methodologies of archaeomusicology. The aforementioned results verify that robust dynamic digital tools can be introduced in the field of computational, experimental archaeomusicology.
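
A digital waveguide in its simplest form is a delay line whose length sets the pitch, closed by a loss/lowpass filter. The Karplus-Strong plucked-string loop below illustrates exactly that core; it is not the authors' flute model (an air-jet instrument additionally needs a jet nonlinearity driving the bore), and all parameters are illustrative.

```python
import numpy as np

fs = 22050                       # sample rate, Hz (assumed)
f0 = 220.0                       # target pitch, Hz (assumed)
N = int(fs / f0)                 # delay-line length sets the pitch
rng = np.random.default_rng(4)
delay = rng.uniform(-1, 1, N)    # noise burst as the excitation

out = np.empty(fs)               # one second of sound
for i in range(len(out)):
    out[i] = delay[0]
    # Loop filter: averaging lowpass plus a 0.996 loss factor per pass.
    new = 0.996 * 0.5 * (delay[0] + delay[1])
    delay = np.roll(delay, -1)
    delay[-1] = new
print(len(out), float(np.abs(out).max()))
```

Modeling tone holes would add scattering junctions along the delay line, which is how combinations of open and closed holes shift the effective bore length and hence the pitch.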

Keywords: archaeomusicology, digital waveguides, musical acoustics, physical modeling

Procedia PDF Downloads 78
535 Elements of Sector Benchmarking in Physical Education Curriculum: An Indian Perspective

Authors: Kalpana Sharma, Jyoti Mann

Abstract:

The study was designed as an institutional analysis to develop a clear understanding of the processes involved in the functioning of physical education teacher education programs in India and of the determinants influencing them. The findings can further inform the selection of parameters for sector benchmarking of physical education teacher training institutions across India. 165 stakeholders, including students, teachers, parents, and administrators, were surveyed from seven identified institutions and universities in different states of India, on the basis of seven broad parameters associated with the postgraduate physical education program in India. A 52-item Physical Education Program Assessment Tool (PEPAT) was designed and administered to the stakeholders selected for the survey; an item analysis of its contents was concluded through a review process involving selected experts working in higher education with experience in physical education teacher training. The data were collected from the stakeholders of the selected institutions through the PEPAT. The hypothesis that the PE teacher education program is independent of physical education institutions was significant. The study points to the need for a robust admission process emphasizing the identification and selection of potential candidates and quality control of intake, using a scientific process developed according to Indian education policies and academic structure. The results revealed that the universities do not follow similar functional and delivery processes for the physical education teacher training program.
The study reflects the need for physical education universities and institutions to identify best practices for the delivery of physical education programs through strategic management studies on the identified parameters, before establishing strict standards and norms for achieving excellence in physical education in India.

Keywords: assessment, benchmarking, curriculum, physical education, teacher education

Procedia PDF Downloads 524
534 Empathy and Yoga Philosophy: Both Eastern and Western Concepts

Authors: Jacqueline Jasmine Kumar

Abstract:

This paper seeks to challenge the predominant Western-centric paradigm concerning empathy by exploring its presence within both Western and Eastern philosophical traditions. The primary focus of this inquiry is the Indian yogic tradition, encompassing the four yogas: bhakti (love/devotion), karma (action), jnāna (knowledge), and rāja (psychic control). Through this examination, it is demonstrated that empathy does not originate exclusively from Western philosophical thought. Rather than superimposing the Western conceptualization of empathy onto the tenets of Indian philosophy, this study endeavours to unearth a distinct array of ideas and concepts within the four yogas that significantly contribute to our comprehension of empathy as a universally relevant phenomenon. To achieve this objective, an innovative approach is adopted, delving into various facets of empathy, including its propositional, affective/intuitive, perspective-taking, and actionable dimensions. This approach intentionally deviates from conventional Western frameworks, shifting the emphasis towards lived morality as opposed to engagement in abstract theoretical discourse. While it is acknowledged that the explicit term "empathy" may not be overtly articulated within the yogic tradition, a scrupulous examination reveals the underlying substance and significance of the phenomenon. Throughout this comparative analysis, the paper aims to lay a robust foundation for the discourse of empathy within the context of the human experience. By assimilating insights gleaned from the Indian yogic tradition, it contributes to expanding our comprehension of empathy, enabling an exploration of its multifaceted dimensions. Ultimately, this scholarly endeavour facilitates the development of a more comprehensive and inclusive perspective on empathy, transcending cultural boundaries and enriching our collective repository of knowledge.

Keywords: bhakti, yogic, jnana, karma

Procedia PDF Downloads 48
533 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The propensity score matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most commonly used technique; logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has remained an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for greater accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the generalized additive model (GAM) in various linear and non-linear settings and to compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), examining which method leads to greater covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust to the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
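
The PS-estimation step being compared can be sketched as fitting a logistic model for treatment assignment and reading off each unit's predicted probability. The snippet below does this by gradient ascent on the average log-likelihood with simulated data; in model-B-style GAM fitting, smooth covariate terms would replace this linear predictor. All data and coefficients are illustrative.

```python
import numpy as np

# Simulated observational data: treatment assignment depends on two covariates.
rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 2))                       # covariates
true_beta = np.array([0.8, -0.5])                 # assumed assignment model
p_true = 1 / (1 + np.exp(-(X @ true_beta)))
T = (rng.uniform(size=n) < p_true).astype(float)  # treatment indicator

# Logistic regression by gradient ascent on the average log-likelihood.
beta = np.zeros(2)
lr = 0.5
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ beta)))
    beta += lr * X.T @ (T - p) / n                # score-function step
ps = 1 / (1 + np.exp(-(X @ beta)))                # estimated propensity scores
print(np.round(beta, 2))
```

Matching then pairs treated and control units with close `ps` values, and covariate balance in the matched sample is the criterion the study uses to compare this fit against a GAM-based one.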

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 337