Search results for: star network
954 Global Healthcare Village Based on Mobile Cloud Computing
Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar
Abstract:
Cloud computing, the use of hardware and software delivered as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme to make health data available with less response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, namely the country, continent, and global health clouds, to help solve the problem mentioned above. In the continent model, two data centers are created, of which one is local and the other is global. The local data center replies to the requests of residents within the continent, whereas the global one replies to the requests of others. With the methods adopted, relevant medical data are assured to be available to patients, specialists, and emergency staff regardless of location and time. From our intensive simulation experiments, it was observed that the service broker policy optimized for response time yields very good performance in terms of response time reduction. Our results remain comparable to others when the number of virtual machines increases (80-640 virtual machines), with the increase in response time staying within 9%. The results obtained from our simulation experiments show that utilizing MGHV leads to a reduction of health care expenditures and helps solve the problem of unqualified medical staff faced by both developed and developing countries.
Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy
Procedia PDF Downloads 376
953 The Effect of Global Value Chain Participation on Environment
Authors: Piyaphan Changwatchai
Abstract:
The global value chain is important for the current world economy through foreign direct investment. Multinational enterprises' search for efficient locations for each stage of production leads to global production networks and greater global value chain participation by several countries. Global value chain participation affects participating countries in several aspects, including the environment, and the direction of this effect is ambiguous. This research therefore studies the effect of global value chain participation on countries' CO₂ and methane emissions using quantitative analysis of secondary panel data for sixty countries. The analysis distinguishes two types of global value chain participation: forward and backward participation. The results show that, for forward global value chain participation, GDP per capita affects both pollutants in a bell-curve shape, and forward participation negatively affects both CO₂ and methane emissions. For backward global value chain participation, GDP per capita again affects both pollutants in a bell-curve shape, but backward participation negatively affects methane emission only. However, when considering Asian countries, forward global value chain participation positively affects CO₂ emission. This research recommends that countries participating in global value chains promote production with effective environmental management at each stage of the value chain. Example policies include providing incentives to private sectors, including domestic producers and MNEs, for green production technology and efficient environmental management, and engaging in international agreements on green production. Furthermore, governments should regulate each stage of production in the value chain toward green production, especially in Asian countries.
Keywords: CO₂ emission, environment, global value chain participation, methane emission
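The bell-curve relationship with income and the participation effects described above correspond to a quadratic panel specification. The sketch below is a minimal illustration of that kind of specification, assuming a simple pooled OLS with a squared GDP term; the variable names and the tiny synthetic data frame are placeholders, not the study's sixty-country panel.

```python
# Minimal sketch of a quadratic-income + GVC-participation specification (assumed form).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year observations (stand-in data, not the study's panel)
df = pd.DataFrame({
    "co2_pc":  [4.1, 4.3, 2.0, 2.2, 9.5, 9.1],
    "gdp_pc":  [12.0, 12.5, 3.0, 3.2, 40.0, 41.0],    # thousands of USD
    "fwd_gvc": [0.35, 0.36, 0.20, 0.22, 0.55, 0.54],  # forward GVC participation share
})

# Bell-curve income effect via a squared term, plus the participation regressor
model = smf.ols("co2_pc ~ gdp_pc + I(gdp_pc**2) + fwd_gvc", data=df).fit()
print(model.summary())
```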
Procedia PDF Downloads 190
952 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach
Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre
Abstract:
The Energy Transition Act defined by the French State has precise implications for renewable energies, in particular for their remuneration mechanism. Until then, a purchase obligation contract permitted the sale of wind-generated electricity at a fixed rate. Tomorrow, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. This sale on the market requires announcing in advance (about 48 hours before) the production that will be delivered to the network, and thus being able to predict this production in the short term. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables, and the variable to be predicted is the production of a wind farm. We follow two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates wind from weather data and converts it into wind power through power curves. We used machine learning techniques to predict this production; the models tested are random forests, CART + bagging, CART + boosting, and SVM (Support Vector Machine). The application is made on a 22 MW wind farm (11 wind turbines) of the Compagnie du Vent (which became Engie Green France). Our results are very conclusive compared to the literature.
Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast
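As a rough illustration of the two approaches, the sketch below trains a random forest directly on weather predictors (direct approach) and another one on wind speed followed by a power-curve conversion (indirect approach). The synthetic data, the toy power curve, and the feature set are assumptions standing in for the GFS/ECMWF forecasts and the 22 MW farm's records.

```python
# Direct vs indirect wind-power forecasting, sketched on synthetic data (assumptions).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X_weather = rng.uniform(0, 25, size=(n, 2))           # e.g. forecast wind speed from GFS and ECMWF
wind = X_weather.mean(axis=1) + rng.normal(0, 1, n)   # "true" wind at the farm

def power_curve(v, rated_mw=22.0, cut_in=3.0, rated_v=12.0):
    """Toy power curve for a 22 MW farm (illustrative assumption)."""
    p = np.clip((v - cut_in) / (rated_v - cut_in), 0, 1) ** 3 * rated_mw
    p[v < cut_in] = 0.0
    return p

y_power = power_curve(wind)

# Direct approach: weather forecasts -> power
direct = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_weather, y_power)

# Indirect approach: weather forecasts -> wind speed -> power curve
indirect = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_weather, wind)

print(direct.predict(X_weather[:5]))
print(power_curve(indirect.predict(X_weather[:5])))
```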
Procedia PDF Downloads 215
951 Women Academics' Insecure Identity at Work: A Millennials Phenomenon
Authors: Emmanouil Papavasileiou, Nikos Bozionelos, Liza Howe-Walsh, Sarah Turnbull
Abstract:
Purpose: The research focuses on women academics' insecure identity at work and examines its link with generational identity. The aim is to enrich understanding of identities at work as a crucial attribute of managing academics in the context of the proliferation of managerialist controls of audit, accountability, monitoring, and performativity. Methodology: Positivist quantitative methodology was utilized. Data were collected from the Scientific Women's Academic Network (SWAN) Charter. Responses from 155 women academics based in the British Higher Education system were analysed. Findings: Analysis showed a high prevalence of strong imposter feelings among participants, suggesting high insecurity at work among women academics in the United Kingdom. Generational identity was related to imposter feelings; in particular, Millennials scored significantly higher than the other generational groups. Research implications: The study shows that imposter feelings are variously manifested among the prevalent generations of women academics, while generational identity is a significant antecedent of such feelings. Research limitations: Caution should be exercised in generalizing the findings to national cultural contexts beyond the United Kingdom. Practical and social implications: Contrary to popular depictions of Millennials as self-centered, narcissistic, materialistic and demanding, women academics who are members of this generational group appear significantly more insecure than the preceding generations. Value: The study provides insightful understanding of women academics' identity at work as a function of generational identity, and provides a fruitful avenue for further research within and beyond this gender group and profession.
Keywords: academics, generational diversity, imposter feelings, United Kingdom, women, work identity
Procedia PDF Downloads 145
950 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake
Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina
Abstract:
The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past and has remained unexplored by seismic investigations, except for scanty records of earthquakes that occurred in this region. In this study, we used local seismic data of the year 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1st May 2013 and Mw 5.1 on 2nd August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize the error) for source parameters, b-value, and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. It was observed that most of the events are bounded between 32.9° N – 33.3° N latitude and 75.4° E – 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 bars to 71.1 bars, and corner frequency from 0.39 – 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b = 1), indicating that the area is under high stress. The travel-time inversion and waveform inversion methods suggest focal depths up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2nd August (Mw 5.1, 02:32:47 UTC) suggested that the source fault strikes at 295° with a dip of 33° and a rake of 85°. It was found that these events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. Moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
Keywords: b-value, moment tensor, seismotectonics, source parameters
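The source radius and stress drop quoted above are conventionally obtained from the corner frequency and seismic moment through the Brune-model relations sketched below; the shear-wave velocity and the example values are assumptions for illustration, not the study's data.

```python
# Standard Brune-model source-parameter relations (illustrative values assumed).
import math

def brune_source_radius(fc_hz, beta_m_s=3500.0):
    """Source radius r = 2.34 * beta / (2 * pi * fc)  (Brune, 1970)."""
    return 2.34 * beta_m_s / (2.0 * math.pi * fc_hz)

def stress_drop_pa(m0_nm, r_m):
    """Stress drop = 7 * M0 / (16 * r^3) for a circular source."""
    return 7.0 * m0_nm / (16.0 * r_m ** 3)

def moment_magnitude(m0_nm):
    """Mw = (2/3) * (log10(M0) - 9.1), with M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

m0 = 1.0e16          # example seismic moment in N*m (roughly Mw 4.6)
fc = 2.0             # example corner frequency in Hz
r = brune_source_radius(fc)
print(round(r / 1000, 2), "km,",
      round(stress_drop_pa(m0, r) / 1e5, 1), "bar,",
      "Mw", round(moment_magnitude(m0), 1))
```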
Procedia PDF Downloads 312
949 The Implementation of Educational Partnerships for Undergraduate Students at Yogyakarta State University
Authors: Broto Seno
Abstract:
This study aims to describe and examine the implementation of educational partnerships for undergraduate students at Yogyakarta State University (YSU), with a focus on educational partnerships abroad. The study used a descriptive qualitative approach. The study subjects consisted of a vice-rector, two educational partnership staff, four vice-deans, nine undergraduate students, and three foreign students. Data were collected through interviews and document review, the validity of the data sources was tested using triangulation, and data were analyzed using the Miles and Huberman flow model, namely data reduction, data display, and conclusion drawing. The results showed that the implementation of educational partnerships abroad for undergraduate students at YSU meets six of the nine indicators of successful strategic partnerships. The six indicators met are: long-term, strategic, mutual trust, sustainable competitive advantages, mutual benefit for all partners, and separate and positive impact. The indicators not yet achieved are cooperative development, successful, and world class / best practice. These results were obtained from the discussion of the four problem formulations, namely: 1) The implementation and development of educational partnerships abroad has been running well enough, but is not yet maximized. 2) The benefits of implementing educational partnerships abroad include providing learning experiences for students, comparative experience for each faculty, and improving YSU's network of educational partnerships toward becoming a World Class University. 3) The sustainability of educational partnerships abroad is pursued through a development strategy based on improved management of the partnerships. 4) The supporting factors for educational partnerships abroad are the support of YSU, YSU's partners, and society; the inhibiting factor is management that is not running optimally.
Keywords: partnership, education, YSU, institutions and faculties
Procedia PDF Downloads 331
948 Evaluation and Possibilities of Valorization of Ecotourism Potentials in the Mbam and Djerem National Park
Authors: Rinyu Shei Mercy
Abstract:
Protected areas are potential areas for the development of ecotourism because of their biodiversity, landscapes, waterfalls, lakes, caves, salt licks, and the cultural heritage of local or indigenous people. These potentials have not yet been valorized, so this study investigates the evaluation and the possibilities of valorization of ecotourism potentials in the Mbam and Djerem National Park. This was done by employing a combination of field observations, examination, data collection, and evaluation, using a SWOT analysis. The SWOT analysis determines the strengths, weaknesses, opportunities, and threats and provides strategic suggestions for ecological planning. The study establishes an ecotouristic inventory and mapping of the ecotourism potentials of the park and evaluates the degree of valorization of these potentials and the possibilities for their valorization. The study shows that the park has many natural potentials such as rivers, salt licks, waterfalls and rapids, lakes, caves, and rocks. Regarding the degree of valorization of these ecotourism potentials, 50% of the population visit the salt lick of Pkayere because it is a biodiversity hotspot rich in mineral salt that attracts many animals, while the least visited is Lake Miyere, with 1%, because it is sacred. Moreover, the results show that these potentials can be valorized and put into use because of their attractive nature, for instance by creating good roads and bridges, good infrastructural facilities, and a good communication network. The study therefore recommends that MINTOUR, WCS, and tour operators interact sufficiently in order to develop the potential interest in ecotourism, ecocultural tourism, and scientific tourism.
Keywords: ecotourism, national park Mbam and Djerem, valorization of biodiversity, protected areas of Cameroon
Procedia PDF Downloads 135
947 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks
Authors: Emad A. Mohammed
Abstract:
The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on the porosity and permeability of reservoir core samples, and it is assumed that core samples with similar FZI values belong to the same HFU. Thus, after dividing the porosity-permeability data based on the HFU, transformations can be applied in order to estimate permeability from porosity. The conventional practice is to use a power-law transformation with conventional HFU, where the percentage of error is considerably high. In this paper, a neural network technique is employed as a soft-computing transformation method to predict permeability instead of the power-law method, in order to avoid a higher percentage of error. The technique is based on HFU identification, for which the method of Amaefule et al. (1993) is utilized. In this regard, the Kozeny and Carman (K–C) model and the modified K–C model by Hasan and Hossain (2011) are employed, and a comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K–C model yields better results, with a lower percentage of error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power-law method. This study was conducted on a heterogeneous complex carbonate reservoir in Oman; data were collected from seven wells to obtain permeability correlations for the whole field. The findings of this study will help in obtaining a better estimation of permeability in a complex reservoir.
Keywords: permeability, hydraulic flow units, artificial intelligence, correlation
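A minimal sketch of the HFU workflow described above follows: RQI, normalized porosity, and FZI are computed from core porosity and permeability, samples are grouped into flow units, and a small neural network is fitted as the soft-computing transform. The synthetic core data, the three-way FZI split, and the network size are illustrative assumptions.

```python
# FZI-based flow-unit grouping plus a neural-network permeability transform (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
phi = rng.uniform(0.05, 0.30, 200)                # fractional porosity from core plugs
k = 1e4 * phi ** 3 * rng.lognormal(0, 0.5, 200)   # synthetic permeability in mD

rqi = 0.0314 * np.sqrt(k / phi)                   # reservoir quality index
phi_z = phi / (1.0 - phi)                         # normalized porosity
fzi = rqi / phi_z                                 # flow zone indicator

# Crude split into three flow units by FZI terciles (assumed grouping rule)
hfu = np.digitize(np.log(fzi), bins=np.quantile(np.log(fzi), [0.33, 0.66]))

X = np.column_stack([phi, hfu])
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=1)
model.fit(X, np.log10(k))                         # predict log-permeability from porosity + HFU
print("R^2 on training data:", round(model.score(X, np.log10(k)), 3))
```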
Procedia PDF Downloads 135
946 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks
Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha
Abstract:
This article discusses the cost-benefit analysis of millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated for the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was performed for regular cellular topologies, considering the unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths of R ≈ 10 m for the millimetre wavebands, and for the longest distances an optimum of the revenue is observed at R ≈ 550 m for the 5.62 GHz band. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R; it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, reaching a maximum at R approximately equal to 550 m.
Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G
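The two path-loss behaviours contrasted above can be illustrated with the short sketch below, which evaluates a single-slope modified-Friis-style model for a mmWave band and a two-slope model with a break-point distance for the SHF band; the exponents, break-point, and reference distance are assumed values, not the paper's calibrated parameters.

```python
# Single-slope vs two-slope path-loss comparison (all parameters are assumptions).
import numpy as np

C = 3e8  # speed of light, m/s

def friis_single_slope(d_m, f_hz, n=2.1):
    """Free-space loss at 1 m plus a single slope with assumed exponent n."""
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * f_hz / C)
    return fspl_1m + 10 * n * np.log10(np.asarray(d_m, dtype=float))

def two_slope(d_m, f_hz, d_bp=160.0, n1=2.0, n2=4.0):
    """Two-slope UMi LoS-style model with break-point distance d_bp (assumed values)."""
    fspl_1m = 20 * np.log10(4 * np.pi * 1.0 * f_hz / C)
    d = np.asarray(d_m, dtype=float)
    pl = fspl_1m + 10 * n1 * np.log10(np.minimum(d, d_bp))
    return pl + np.where(d > d_bp, 10 * n2 * np.log10(d / d_bp), 0.0)

d = np.array([10.0, 100.0, 550.0])
print("5.62 GHz two-slope PL (dB):", np.round(two_slope(d, 5.62e9), 1))
print("28 GHz modified-Friis PL (dB):", np.round(friis_single_slope(d, 28e9), 1))
```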
Procedia PDF Downloads 140
945 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce lighter and better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices and is rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
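A minimal sketch of a UNet-style segmentation network with batch normalization inserted in each convolution block, in the spirit of the modification described above, is shown below; the depth, filter counts, and input size are assumptions, and the slice-removal and CNN classification stages of the full pipeline are omitted.

```python
# Small UNet-like segmenter with BatchNorm in every convolution block (sketch).
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by batch normalization and ReLU
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return x

def small_unet(input_shape=(256, 256, 1)):
    inp = layers.Input(input_shape)
    c1 = conv_block(inp, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);  p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 64)                                   # bottleneck
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 16)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)      # lung/lesion mask
    return Model(inp, out)

model = small_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```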
Procedia PDF Downloads 129
944 Changing Dynamics of Women Entrepreneurship: A Literature Review of a Decade
Authors: Viral Nagori, Preeti Shroff, Prathana Dodia
Abstract:
The paper presents a study on women entrepreneurship over the last decade in the Indian and global contexts. The research is based primarily on a literature review, and the methodology classifies the reviewed papers based on different parameters of women entrepreneurship. The literature review relies on research papers in journals, articles in periodicals, and books published on women entrepreneurship. The selection criteria included finding the most relevant, recent, and cited studies on women entrepreneurship over the last decade. The study aims to evaluate the issues and challenges faced by women entrepreneurs. The findings suggest that there are several common obstacles that hinder the pathway to becoming a successful woman entrepreneur. The paper describes such common obstacles as the level of education, family responsibilities, lack of business information, religious and cultural constraints, limited mobility and exposure, lack of working capital, and more. The in-depth analysis of the literature indicates that despite the numerous barriers, the arrival of social media has played a crucial role in enabling women to start and scale up their enterprises. Further, technology innovation has given them access to relevant market information, increased reach, and the ability to network with customers, and has enabled them to achieve work-life balance and pursue the entrepreneur in them. The paper also describes the governmental and non-governmental initiatives for the promotion of women entrepreneurship. Finally, the study provides insights into the changing dynamics of women entrepreneurship in the current scenario and its future prospects.
Keywords: changing dynamics, government initiatives, literature review, social media, technology innovation, women entrepreneurship
Procedia PDF Downloads 154
943 African Women in Power: An Analysis of the Representation of Nigerian Business Women in Television
Authors: Ifeanyichukwu Valerie Oguafor
Abstract:
Women have generally been categorized and placed along the chain of the business industry, sometimes highly regarded and at other times barely regarded at all. The social construction of womanhood does not in every sense support a woman going into business, let alone succeeding in it, because it is believed to be a man's world. In a typical patriarchal setting, a woman is expected to know nothing more than domestic roles. For some women, this is not the case, as they have been able to break these barriers and excel in business amidst these social settings and stereotypes. This study examines the media representation of Nigerian business women, using content analysis of TV interviews as media texts and framing analysis as a qualitative methodological approach. The study analyses the media frames of two Nigerian business women: Folorunsho Alakija, a business woman in the petroleum industry with a current net worth of 1.1 billion U.S. dollars, who emerged as the richest black woman in the world in 2014, and Mosunmola Abudu, a media magnate in Nigeria who launched Africa's first global black entertainment and lifestyle network in 2013. The study used seven predefined frames: the business woman, the myth of business women, the non-traditional woman, women in leading roles, the family woman, the religious woman, and the philanthropist woman, to analyse the representation of Nigerian business women in the media. The analysis of these frames in TV interviews with the two women reveals that the media perpetually reproduce existing gender stereotypes and do not challenge patriarchy. Women face challenges in trying to succeed in business while trying to keep their homes stable. The study concludes that the media represent and reproduce gender stereotypes in spite of the expectation that they empower women, reducing these women's success to insignificance rather than presenting them as role models for women in society.
Keywords: representation of business women in the media, business women in Nigeria, framing in the media, patriarchy, women's subordination
Procedia PDF Downloads 158
942 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization
Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman
Abstract:
The Pareto approach to optimal solutions, which refers to a set of non-dominated solutions in the search space of multi-objective optimization problems, is adopted in this paper. The paper presents the optimal placement of Distributed Generation (DG) in radial distribution networks, with optimal sizing, for the minimization of power loss and voltage deviation as well as the maximization of the voltage profile of the networks. These problems are formulated using particle swarm optimization (PSO) as a constrained nonlinear optimization problem, with both the locations and sizes of the DG being continuous. The objective functions adopted are the total active power loss function and the voltage deviation function. The multi-objective nature of the problem made it necessary to form a combined objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The results indicate that the PSO technique has an edge over other types of search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus system and validated on the 33-bus test distribution network. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing distributed generators in the system; an improvement in the voltage profile and a reduction in power loss have also been achieved.
Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation
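A bare-bones PSO loop of the kind described above is sketched below: each particle encodes a candidate DG bus and size, and the fitness combines power loss and voltage deviation. The fitness used here is a synthetic surrogate (a real study would evaluate a load flow on the IEEE 33/34-bus feeders), so the numbers are purely illustrative.

```python
# PSO for DG placement/sizing with a surrogate fitness (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N_BUS, N_PART, ITERS = 33, 30, 100

def fitness(x):
    """x = [bus index (continuous, rounded later), DG size in MW]; surrogate objective."""
    bus, size = x
    loss = 0.2 + 0.01 * abs(bus - 18) + 0.05 * (size - 2.0) ** 2   # assumed loss surrogate
    v_dev = 0.02 + 0.001 * abs(bus - 18) + 0.01 * (size - 2.0) ** 2
    return loss + 10 * v_dev                                       # weighted combination

pos = np.column_stack([rng.uniform(2, N_BUS, N_PART), rng.uniform(0.1, 5.0, N_PART)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [2, 0.1], [N_BUS, 5.0])
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best bus ~", int(round(gbest[0])), ", DG size ~", round(gbest[1], 2), "MW")
```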
Procedia PDF Downloads 364
941 Facing Global Competition through Participation in Global Innovation Networks: The Case of Mechatronics District in the Veneto Region
Authors: Monica Plechero
Abstract:
Many firms belonging to Italian industrial districts faced a crisis starting from 2000 and worsening during 2008-2014. To remain competitive in the global market, these firms and their local systems need to renovate their traditional competitive advantages and strengthen their links with global flows of knowledge. This may be particularly relevant in sectors such as mechatronics, which combine a traditional knowledge domain with new knowledge domains (e.g. mechanics, electronics, and informatics). This sector is nowadays one of the key sectors within the so-called 'smart specialization strategy' that can lead part of traditional Italian industry towards new economic development opportunities. By investigating the mechatronics district of the Veneto region, this paper sheds new light on how firms of a local system can gain from the globalization of innovation and from innovation networks. Methodologically, the paper relies on primary data collected through a survey targeting firms of the local system, as well as on a number of qualitative case studies. The relevant role of medium-sized companies in the district emerges clearly, as they have wider opportunities to be involved in different processes of the globalization of innovation. Indeed, compared with small companies, the size of medium firms allows them to strategically exploit international markets and globally distributed knowledge. Supporting medium firms' global innovation strategies, and incentivizing their role as district gatekeepers, may strengthen the competitive capability of the local system and provide new opportunities to face global competition positively.
Keywords: global innovation network, industrial district, internationalization, innovation, mechatronics, Veneto region
Procedia PDF Downloads 230
940 An Investigation into the Impacts of High-Frequency Electromagnetic Fields Utilized in the 5G Technology on Insects
Authors: Veriko Jeladze, Besarion Partsvania, Levan Shoshiashvili
Abstract:
This paper addresses a very topical issue. The frequency range 2.5-100 GHz contains frequencies that have already been used or will be used in modern 5G technologies. The wavelengths used in 5G systems will be close to the body dimensions of small biological objects, particularly insects. Because the dimensions of insect bodies and body parts at these frequencies are comparable with the wavelength, high absorption of EMF energy in the body tissues can occur (body resonance) and can therefore cause harmful effects, possibly the extinction of some species. An investigation into the impact of the radio-frequency non-ionizing electromagnetic fields (EMF) utilized in future 5G on insects is of great importance, as a very high number of 5G network components will increase the total EMF exposure in the environment. All ecosystems of the earth are interconnected: if one component of an ecosystem is disrupted, the whole system will be affected (which could cause cascading effects). The study of these problems is an important challenge for scientists today because the existing studies are incomplete and insufficient. Consequently, the purpose of this proposed research is to investigate the possible hazardous impact of RF-EMFs (including 5G EMFs) on insects. The project will study the effects of these EMFs on various insects of different body sizes through computer modeling at frequencies from 2.5 to 100 GHz. The selected insects are the honey bee, the wasp, and the ladybug. For this purpose, detailed 3D discrete models of the insects are created for EM and thermal modeling through FDTD, and whole-body Specific Absorption Rates (SAR) will be evaluated at selected frequencies. All these studies represent a novelty. The proposed study will promote new investigations of the bio-effects of 5G EMFs and will contribute to the harmonization of safe exposure levels and frequencies for 5G EMFs.
Keywords: electromagnetic field, insect, FDTD, specific absorption rate (SAR)
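The whole-body SAR evaluated in FDTD studies of this kind is computed per voxel as SAR = σ|E|²/ρ and then mass-averaged; the short sketch below illustrates this, with rough assumed tissue values rather than measured insect parameters.

```python
# Local SAR from field, conductivity and density; values below are assumptions.
import numpy as np

def sar_w_per_kg(e_rms_v_per_m, sigma_s_per_m, rho_kg_per_m3):
    """Local SAR = conductivity * |E_rms|^2 / mass density."""
    return sigma_s_per_m * e_rms_v_per_m ** 2 / rho_kg_per_m3

# Hypothetical voxels of an insect model at a mmWave frequency
e = np.array([30.0, 55.0, 80.0])          # RMS electric field inside tissue, V/m
sigma = np.array([1.5, 2.0, 2.5])         # assumed tissue conductivities, S/m
rho = np.array([1000.0, 1050.0, 1100.0])  # tissue densities, kg/m^3

local = sar_w_per_kg(e, sigma, rho)
print("local SAR:", np.round(local, 3), "W/kg; whole-body mean:", round(local.mean(), 3))
```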
Procedia PDF Downloads 89
939 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column
Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan
Abstract:
Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. Scientists' knowledge of how oil is dispersed in the water column is among the weakest of all the processes affecting oil in the marine environment, which highlights the need for research and study in this field. Therefore, this study investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, led to the development of laboratory models in this research. In line with the aim of the present study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence produced by an oscillating grid, crude oil was poured onto the water surface once the desired conditions were reached, and its distribution into the water column due to turbulence was investigated. In this study, all experimental processes were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating-grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion in terms of turbulence parameters. The results showed that, with the characteristics of the present system in the two modes, static and grid motion at a frequency of 0.8 Hz, oil diffusion at the four mentioned depths at 0.8 Hz was 26.18, 57 31.5, 37.5 and 50% greater than in the static mode, from top to bottom. Also, after 2.5 minutes of the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.
Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill
Procedia PDF Downloads 73
938 Fight against Money Laundering with Optical Character Recognition
Authors: Saikiran Subbagari, Avinash Malladhi
Abstract:
Anti-Money Laundering (AML) regulations are designed to prevent money laundering and terrorist financing activities worldwide. Financial institutions around the world are legally obligated to identify, assess and mitigate the risks associated with money laundering and report any suspicious transactions to governing authorities. With increasing volumes of data to analyze, financial institutions seek to automate their AML processes. With the rise of financial crimes, optical character recognition (OCR), in combination with machine learning (ML) algorithms, serves as a crucial tool for automating AML processes by extracting the data from documents and identifying suspicious transactions. In this paper, we examine the utilization of OCR for AML and delve into the various OCR techniques employed in AML processes. These techniques encompass template-based, feature-based, neural network-based, natural language processing (NLP), hidden Markov models (HMMs), conditional random fields (CRFs), binarization, pattern matching and stroke width transform (SWT) approaches. We evaluate each technique, discussing its strengths and constraints. We also emphasize how OCR can improve the accuracy of customer identity verification by comparing the extracted text with the Office of Foreign Assets Control (OFAC) watchlist, and we discuss how OCR helps to overcome language barriers in AML compliance. Finally, we address the implementation challenges that OCR-based AML systems may face and offer recommendations for financial institutions based on data from previous research studies, which illustrate the effectiveness of OCR-based AML.
Keywords: anti-money laundering, compliance, financial crimes, fraud detection, machine learning, optical character recognition
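A hedged sketch of the OCR-plus-screening step described above is given below: text is extracted from a scanned document and fuzzy-matched against a watchlist. The pytesseract/Pillow stack, the similarity cutoff, and the sample watchlist entries are assumptions for illustration only.

```python
# OCR extraction followed by fuzzy watchlist screening (illustrative sketch).
import difflib
from PIL import Image
import pytesseract

WATCHLIST = ["JOHN DOE TRADING LLC", "ACME SHELL HOLDINGS"]   # fictitious stand-in entries

def screen_document(image_path, cutoff=0.85):
    """Extract text from a scanned document and flag lines resembling watchlist names."""
    text = pytesseract.image_to_string(Image.open(image_path))
    hits = []
    for line in text.upper().splitlines():
        match = difflib.get_close_matches(line.strip(), WATCHLIST, n=1, cutoff=cutoff)
        if match:
            hits.append((line.strip(), match[0]))
    return hits

if __name__ == "__main__":
    print(screen_document("scanned_invoice.png"))   # hypothetical input file
```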
Procedia PDF Downloads 144
937 Inclusive Education for Deaf and Hard-of-Hearing Students in China: Ideas, Practices, and Challenges
Authors: Xuan Zheng
Abstract:
China is home to one of the world’s largest Deaf and Hard of Hearing (DHH) populations. In the 1980s, the concept of inclusive education was introduced, giving rise to a unique “learning in regular class (随班就读)” model tailored to local contexts. China’s inclusive education for DHH students is diversifying with innovative models like special education classes at regular schools, regular classes at regular schools, resource classrooms, satellite classes, and bilingual-bimodal projects. The scope extends to preschool and higher education programs. However, the inclusive development of DHH students faces challenges. The prevailing pathological viewpoint on disabilities persists, emphasizing the necessity for favorable auditory and speech rehabilitation outcomes before DHH students can integrate into regular classes. In addition, inadequate support systems in inclusive schools result in poor academic performance and increased psychological disorders among the group, prompting a notable return to special education schools. Looking ahead, China’s inclusive education for DHH students needs a substantial shift from “learning in regular class” to “sharing equal regular education.” Particular attention should be devoted to the effective integration of DHH students who employ sign language into mainstream educational settings. It is crucial to strengthen regulatory frameworks and institutional safeguards, advance the professional development of educators specializing in inclusive education for DHH students, and consistently enhance resources tailored to this demographic. Furthermore, the establishment of a robust, multidimensional, and collaborative support network, engaging both families and educational institutions, is also a pivotal facet.
Keywords: deaf, hard of hearing, inclusive education, China
Procedia PDF Downloads 52
936 Understanding Tactical Urbanisms in Derelict Areas
Authors: Berna Yaylalı, Isin Can Traunmüller
Abstract:
This paper explores emergent bottom-up practices in the fields of architecture and urban design from the comparative perspective of two cities. As a temporary, easily affordable intervention that makes it possible to transform neglected spaces into vibrant public spaces, tactical urbanism, together with creative place-making strategies, presents alternative ways of creating sustainable developments in derelict and underused areas. This study examines the potential for social and physical development through a reading of case studies of two creative spatial practices: a pop-up garden transformed from an unused derelict space in Favoriten, Vienna, and an urban community garden in Kuzguncuk, Istanbul. The two cities were chosen for their multicultural populations and diversity: Istanbul was selected as a design city by the UNESCO Creative Cities Network in 2017, and Vienna was declared an open and livable city by its local government. This research uses media archives and reports, interviews with locals and local governments, site observations, and visual recordings as methods to provide a critical reading of creative public spaces from the perspective of local users in these neighborhoods. Reflecting on these emergent ways, this study aims to discuss the production process of tactical urbanism through the practices of locals and the decision-making process, with cases from İstanbul and Vienna. The comparison between their place-making strategies in tactical urbanism will give important insights for future developments.
Keywords: creative city, tactical urbanism, neglected area, public space
Procedia PDF Downloads 103
935 Economics Analysis of Chinese Social Media Platform Sina Weibo and E-Commerce Platform Taobao
Authors: Xingyue Yang
Abstract:
This study focused on Chinese social media stars and the relationship between their level of fame on the social media platform Sina Weibo and their sales revenue on the e-commerce platform Taobao/Tmall.com. This was viewed from the perspective of Adler's superstardom theory and the theories of Rosen and MacDonald examining the economics of celebrities who build their audience using digital, rather than traditional, platforms. Theory and empirical research support the assertion that stars of traditional media achieve popular success due to a combination of talent and market concentration, as well as a range of other factors. These factors are also generally considered relevant to the popularisation of social media stars. However, success across digital media platforms also involves other variables, such as upload strategies and cross-platform promotions, which often have no direct corollary in traditional media. These factors were the focus of our study, which investigated the relationship between popularity, promotional strategy and sales revenue for 15 social media stars who specialised in culinary topics on the Chinese social media platform Sina Weibo. In 2019, these food bloggers made a total of 2076 Sina Weibo posts, and these were compiled alongside calculations made to determine each food blogger's sales revenue on the e-commerce platforms Taobao/Tmall. Quantitative analysis was then performed on these data, which determined that certain upload strategies on Weibo, such as upload time, posting format and length of video, have an important impact on sales revenue on Taobao/Tmall.com.
Keywords: attention economics, digital media, network effect, social media stars
Procedia PDF Downloads 228
934 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies
Authors: Masoud Sheidai
Abstract:
Biologists specializing in population genetics and phylogeny have different research tasks, such as assessing populations' genetic variability and divergence, species relatedness, and the evolution of genetic and morphological characters, and identifying DNA SNPs with adaptive potential. To tackle these problems and reach a concise conclusion, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, different bioinformatic and statistical methods, which are based on various well-documented assumptions, have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out using clustering methods such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram; these are based on bootstrapping of the data. The association of different genes and DNA sequences with ecological and geographical variables is determined by LFMM (latent factor mixed models) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological characters differentiating the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and the related conclusions with examples from different edible and medicinal plant species.
Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis
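Two of the analyses listed above, K-means clustering of individuals and a PCoA-style ordination of a distance matrix, can be sketched as follows; the random binary marker matrix stands in for real genotype data, and the choice of Jaccard distance is an assumption.

```python
# K-means clustering and PCoA (metric MDS on a distance matrix) on stand-in marker data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
markers = rng.integers(0, 2, size=(60, 100))        # 60 individuals x 100 binary loci

# Species/population delineation with K-means
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(markers)

# PCoA: metric MDS on a Jaccard distance matrix
dist = squareform(pdist(markers, metric="jaccard"))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

print(clusters[:10])
print(coords[:3])
```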
Procedia PDF Downloads 122
933 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing
Authors: Robin Molinier
Abstract:
Industrial symbiosis (IS) deals with both substitution synergies (the exchange of waste materials, waste energy and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on the intensification of the use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and a pricing schedule) to commit to investments in capacities and transactions. Our model investigates the decision of two actors trying to cooperatively choose a level of infrastructure capacity oversizing in order to set up a plug-and-play offer for a potential entrant whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so that there is room for profitable overcapacity in the first period. The entrant's willingness to pay for access to the infrastructure depends on its standalone cost and on the capacity gap that it must complete in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractible from the entrant, we derive the expected complement cost function, which helps us define the investors' objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant's requirements. We then derive the general form of the solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy that a public actor would apply to foster such sharing synergies.
Keywords: capacity, cooperation, industrial symbiosis, pricing
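A purely numerical sketch of the trade-off described above is shown below, assuming a uniformly distributed entrant requirement, a concave (hence sub-additive) capacity cost, and a willingness to pay equal to the entrant's avoided cost; all functional forms and numbers are illustrative assumptions, not the model's closed-form solution.

```python
# Monte Carlo illustration of expected-yield-minus-oversizing-cost under assumptions.
import numpy as np

def capacity_cost(q):
    """Concave capacity cost, hence sub-additive (assumed functional form)."""
    return 10.0 * np.sqrt(q)

def expected_profit(oversize, lo=0.0, hi=10.0, n=20_000):
    """Expected yield extractible from the entrant minus the ex-ante oversizing cost."""
    demand = np.random.default_rng(0).uniform(lo, hi, n)    # entrant's requirement
    shortfall = np.maximum(demand - oversize, 0.0)
    complement_cost = capacity_cost(shortfall)              # cost to complete capacity ex-post
    standalone = capacity_cost(demand)
    willingness_to_pay = standalone - complement_cost       # value of accessing shared capacity
    return willingness_to_pay.mean() - capacity_cost(oversize)

grid = np.linspace(0.0, 10.0, 101)
profits = [expected_profit(q) for q in grid]
print("profit-maximizing oversizing ~", round(grid[int(np.argmax(profits))], 2))
```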
Procedia PDF Downloads 211
932 Immune Complex Components Act as Agents in Relapsing Fever Borrelia Mediated Rosette Formation
Authors: Mukunda Upreti, Jill Storry, Rafael Björk, Emilie Louvet, Johan Normark, Sven Bergström
Abstract:
Borrelia duttonii and most other relapsing fever species are Gram-negative bacteria which cause a blood-borne infection characterized by the binding of the bacteria to erythrocytes. The bacteria associate with two or more erythrocytes to form clusters of cells known as rosettes. Rosetting is a major virulence factor, and the mechanism is believed to facilitate the persistence of bacteria in the circulatory system and the avoidance of host immune cells through masking or steric hindrance effects. However, the molecular mechanisms of rosette formation are still poorly understood. This study aims at determining the molecules involved in the rosette formation phenomenon. Serum fractionated using different affinity purification methods was investigated as a rosetting agent, and IgG and at least one other serum component were needed for rosettes to form. An IgG titration curve demonstrated that IgG alone is not enough to restore rosette formation to the level that whole serum gives. IgG hydrolysis by IdeS (Immunoglobulin G-degrading enzyme of Streptococcus pyogenes) and deglycosylation using N-Glycanase proved that the whole IgG molecule, regardless of saccharide moieties, is critical for Borrelia-induced rosetting. Complement components C3 and C4 were also important serum molecules necessary to maintain optimum rosetting rates: deactivation of the complement network and serum depletion of C3 and C4 significantly reduced the rosette formation rate. The dependency on IgG and complement components also implied involvement of the complement receptor 1 (CR1). Rosette formation tests with Knops null RBC and sCR1 confirmed that CR1 is also part of Borrelia-induced rosette formation.
Keywords: complement components C3 and C4, complement receptor 1, Immunoglobulin G, Knops null, Rosetting
Procedia PDF Downloads 319
931 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of volatile, low-inertia renewable energy sources into the grid. In addition, in the case of a sudden fault, the system has less time to recover before widespread blackouts. Electric Vehicles (EVs) have the potential to contribute to Emergency Frequency Regulation (EFR) through nonlinear control of the power system in the case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases, and thus an aggregate model is necessary for a quick response to prevent excessive frequency deviation and the occurrence of a blackout. In this work, the aggregate of EVs in each area is modelled as a big virtual battery, considering various aspects of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs could help the generator lose its kinetic energy in a short time after a contingency. Early estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Characterizing the percentage contribution of EVs to EFR is thus the future goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
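The sketch below is a simplified numerical stand-in for the scheme above: the aggregate EV fleet in each area acts as a virtual battery whose charging/discharging power is proportional to the local frequency deviation, so the kinetic-energy-like Lyapunov term decays faster after a disturbance. The inertia, damping, tie-line, and gain values are assumptions, and the feedback rule is not the paper's control law.

```python
# Two-area swing dynamics with a simple aggregate-EV feedback (assumed parameters).
import numpy as np

M = np.array([10.0, 8.0])      # area inertia constants
D = np.array([1.0, 1.0])       # damping coefficients
T12 = 2.0                      # tie-line synchronizing coefficient
K_EV = np.array([5.0, 5.0])    # aggregate EV feedback gain (0 => no EV support)

def simulate(k_ev, dp_fault=(-1.0, 0.0), dt=0.01, t_end=10.0):
    dw = np.zeros(2)           # frequency deviations of the two areas
    dd = 0.0                   # tie-line angle difference
    energy = []
    for _ in np.arange(0.0, t_end, dt):
        p_ev = -k_ev * dw                                  # EV charge/discharge command
        p_tie = np.array([-T12 * dd, T12 * dd])
        dw += dt * (np.array(dp_fault) + p_tie + p_ev - D * dw) / M
        dd += dt * (dw[0] - dw[1])
        energy.append(0.5 * np.sum(M * dw ** 2))           # kinetic-energy-like Lyapunov term
    return np.array(energy)

print("final energy term without EVs:", round(simulate(np.zeros(2))[-1], 4))
print("final energy term with EVs:   ", round(simulate(K_EV)[-1], 4))
```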
Procedia PDF Downloads 99
930 Explainable Deep Learning for Neuroimaging: A Generalizable Approach for Differential Diagnosis of Brain Diseases
Authors: Nighat Bibi
Abstract:
The differential diagnosis of brain diseases by magnetic resonance imaging (MRI) is a crucial step in the diagnostic process, and deep learning (DL) has the potential to significantly improve the accuracy and efficiency of these diagnoses. This study focuses on creating an ensemble learning (EL) model that utilizes the ResNet50, DenseNet121, and EfficientNetB1 architectures to concurrently and accurately classify various brain conditions from MRI images. The proposed ensemble learning model identifies a range of brain disorders that encompass different types of brain tumors, as well as multiple sclerosis. The proposed model was trained on two open-source datasets, consisting of MRI images of glioma, meningioma, pituitary tumors, and multiple sclerosis. Central to this research is the integration of gradient-weighted class activation mapping (Grad-CAM) for model interpretability, aligning with the growing emphasis on explainable AI (XAI) in medical imaging. The application of Grad-CAM improves the transparency of the model's decision-making process, which is vital for clinical acceptance and trust in AI-assisted diagnostic tools. The EL model achieved an impressive 99.84% accuracy in classifying these various brain conditions, demonstrating its potential as a versatile and effective tool for differential diagnosis in neuroimaging. The model's ability to distinguish between multiple brain diseases underscores its significant potential in the field of medical imaging. Additionally, Grad-CAM visualizations provide deeper insights into the neural network's reasoning, contributing to a more transparent and interpretable AI-driven diagnostic process in neuroimaging.
Keywords: brain tumour, differential diagnosis, ensemble learning, explainability, grad-cam, multiple sclerosis
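A minimal sketch of the ensemble described above is given below: the softmax outputs of ResNet50, DenseNet121, and EfficientNetB1 heads are averaged over the same MRI input. The input size, the number of classes, the untrained weights, and the simple averaging rule are assumptions.

```python
# Averaging ensemble of three CNN backbones for multi-class MRI classification (sketch).
from tensorflow.keras import layers, Model, applications

NUM_CLASSES, INPUT_SHAPE = 4, (224, 224, 3)   # e.g. glioma, meningioma, pituitary, MS

def branch(backbone_fn):
    # One backbone with global average pooling and a softmax classification head
    base = backbone_fn(include_top=False, weights=None,
                       input_shape=INPUT_SHAPE, pooling="avg")
    return Model(base.input, layers.Dense(NUM_CLASSES, activation="softmax")(base.output))

inp = layers.Input(INPUT_SHAPE)
branches = [branch(applications.ResNet50),
            branch(applications.DenseNet121),
            branch(applications.EfficientNetB1)]
outputs = [b(inp) for b in branches]
ensemble = Model(inp, layers.Average()(outputs))   # simple mean of the three softmax outputs
ensemble.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
ensemble.summary()
```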
Procedia PDF Downloads 6
929 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have been facing a significant challenge when predicting rail infrastructure maintenance work over a long period of time. Generally, maintenance monitoring and prediction are conducted manually. With economic restrictions, the rail transport authorities are in pursuit of improved modern methods which can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models that minimize the human error strongly related to manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail, in order to minimize the maintenance cost/time and secure the vehicles. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes and the data collection is done both electronically and manually, it is possible to have some errors. Sometimes these errors make the data impossible to use in prediction model development, which is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks; accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is discussed before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 333
928 A Challenge to Acquire Serious Victims' Locations during Acute Period of Giant Disasters
Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki
Abstract:
In this paper, we report how to acquire the locations of seriously injured victims in the acute stage of large-scale disasters using an emergency information network system designed by us. The background of our concept is the Great East Japan Earthquake that occurred on March 11th, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, we have established advanced communication systems and advanced disaster medical response systems. However, Japan was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. We therefore recognized the need for interdisciplinary collaboration between experts in disaster medicine, regional administrative sociology, satellite communication technology, and systems engineering. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations, and their status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism and/or decision support system existed to allocate the appropriate number of doctors and to locate disaster victims. Even though doctors and rescue workers from multiple government organizations have their own dedicated communication systems, the systems are not interoperable.
Keywords: crisis management, disaster mitigation, messing, MGRS, military grid reference system, satellite communication system
Procedia PDF Downloads 234
927 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI) technology. Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with minimal mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We have developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99%, and 97% accuracy on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
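A hedged sketch of the cell-separation step mentioned above follows: a distance-transform watershed splits touching red cells in a binarized smear image into individual cells before classification. The thresholding choice, the peak spacing, and the input file are assumptions.

```python
# Distance-transform watershed to separate individual red cells in a smear image (sketch).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, color, filters, feature, segmentation, measure

img = io.imread("blood_smear.png")                       # hypothetical smear image
gray = color.rgb2gray(img)
mask = gray < filters.threshold_otsu(gray)               # cells assumed darker than background

distance = ndi.distance_transform_edt(mask)
peaks = feature.peak_local_max(distance, min_distance=15, labels=mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per detected cell center
labels = segmentation.watershed(-distance, markers, mask=mask)

cells = [r for r in measure.regionprops(labels) if r.area > 200]
print("segmented cells:", len(cells))
```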
Procedia PDF Downloads 18
926 Future Sustainable Mobility for Colorado
Authors: Paolo Grazioli
Abstract:
In this paper, we present the main results achieved during an eight-week international design project on Colorado Future Sustainable Mobility carried out at Metropolitan State University of Denver. The project was born with the intention of seizing the opportunity created by the Colorado government's plan to promote e-bike mobility by creating a large network of dedicated tracks. The project was supported by local entrepreneurs who offered financial and professional support. The main goal of the project was to engage design students with the skills to design a user-centered, original vehicle that would satisfy the unarticulated practical and emotional needs of "Gen Z" users by creating a fun, useful, and reliable life companion that helps users carry out their everyday tasks in a practical and enjoyable way. The project was carried out with the intention of proving the importance of combining creative methods with practical design methodologies towards the creation of an innovative yet immediately manufacturable product for a more sustainable future. The final results demonstrate the students' capability to create innovative and yet manufacturable products and, especially, their ability to create a new design paradigm for future sustainable mobility products. The design solutions explored in the project include collaborative learning and human-interaction design for future mobility. The findings of the research led students to the fabrication of two working prototypes that will be tested in Colorado and developed for manufacturing in the year 2024. The project showed that collaborative design and project-based teaching improve the quality of the outcome and can lead to the creation of real-life, innovative products directly from the classroom to the market.
Keywords: sustainable transportation design, interface design, collaborative design, user-centered design research, design prototyping
Procedia PDF Downloads 94
925 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates the prediction of the remaining life of industrial cutting tools used in the production process with deep learning methods. When the life of cutting tools decreases, they cause damage to the raw material they are processing, so this study aims to predict the remaining life of the cutting tool based on the damage caused by the cutting tools to the raw material. For this purpose, hole photos were collected from the hole-drilling machine for 8 months and labeled into 5 classes according to hole quality; in this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set, and a hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models are compared, the model using convolutional neural networks gives successful results with a 74% accuracy rate. In preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
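The hybrid variant mentioned above can be sketched as follows: a frozen CNN backbone extracts features from the hole photos and an SVM performs the five-class quality classification. The backbone choice, image size, and random stand-in data are assumptions, not the study's setup.

```python
# CNN feature extraction followed by an SVM classifier for hole-quality classes (sketch).
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Frozen, untrained backbone used purely as a feature extractor (assumed choice)
backbone = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                              input_shape=(128, 128, 3), pooling="avg")

# Stand-in for the labelled hole photos (5 quality classes)
images = np.random.rand(100, 128, 128, 3).astype("float32")
labels = np.random.randint(0, 5, 100)

features = backbone.predict(images, verbose=0)   # one feature vector per hole image
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```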
Procedia PDF Downloads 75