Search results for: well data integration
21692 Tectonostratigraphic, Paleogeography and Amalgamation of Sumatra Terranes, Indonesia
Authors: Syahrir Andi Mangga, Ipranta
Abstract:
Geological, paleomagnetic, geochemical and geophysical investigations in the Sumatra region have yielded new data, stimulating a reassessment of the stratigraphy, structure and tectonic evolution and supporting a geodynamic model for Sumatra. Sumatra lies on the southwestern margin of the Eurasian plate, within the Sundaland cratonic block, and formed by the amalgamation of allochthonous microplates, continental fragments, island arcs and accretionary/foreland complexes assembled prior to the Tertiary. The allochthonous rocks (terranes) can be divided into four terranes of Paleozoic to Mesozoic age, with different origins and lithologies, separated by sutures along major NW-SE-trending faults. The terranes are: the Tigapuluh-Bohorok terrane (East Sumatra block / Sibumasu block), Permo-Carboniferous in age, characterized by rock types formed in a glacio-marine setting and intruded by Late Triassic to Early Jurassic granitics, occupying the eastern part of Sumatra; paleomagnetic data indicate a paleolatitude of 41° South. The Tanjung Karang - Gunung Kasih terrane is composed of higher-grade metamorphic rocks assumed to be pre-Carboniferous in age, covered by Mesozoic sedimentary rocks and intruded by granitic-dioritic rocks, occupying the southern part of Sumatra; paleomagnetic data indicate 19° North. The Kuantan-Duabelas Mountain terrane (West Sumatra block) comprises metamorphic, sedimentary and volcanic rocks of Paleozoic-Mesozoic (Carboniferous-Triassic) age, contains a Cathaysian fauna and flora, and is intruded by Mesozoic granitoid rocks; it occupies the western part of Sumatra. Finally, the Gumai-Garba (Waloya) terrane comprises tectonite/melange, metasediment, carbonate and volcanic rocks of Mesozoic (Jurassic-Cretaceous) age, intruded by Late Cretaceous granitoid rocks; paleomagnetic data indicate 30°-31° South.
Keywords: tectonostratigraphy, amalgamation, allochthonous, terranes, Sumatra
Procedia PDF Downloads 345

21691 Organic Geochemical Characteristics of Cenozoic Mudstones, NE Bengal Basin, Bangladesh
Authors: H. M. Zakir Hossain
Abstract:
Cenozoic mudstone samples obtained from drill cores and outcrops in the northeastern Bengal Basin of Bangladesh were analyzed by organic geochemical methods to identify vertical variations in organic facies, thermal maturity, hydrocarbon potential and depositional environments. Total organic carbon (TOC) content ranges from 0.11 to 1.56 wt% with an average of 0.43 wt%, indicating a good source rock potential. Total sulphur content is variable, ranging from ~0.001 to 1.75 wt% with an average of 0.065 wt%. Rock-Eval S1 and S2 yields range from 0.03 to 0.14 mg HC/g rock and 0.01 to 0.66 mg HC/g rock, respectively. Hydrogen index values range from 2.71 to 56.09 mg HC/g TOC. These results reveal that the samples are dominated by Type III kerogen. Tmax values of 426 to 453 °C and vitrinite reflectance of 0.51 to 0.66% indicate that the organic matter is immature to mature. Saturated hydrocarbon biomarkers such as pristane, phytane, steranes and hopanes indicate mostly terrigenous organic matter with a small influence of marine organic matter. Based on the integration of biomarker proxies, organic matter in the succession accumulated under three different environmental conditions. First phase (late Eocene to early Miocene): deposition occurred entirely in seawater-dominated oxic conditions, with high inputs of land-plant organic matter including angiosperms. Second phase (middle to late Miocene): deposition occurred in freshwater-dominated anoxic conditions, with phytoplanktonic organic matter and a small influence of land plants. Third phase (late Miocene to Pleistocene): deposition occurred in oxygen-poor freshwater conditions, with abundant input of planktonic organic matter and a high influx of angiosperms. The lower part (middle Eocene to early Miocene) of the succession, with moderate TOC contents and primarily terrestrial organic matter, could have generated some condensates and oils in and around the study area.
Keywords: Bangladesh, geochemistry, hydrocarbon potential, mudstone
Procedia PDF Downloads 422

21690 The Water-Way Route Management for Cultural Tourism Promotion at Angsila District: Challenge and Opportunity
Authors: Teera Intararuang
Abstract:
The purpose of this research is to study the challenges and opportunities of waterway route management for promoting cultural tourism in Angsila District, Chonburi Province. To accomplish its goals and objectives, qualitative research was applied. The research instruments were observation, basic interviews, in-depth interviews and interviews with key local informants. The study also uses both primary and secondary data. The results reveal that all respondents appreciated and strongly agreed with promoting their waterway tourism route, intending it to further increase their income. However, the project faces challenges from natural obstacles such as water levels, seasons and high temperatures. It also lacks financial support from the government sector.
Keywords: Angsila community, waterway tourism route, cultural tourism, way of life
Procedia PDF Downloads 248

21689 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-frequency data sampled at 1/60 Hz (one reading per minute). The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW; to the best of our knowledge, few unsupervised techniques have been employed with low-frequency data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). Performance metrics are computed from the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
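As a rough illustration of the DTW matching step described above, the sketch below aligns a detected power segment with stored appliance signatures and picks the closest one; the signature values, window lengths and absolute-difference cost are invented for illustration and are not the authors' implementation.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D power traces."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# hypothetical 1/60 Hz signatures (watts, one value per minute-sample)
fridge_sig = np.array([0, 120, 125, 122, 118, 0], dtype=float)
kettle_sig = np.array([0, 2000, 2010, 0], dtype=float)
segment = np.array([0, 118, 123, 121, 0], dtype=float)  # detected event window

scores = {name: dtw_distance(segment, sig)
          for name, sig in [("fridge", fridge_sig), ("kettle", kettle_sig)]}
print(min(scores, key=scores.get))  # -> "fridge"
```

Because DTW warps the time axis, the match tolerates appliances whose cycles stretch or compress between observations, which is why it suits unsupervised signature comparison at low sampling rates.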
Procedia PDF Downloads 82

21688 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique
Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian
Abstract:
Hard disk drive (HDD) failures in an exascale supercomputing system may interrupt service, invalidate previous calculations and cause permanent data loss. Initiating corrective actions before hard drive failures materialize is therefore critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Learning-based methods generally perform poorly on training datasets with a long-tail distribution, and fault prediction is a classic example, given the scarcity of failure data. To overcome this problem, a new oversampling technique was employed to augment the data, and an improved CNN-LSTM with a shortcut was built to learn more effective features. The shortcut transmits the output of the previous CNN layer, which is used as the input to the LSTM model after weighted fusion with the output of the next layer. Finally, a detailed empirical comparison of six prediction methods on a public dataset is presented and discussed. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure and 0.90 MCC for a 10-day prediction horizon. The proposed algorithm is thus an efficient method for predicting HDD failure in supercomputing systems.
Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction
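A minimal sketch of the weighted-shortcut fusion described in the abstract, assuming Conv1d feature extractors over a per-disk window of daily health readings (e.g. SMART attributes); the layer sizes, the single learnable fusion weight and the sigmoid head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMShortcut(nn.Module):
    """CNN-LSTM where the previous CNN layer's output is fused with the
    next layer's output before entering the LSTM (the 'shortcut')."""
    def __init__(self, in_channels=1, hidden=64):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv1d(in_channels, 32, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv1d(32, 32, 3, padding=1), nn.ReLU())
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learnable fusion weight
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, channels, days)
        h1 = self.conv1(x)                 # previous CNN layer
        h2 = self.conv2(h1)                # next CNN layer
        fused = self.alpha * h1 + (1 - self.alpha) * h2  # weighted fusion
        out, _ = self.lstm(fused.permute(0, 2, 1))       # (batch, days, features)
        return torch.sigmoid(self.head(out[:, -1, :]))   # P(failure in horizon)

model = CNNLSTMShortcut()
window = torch.randn(8, 1, 10)             # 8 disks, 10 days of one attribute
print(model(window).shape)                 # torch.Size([8, 1])
```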
Procedia PDF Downloads 80

21687 Lightweight Synergy IoT Framework for Smart Home Healthcare for the Elderly
Authors: Huawei Ma, Wencai Du, Shengbin Liang
Abstract:
Smart home healthcare technologies for the elderly represent a transformative paradigm that leverages emerging technologies to monitor the elderly's health indicators and daily life, and to provide emergency calls, environmental monitoring, behavior perception and other services that ensure the health and safety of the elderly who are aging in their own homes. However, the excessive complexity of the commonly adopted frameworks has limited acceptance and adoption by the elderly. This paper therefore proposes a lightweight synergy architecture for IoT data and services in a smart home health environment for the elderly. It includes the modeling of IoT applications and their workflows, and paradigms for data interoperability, interaction and storage, to meet the growing needs of older people so that they can lead an active, fulfilling and quality life.
Keywords: smart home healthcare, IoT, independent living, lightweight framework
Procedia PDF Downloads 53

21686 Role of Information and Communication Technology in Pharmaceutical Innovation: Case of Firms in Developing Countries
Authors: Ilham Benali, Nasser Hajji, Nawfel Acha
Abstract:
The pharmaceutical sector faces various constraints related to research and development (R&D) costs, patent expiry, demand pressure, regulatory requirements and the development of generics, which drive the sector's leading firms to undergo technological change and shift to the biotechnological paradigm. Based on a large literature review, we present the background of the innovation trajectory in the pharmaceutical industry and the reasons behind this technological transformation. We then investigate the role that Information and Communication Technology (ICT) is playing in this revolution. To situate pharmaceutical firms in developing countries within this trajectory and to examine the degree of their involvement in the innovation process, we found no previous empirical work or data sources that would allow us to analyze this phenomenon. Therefore, for the case of Morocco, we built the dataset from scratch by gathering relevant data for the last five years from different sources. As a result, only about 4% of all innovative drugs that gained access to the local market in that period were made locally, which substantiates that the industrial model of the pharmaceutical sector in developing countries is based on the 'license model'. Finally, we present an alternative, based on ICT use and big data tools, that can allow developing countries to shift from the status of simple consumers to active actors in the innovation process.
Keywords: biotechnologies, developing countries, innovation, information and communication technology, pharmaceutical firms
Procedia PDF Downloads 151

21685 Factors Associated with Increase of Diabetic Foot Ulcers in Diabetic Patients in Nyahururu County Hospital
Authors: Daniel Wachira
Abstract:
The study aims to determine the factors contributing to the increasing rates of diabetic foot ulcers (DFU) among diabetes mellitus (DM) patients attending clinics at Nyahururu County Referral Hospital, Laikipia County. The study objectives are to determine the demographic, sociocultural and health facility factors contributing to increased rates of DFU among DM patients attending the DM clinic at Nyahururu County Referral Hospital. The study will adopt a descriptive cross-sectional design, collecting data at a single time point without follow-up; this method is fast and inexpensive, there is no loss to follow-up, and associations between variables can be determined. The study population includes all DM patients with or without DFU. Probability sampling will be used, specifically simple random sampling. The study will employ researcher-administered questionnaires to collect the required information. The questionnaire was developed in consultation with research experts (supervisors) to ensure reliability, and will be pre-tested by hand-delivering it to a sample equal to 10% of the sample size at J.M. Kariuki Memorial Hospital, Nyandarua County, collecting the completed copies, and then refining errors to ensure it is valid for collecting data relevant to this study. Data collection will begin after approval of the project. Questionnaires will be administered by the researcher only to participants who meet the selection criteria and agree to participate in the study, to collect key information related to the study objectives. Authority for the study will be obtained from the National Commission for Science, Technology and Innovation, and permission from the Nyahururu County Referral Hospital administration. The purpose of the study will be explained to respondents in order to secure informed consent; no names will be written on the questionnaires, and all information will be treated with maximum confidentiality, with neither the respondents' identities nor their information disclosed.
Keywords: diabetes, foot ulcer, social factors, hospital factors
Procedia PDF Downloads 17

21684 Finite Time Blow-Up and Global Solutions for a Semilinear Parabolic Equation with Linear Dynamical Boundary Conditions
Authors: Xu Runzhang, Yang Yanbing, Niu Yi, Zhang Mingyou, Liu Yu
Abstract:
For a class of semilinear parabolic equations with linear dynamical boundary conditions in a bounded domain, we obtain both global solutions and finite-time blow-up solutions as the initial data varies in the phase space H¹(Ω). Our main tools are the comparison principle, the potential well method and the concavity method. In particular, we discuss the behavior of the solutions for initial data at the critical and high energy levels.
Keywords: high energy level, critical energy level, linear dynamical boundary condition, semilinear parabolic equation
Procedia PDF Downloads 436

21683 An EWMA P-Chart Based on Improved Square Root Transformation
Authors: Saowanit Sukparungsee
Abstract:
The traditional Shewhart p-chart was developed for charting binomial data. It relies on the normal approximation, which holds only for low defect levels and small to moderate sample sizes; real applications, however, often depart from these assumptions because the exact distribution is skewed. In this paper, we propose a modified Exponentially Weighted Moving Average (EWMA) control chart for detecting a change in binomial data, based on an improved square root transformation: the ISRT-p EWMA control chart. The numerical results show that the ISRT-p EWMA chart is superior to the ISRT-p chart for small to moderate shifts, while the latter is better for large shifts.
Keywords: number of defects, exponentially weighted moving average, average run length, square root transformations
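As a rough illustration of the chart's mechanics, the sketch below applies the standard EWMA recursion z(i) = lam*y(i) + (1 - lam)*z(i-1) to square-root-transformed defect counts. The paper's improved (ISRT) transformation is not reproduced here; the common y = sqrt(x + 3/8) variance-stabilizing transform, lam = 0.2 and L = 3 are stand-in assumptions.

```python
import numpy as np

def ewma_sqrt_chart(defect_counts, lam=0.2, L=3.0):
    # stand-in square-root transformation (not the paper's ISRT)
    y = np.sqrt(np.asarray(defect_counts, dtype=float) + 3.0 / 8.0)
    mu0, sigma0 = y.mean(), y.std(ddof=1)   # in practice: in-control estimates
    z = np.empty_like(y)
    prev = mu0                               # start the EWMA at the in-control mean
    for i, yi in enumerate(y):
        prev = lam * yi + (1.0 - lam) * prev # EWMA recursion
        z[i] = prev
    half_width = L * sigma0 * np.sqrt(lam / (2.0 - lam))  # asymptotic limits
    return z, mu0 - half_width, mu0 + half_width

counts = np.random.binomial(n=50, p=0.02, size=100)  # simulated defect counts
z, lcl, ucl = ewma_sqrt_chart(counts)
print(np.flatnonzero((z < lcl) | (z > ucl)))         # out-of-control samples
```

The small smoothing constant is what makes the EWMA sensitive to small, sustained shifts, consistent with the comparison reported in the abstract.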
Procedia PDF Downloads 440

21682 Coordination of Traffic Signals on Arterial Streets in Duhok City
Authors: Dilshad Ali Mohammed, Ziyad Nayef Shamsulddin Aldoski, Millet Salim Mohammed
Abstract:
The increase in traffic congestion along urban signalized arterials calls for efficient traffic management. Applying traffic signal coordination can improve traffic operation and safety for a series of signalized intersections along an arterial. The objective of this study is to evaluate the benefits achievable through actuated traffic signal coordination and to compare control delay against the same signalized intersections operating in isolation. To this end, a series of eight signalized intersections located on two major arterials in Duhok City was chosen for the study. Traffic data (traffic volumes, link and approach speeds, and passenger car equivalents) were collected at peak hours, using methods including video recording, the moving vehicle method and manual counts. Geometric and signalization data were also collected. The coupling index was calculated to check whether coordination is attainable, and time-space diagrams were then constructed, representing one-way coordination for the intersections on Barzani and Zakho Streets and two-way coordination for the intersections on Zakho Street, with acceptable progression bandwidth efficiency. The results show a large progression bandwidth on Barzani Street of 54 seconds for eastbound coordination and 17 seconds for westbound coordination, under a suggested control speed of 60 km/h consistent with the observed data. For Zakho Street, the progression bandwidth is 19 seconds for eastbound and 18 seconds for westbound coordination, under a suggested control speed of 40 km/h. The results show that traffic signal coordination led to a large reduction in intersection control delay on both arterials.
Keywords: bandwidth, congestion, coordination, traffic, signals, streets
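To make the progression idea concrete, the sketch below computes the signal offsets that let a platoon travelling at the control speed arrive on green at each intersection along a one-way green wave; the intersection spacings and cycle length are hypothetical, not the Duhok measurements.

```python
# Offsets for a one-way green wave: each signal turns green when a platoon
# travelling at the control speed reaches it.
distances_m = [0.0, 350.0, 720.0, 1100.0]   # hypothetical intersection spacing
speed_ms = 60.0 / 3.6                        # 60 km/h control speed
cycle_s = 90.0                               # hypothetical common cycle length

offsets_s = [round((d / speed_ms) % cycle_s, 1) for d in distances_m]
print(offsets_s)  # green-start offsets relative to the first signal
```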
Procedia PDF Downloads 307

21681 Process Optimization for Albanian Crude Oil Characterization
Authors: Xhaklina Cani, Ilirjan Malollari, Ismet Beqiraj, Lorina Lici
Abstract:
Oil characterization is an essential step in the design, simulation and optimization of refining facilities. To achieve optimal crude selection and processing decisions, a refiner must have exact information about crude oil quality. This includes the crude oil TBP (true boiling point) curve, the main input for correct operation of refinery crude oil atmospheric distillation plants. Crude oil is typically characterized based on a distillation assay. This procedure is reasonably well defined and is based on representing the mixture of actual components that boil within a boiling point interval by hypothetical components that boil at the average boiling temperature of the interval. The crude oil assay typically includes TBP distillation according to ASTM D-2892, which can characterize the fraction of the oil boiling up to a 400 °C atmospheric equivalent boiling point. To model the yield curves obtained by physical distillation, it is necessary to compare the differences between the modeled and experimental data. Most commercial simulators use different numbers of components and pseudo-components to represent crude oil. Laboratory tests include distillations, vapor pressures, flash points, pour points, cetane numbers, octane numbers, densities and viscosities. The aim of the study is to draw true boiling point curves for different crude oil resources in Albania and to compare the differences between the modeled and experimental data for optimal characterization of the crude oil.
Keywords: TBP distillation curves, crude oil, optimization, simulation
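A minimal sketch of the pseudo-component idea the abstract describes: the TBP curve is interpolated and cut into boiling-range intervals, each represented by a hypothetical component boiling at the interval's average temperature. The assay points below are invented for illustration, not an Albanian crude assay.

```python
import numpy as np

# hypothetical TBP assay: cumulative vol% distilled vs boiling temperature (°C)
vol_pct = np.array([0, 10, 30, 50, 70, 90, 100], dtype=float)
tbp_c = np.array([35, 90, 160, 240, 330, 420, 520], dtype=float)

# cut into pseudo-components of equal 10 vol% width; each pseudo-component
# boils at the average temperature of its interval
cuts = np.arange(0, 101, 10)
t_at_cuts = np.interp(cuts, vol_pct, tbp_c)
pseudo_tb = 0.5 * (t_at_cuts[:-1] + t_at_cuts[1:])   # normal boiling points
print(np.round(pseudo_tb, 1))                        # one value per 10 vol% cut
```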
Procedia PDF Downloads 304

21680 Design of a Photovoltaic Power Generation System Based on Artificial Intelligence and Internet of Things
Authors: Wei Hu, Wenguang Chen, Chong Dong
Abstract:
To improve the efficiency and safety of photovoltaic power generation devices, this photovoltaic power generation system combines Artificial Intelligence (AI) and the Internet of Things (IoT) to control a sun-tracking photovoltaic device, improving power generation efficiency, and to manage the converted energy. The system uses an artificial intelligence control terminal; the power generation devices at the executive end run Linux on an Exynos4412 CPU. Each power generating device collects images of the sun through a Sony CCD. The devices feed their data back to the CPUs for processing, and the CPUs send the data to the AI control terminal over the Internet. The control terminal integrates executive-end information, time information and environmental information to decide whether to generate electricity normally, and then whether to feed the converted electrical energy into the grid or store it in the battery pack. When the power generation environment is abnormal, the control terminal authorizes a protection strategy: the executive end stops power generation and enters a self-protection posture, and the control terminal synchronizes the data with the cloud. The resulting system is more intelligent and more adaptive, and has a longer life.
Keywords: photovoltaic power generation, pursuit of light, artificial intelligence, internet of things, photovoltaic array, power management
Procedia PDF Downloads 123

21679 Sustainable Membranes Based on 2D Materials for H₂ Separation and Purification
Authors: Juan A. G. Carrio, Prasad Talluri, Sergio G. Echeverrigaray, Antonio H. Castro Neto
Abstract:
Hydrogen, as a fuel and environmentally friendly energy carrier, is part of the transition towards low-carbon systems. The extensive deployment of hydrogen production, purification and transport infrastructure still presents significant challenges. Regardless of the production process, hydrogen is generally mixed with light hydrocarbons and other undesirable gases that need to be removed to obtain H₂ with the purity required for end applications. In this context, membranes are one of the simplest, most attractive, sustainable and performant technologies enabling hydrogen separation and purification. They demonstrate high separation efficiencies and low energy consumption in operation, a significant leap compared with current energy-intensive options. The unique characteristics of 2D laminates have given rise to a diversity of research on their potential applications in separation systems; in particular, the scientific literature already reports that graphene oxide-based membranes present the highest reported selectivity of H₂ over other gases. This work explores the potential of a new type of 2D-materials-based membrane in separating H₂ from CO₂ and CH₄. We have developed nanostructured composites based on 2D materials and used them to fabricate membranes that maximize H₂ selectivity and permeability for different gas mixtures by adjusting the membranes' characteristics. Our proprietary technology does not depend on specific porous substrates, which allows its integration into diverse separation modules with different geometries and configurations, addressing both the technical performance required for industrial applications and economic viability. Tuning and precise control of the processing parameters allowed us to keep membrane thicknesses below 100 nanometres to provide high permeabilities. Our selectivity results for the new nanostructured 2D-materials-based membranes are in the range reported in the literature for 2D materials (such as graphene oxide) applied to hydrogen purification, which validates their use as one of the most promising next-generation hydrogen separation and purification solutions.
Keywords: membranes, 2D materials, hydrogen purification, nanocomposites
Procedia PDF Downloads 134

21678 Nonlinear Evolution on Graphs
Authors: Benniche Omar
Abstract:
We are concerned with abstract fully nonlinear differential equations of the form y'(t) = Ay(t) + f(t, y(t)), where A is an m-dissipative operator (possibly multi-valued) defined on a subset D(A) of a Banach space X with values in X, and f is a given function defined on I×X with values in X. We consider a graph K in I×X. Recall that K is said to be viable with respect to the above abstract differential equation if for each initial datum in K there exists at least one trajectory starting from that initial datum and remaining in K at least for a short time. The viability problem has been studied by many authors using various techniques and frameworks. If K is closed, it is known that a tangency condition, mainly linked to the dynamics, is crucial for viability; when X is infinite-dimensional, compactness and convexity assumptions are needed. In this paper, we are concerned with the notion of near viability of a given graph K with respect to y'(t) = Ay(t) + f(t, y(t)). Roughly speaking, the graph K is said to be near viable if for each initial datum in K there exists at least one trajectory remaining arbitrarily close to K at least for a short time. It is interesting to note that near viability is equivalent to an appropriate tangency condition under mild assumptions on the dynamics; adding natural convexity and compactness assumptions on the dynamics, we may recover (exact) viability. Here we investigate near viability for a graph K in I×X with respect to y'(t) = Ay(t) + f(t, y(t)), where A and f are as above. We emphasize that the t-dependence of the perturbation f leads us to introduce a new tangency concept. On the basis of tangency conditions expressed in terms of this concept, we formulate criteria for K to be near viable with respect to y'(t) = Ay(t) + f(t, y(t)). As an application, an abstract null-controllability theorem is given.
Keywords: abstract differential equation, graph, tangency condition, viability
Procedia PDF Downloads 145

21677 Implementation Status of Industrial Training for Production Engineering Technology Diploma in Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI)
Authors: M. Sazali Said, Rahim Jamian, Shahrizan Yusoff, Shahruzaman Sulaiman, Jum'Azulhisham Abdul Shukor
Abstract:
This case study focuses on the role of Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI) in producing technologists to reduce the shortage of skilled workers, especially in the automotive industry. The study therefore examines the effectiveness of the Technical Education and Vocational Training (TEVT) curriculum of UniKL MSI in producing graduates who can immediately be productively employed by the automotive industry. The approach used is a performance evaluation of students attending the Industrial Training Attachment (INTRA). The sample comprises 37 students, 16 university supervisors and 26 industrial supervisors. The research methodology combines quantitative and qualitative data collection through a triangulation approach: quantitative data were gathered from the students, university supervisors and industrial supervisors through a questionnaire, while qualitative data were obtained from the students and university supervisors through interviews and observation. Both types of data were processed and analyzed to summarize the results as frequencies and percentages using a computerized spreadsheet. The results show that industrial supervisors were satisfied with the students' performance, while university supervisors rated the UniKL MSI curriculum as moderately effective in producing graduates with appropriate skills and in meeting industrial needs. Several weaknesses in the curriculum were identified during the study for further continuous improvement. Recommendations for curriculum improvement include enhancing students' technical skills and competences towards fulfilling the needs and demands of the automotive industry.
Keywords: technical education and vocational training (TEVT), industrial training attachment (INTRA), curriculum improvement, automotive industry
Procedia PDF Downloads 368

21676 A Longitudinal Survey Study of Izmir Commuter Rail System (IZBAN)
Authors: Samet Sen, Yalcin Alver
Abstract:
Before the Izmir Commuter Rail System (IZBAN) opened, most respondents along the railway made their trips by city bus, minibus or private car. After IZBAN was put into service, some people changed their previous trip behaviors and started travelling by IZBAN, creating a large travel demand. In this study, the characteristics of passengers and their trip behaviors are identified from longitudinal data collected in two trip-survey waves. One year after IZBAN's opening, the first wave was carried out among 539 passengers at six stations during the morning peak hours of 07.00-09.30 am. The second wave was carried out among 669 passengers at the same six stations two years after the first wave, during the same morning peak hours. This study yields the respondents' socio-economic characteristics, the distribution of trips by region, the impact of IZBAN on transport modes, changes in travel time and travel cost, and satisfaction data. These data enabled the two waves to be compared and the changes in socio-economic factors and trip behaviors to be explained. In both waves, 10% of the respondents stopped driving their own cars and started to take IZBAN, an important development in solving traffic problems: more public transportation means less traffic congestion.
Keywords: commuter rail system, comparative study, longitudinal survey, public transportation
Procedia PDF Downloads 435

21675 Happiness of Thai People: An Analysis by Socioeconomic Factors
Authors: Kalayanee Senasu
Abstract:
This research investigates Thai people's happiness based on socioeconomic factors: region, municipality, gender, age and occupation. The research data were collected through interviewer-administered questionnaires. The primary data came from stratified multi-stage sampling in each region, province, district and enumeration area, with simple random sampling within each enumeration area. Data were collected in 13 provinces (Bangkok and three provinces in each of the four regions) over two consecutive years, yielding 3,217 usable responses from the 2017 sample and 3,280 from the 2018 sample. Senasu's Thai Happiness Index (THaI) was used to calculate the happiness level of Thai people in 2017 and 2018. The index comprises five dimensions: subjective well-being, quality of life, philosophy of living, governance, and standard of living. The results show a 2017 happiness value of 0.506, with Thai people happier in 2018 (THaI = 0.556). For 2017, people in the Central region have the highest happiness (THaI = 0.532), followed closely by the Bangkok Metropolitan Area (THaI = 0.530); people in the North have the lowest happiness (THaI = 0.476), close to the Northeast (THaI = 0.479). Comparing age groups, people aged 25-29 are the happiest (THaI = 0.529), followed by those aged 55-59 and 35-39 (THaI = 0.526 and 0.523, respectively). People in municipal areas are happier than those in non-municipal areas (THaI = 0.533 vs. 0.475), males are happier than females (THaI = 0.530 vs. 0.482), and retired people, entrepreneurs and government employees are all in the high-happiness groups (THaI = 0.614, 0.608 and 0.593, respectively). For 2018, people in the Northern region have the highest happiness (THaI = 0.590), followed closely by the South and the Bangkok Metropolitan Area (THaI = 0.578 and 0.577, respectively); people in the Central region have the lowest happiness (THaI = 0.530), close to the Northeast (THaI = 0.533). Comparing age groups, people aged 35-39 are the happiest (THaI = 0.572), followed by those aged 40-44 and 60-64 (THaI = 0.569 and 0.568, respectively). As in 2017, people in municipal areas are happier than those in non-municipal areas (THaI = 0.567 vs. 0.552). However, males and females are happy at about the same level (THaI = 0.561 vs. 0.560), and government employees, retired people and state enterprise employees are all in the high-happiness groups (THaI = 0.667, 0.639 and 0.661, respectively).
Keywords: happiness, quality of life, Thai happiness index, socio-economic factors
Procedia PDF Downloads 114

21674 Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads
Authors: Kayijuka Idrissa
Abstract:
This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, using data collected from the National Police of Rwanda in 2012. Several mathematical models were used to analyze and compare traffic variables. The work was carried out on Kigali roads, specifically at the roundabouts from Kigali Business Center (KBC) to Prince House, as the study sites. We used mathematical tools to analyze the collected data and to understand the relationships between traffic variables, and we applied the Poisson distribution to model the number of accidents occurring on the section of road from KBC to Prince House. The results show that accidents occurred at very high rates in 2012 because this section has a very narrow single lane on each side, which leads to heavy congestion and, consequently, frequent accidents. Using the speed and density data collected on this section, we found that as density increases, speed decreases, and at the jam density the speed becomes zero. The approach is promising for capturing sudden changes in flow patterns and is open to use in a range of intelligent management strategies, especially in non-recurrent congestion detection and control.
Keywords: statistical methods, traffic flow, Poisson distribution, car-moving techniques
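The speed-zero-at-jam-density observation matches the classical Greenshields linear model; the short sketch below pairs it with a Poisson accident-count calculation. The free-flow speed, jam density and accident rate are hypothetical values, not the Kigali data.

```python
import numpy as np
from scipy.stats import poisson

def greenshields_speed(k, v_free=60.0, k_jam=120.0):
    """Linear speed-density relation: speed is zero at the jam density."""
    return v_free * (1.0 - k / k_jam)     # km/h at density k (veh/km)

print(greenshields_speed(np.array([0.0, 60.0, 120.0])))  # [60. 30.  0.]

# Poisson accident model: with a hypothetical mean of 3 accidents per month,
# the probability of observing 5 or more accidents in a month is
lam = 3.0
print(1.0 - poisson.cdf(4, lam))
```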
Procedia PDF Downloads 282

21673 Women Entrepreneurs in Health Care: An Exploratory Study
Authors: Priya Nambisan, Lien B. Nguyen
Abstract:
Women participate extensively in the healthcare field, professionally (as physicians, nurses, dietitians, etc.) as well as informally (as caregivers at home). This gives them a better understanding of people's health needs. Women are also at the forefront of using social media and other mobile health-related apps, and many mobile health apps are specifically designed for women users. All of this indicates the potential for women to be successful entrepreneurs in healthcare, especially in mobile health app development. However, extant entrepreneurship research has paid limited attention to women's entrepreneurship in healthcare. The objective of this study is to determine the key factors that shape the intentions and actions of women entrepreneurs in their entrepreneurial pursuits in the healthcare field. Specifically, the study advances several hypotheses relating key variables, such as personal skills and capabilities, experience, support from institutions and family, and perceptions of entrepreneurship, to individual intentions and actions regarding entrepreneurship (specifically, in the area of mobile apps). The research model will be validated using survey data collected from potential women entrepreneurs in the healthcare field: students in health informatics and engineering. The questionnaire-based survey addresses women respondents' intention to become entrepreneurs in healthcare and the key factors (independent variables) that may facilitate or inhibit their entrepreneurial intentions and pursuits. Survey data collection is currently ongoing. We also plan to conduct semi-structured interviews with around 10-15 women entrepreneurs who are currently developing mobile apps, to understand the key issues and challenges they face in this area. As this is an exploratory study, our goal is to combine the findings from the regression analysis of the survey data with those from the content analysis of the interview data to inform future research on women's entrepreneurship in healthcare. The study findings will hold important policy implications, particularly for the development of new programs and initiatives to promote women's entrepreneurship in healthcare and technology.
Keywords: women entrepreneurship, healthcare, mobile apps, health apps
Procedia PDF Downloads 452

21672 American Sign Language Recognition System
Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba
Abstract:
The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of the system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of sign language gestures. The system processes live input, extracting critical features through these neural network models and combining them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances as well as broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges, such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset, were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.
Keywords: sign language, computer vision, vision transformer, VGG16, CNN
Procedia PDF Downloads 43

21671 Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application sharkPulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images, and packaged the object-detection and image-classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common shark data-generation approaches: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested how genus and species prediction correctness depends on training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). It sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector's accuracy, as well as facilitate archiving of historical and novel shark observations. The base accuracy of genus prediction was 68% across 25 genera, and the average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. The prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page.
Keywords: classification, data mining, Instagram, remote monitoring, sharks
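A minimal sketch of the transfer-learning step the abstract describes: a CNN pretrained on generic images is reused as a feature extractor and only a new classification head is trained on shark classes. The ResNet-50 backbone is an assumption for illustration; the abstract does not specify the Shark Detector's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

num_species = 45                         # the abstract's final species count
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in backbone.parameters():          # freeze the pretrained features
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, num_species)  # new trainable head

images = torch.randn(4, 3, 224, 224)     # dummy batch standing in for shark crops
logits = backbone(images)
print(logits.shape)                      # torch.Size([4, 45])
```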
Procedia PDF Downloads 121

21670 GPS Refinement in Cities Using Statistical Approach
Authors: Ashwani Kumar
Abstract:
GPS plays an important role in everyday life, supporting safe and convenient transportation. While pedestrians use handheld devices to know their position in a city, vehicles in intelligent transport systems use relatively sophisticated GPS receivers to estimate their current position. However, in urban areas where GPS satellites are occluded by tall buildings and trees, and GPS signals are reflected from nearby vehicles, GPS position estimation becomes poor. In this work, an extensive set of GPS data was collected at a single point in an urban area, at different times of day and under dynamic environmental conditions. The data were analyzed, and statistical refinement methods were used to obtain an optimal position estimate from all the measured positions. The results were compared with publicly available datasets, and the refined position estimates obtained are promising.
Keywords: global positioning system, statistical approach, intelligent transport systems, least squares estimation
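A small sketch of the statistical refinement idea, assuming repeated fixes at one static point: the least-squares estimate of the position is the sample mean, while the median is more robust to multipath outliers. The coordinates and noise level below are invented, not the collected dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 hypothetical (lat, lon) fixes scattered around a fixed urban point
fixes = rng.normal(loc=[47.6205, -122.3493], scale=1e-4, size=(500, 2))

ls_estimate = fixes.mean(axis=0)            # least squares: minimizes sum of squared residuals
robust_estimate = np.median(fixes, axis=0)  # less sensitive to reflected-signal outliers
print(ls_estimate, robust_estimate)
```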
Procedia PDF Downloads 288

21669 Impact of Urbanization on the Performance of Higher Education Institutions
Authors: Chandan Jha, Amit Sachan, Arnab Adhikari, Sayantan Kundu
Abstract:
The purpose of this study is to evaluate the performance of Higher Education Institutions (HEIs) in India and to examine the impact of urbanization on their performance. Data Envelopment Analysis (DEA) is used, with the required performance data collected from the National Institutional Ranking Framework web portal. The performance of the HEIs is evaluated with two different DEA models: in the first, the institutes' geographic locations are categorized as Urban vs. Non-Urban, while in the second they are classified into three categories, Urban, Semi-Urban and Non-Urban. The findings provide several insights into the relationship between the degree of urbanization and the performance of HEIs.
Keywords: DEA, higher education, performance evaluation, urbanization
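A rough sketch of the DEA building block, assuming the standard input-oriented CCR model in multiplier form solved as a linear program; the toy inputs and outputs below are invented and are not the ranking-framework data.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).

    X: (n, m) inputs, Y: (n, s) outputs for n decision-making units.
    Maximize u.y_o subject to v.x_o = 1 and u.y_j - v.x_j <= 0 for all j.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])           # linprog minimizes, so negate
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                    # efficiency score in (0, 1]

# toy data: 5 HEIs, inputs = (faculty, budget), output = (graduates,)
X = np.array([[20., 5.], [30., 8.], [25., 6.], [40., 12.], [22., 5.5]])
Y = np.array([[200.], [280.], [260.], [300.], [210.]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(5)])
```

Grouping the efficiency scores by Urban/Semi-Urban/Non-Urban location then lets the urbanization effect be compared across the two categorizations.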
Procedia PDF Downloads 215

21668 Image Based Landing Solutions for Large Passenger Aircraft
Authors: Thierry Sammour Sawaya, Heikki Deschacht
Abstract:
In commercial aircraft operations, almost half of all accidents happen during the approach or landing phases. Automatic guidance and automatic landing have proven to add significant safety value in this challenging phase of flight. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, further expanding the possibility of performing automatic approaches and landings on runways where the current guidance systems are either not fitted or not optimal. Current automated landing systems often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage, and these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent of these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they lead the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system that guides an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent of radio signals and without precision landing instruments. Within this project, ScioTeq is responsible for developing the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as testing and integrating the developed equipment in a representative Large Passenger Aircraft environment. This paper describes the system as well as the associated methods and tools developed for validation and verification.
Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing
Procedia PDF Downloads 101

21667 Experiments to Study the Vapor Bubble Dynamics in Nucleate Pool Boiling
Authors: Parul Goel, Jyeshtharaj B. Joshi, Arun K. Nayak
Abstract:
Nucleate boiling is characterized by the nucleation, growth and departure of tiny individual vapor bubbles that originate in cavities or imperfections in the heating surface. It finds a wide range of applications, e.g. in heat exchangers and steam generators, core cooling in power reactors and rockets, and cooling of electronic circuits, owing to its highly efficient transfer of large heat fluxes over small temperature differences. It is therefore important to be able to predict the rate of heat transfer and the safety-limit heat flux (the critical heat flux; fluxes higher than this can damage the heating surface) for any given system. A large number of experimental and analytical works exist in the literature, based on the idea that knowledge of bubble dynamics at the microscopic scale can lead to an understanding of the full picture of boiling heat transfer. However, the existing data in the literature are scattered over various sets of conditions and are often in disagreement with each other; the correlations obtained from such data are also limited to the range of conditions for which they were established, and no single correlation is applicable over a wide range of parameters. More recently, a number of researchers have been trying to remove empiricism from heat transfer models to arrive at more phenomenological models using extensive numerical simulations; these models require state-of-the-art experimental data over a wide range of conditions, first as input and later for validation. With this in mind, experiments with sub-cooled and saturated demineralized water were carried out at atmospheric pressure to study bubble dynamics in nucleate pool boiling: growth rates, departure sizes and frequencies. A number of heating elements were used to study the dependence of vapor bubble dynamics on heater surface finish and heater geometry, along with experimental conditions such as the degree of sub-cooling, superheat and heat flux. The data obtained are compared with the existing data and correlations in the literature in order to generate an exhaustive database for pool boiling conditions.
Keywords: experiment, boiling, bubbles, bubble dynamics, pool boiling
Procedia PDF Downloads 302

21666 The Digitalization of Occupational Health and Safety Training: A Fourth Industrial Revolution Perspective
Authors: Deonie Botha
Abstract:
Digital transformation and the digitalization of occupational health and safety training have grown exponentially due to a variety of contributing factors. The literature suggests that digitalization has numerous benefits but also associated challenges. The aim of this paper is to develop an understanding of both the perceived benefits and the challenges of digitalization in an occupational health and safety context, in an effort to design and develop e-learning interventions that optimize the benefits of digitalization and address its challenges. The paper proposes, deliberates on and tests the design principles of an e-learning intervention to ensure alignment with the requirements of a digitally transformed environment. The results are based on a literature review of the requirements and effects of the Fourth Industrial Revolution on learning, and e-learning in particular. The findings of the literature review are enhanced with empirical research in the form of a case study conducted in an organization that designs and develops e-learning content in the occupational health and safety industry. The primary findings indicate that: (i) the e-learning requirements of learners and organizations differ from those of a pre-Fourth Industrial Revolution work setting; (ii) the design principles of an e-learning intervention need to be aligned with the entire value chain of the organization; (iii) digital twins support and enhance the design and development of e-learning; (iv) learning should incorporate a multitude of sensory experiences and not rely only on visual stimulation; and (v) data generated by e-learning interventions should be incorporated into big data streams to be analyzed and become actionable. It is therefore concluded that there is general consensus on the requirements e-learning interventions must meet in a digitally transformed occupational health and safety work environment. The challenge remains for organizations to incorporate the data generated by e-learning interventions into the digital ecosystem of the organization.
Keywords: digitalization, training, fourth industrial revolution, big data
Procedia PDF Downloads 156

21665 Non-Parametric, Unconditional Quantile Estimation of Efficiency in Microfinance Institutions
Authors: Komlan Sedzro
Abstract:
We apply the non-parametric, unconditional, hyperbolic order-α quantile estimator to appraise the relative efficiency of microfinance institutions (MFIs) in Africa in terms of outreach. Our purpose is to verify whether these institutions, which must constantly strike a compromise between their social role and financial sustainability, are operationally efficient. Using data on African MFIs extracted from the Microfinance Information eXchange (MIX) database and covering the 2004-2006 period, we find that the more efficient MFIs are also the most profitable. This result is in line with the view that social performance is not in contradiction with the pursuit of excellent financial performance. Our results also show that MFIs that are large in terms of assets, and those charging the highest fees, are not necessarily the most efficient.
Keywords: data envelopment analysis, microfinance institutions, quantile estimation of efficiency, social and financial performance
Procedia PDF Downloads 311

21664 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology
Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani
Abstract:
Multiple measurements with various data acquisition systems are generally required to measure the shape of freeform workpieces with accuracy, reliability and completeness. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods such as Iterative Closest Point (ICP) and its variants are established for fine registration. For coarse registration, no conventional method has yet been adopted, despite the significant number of techniques developed in the literature to supply an automatic rough matching between data sets. This paper addresses two main issues: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed that reduces registration error using curvature parameters: a specific distance reflecting curvature similarity is combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function is improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights, determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The results obtained demonstrate the benefit of the proposed novel curvature-based registration methods.
Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography
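A compact sketch of one fine-registration iteration under the combined-distance idea: correspondences are searched in a space that augments the 3-D coordinates with a curvature coordinate, so matching minimizes ||p - q||² + w·(κp - κq)², and the rigid transform is then recovered by the standard SVD (point-to-point) solution. The weight w and the k-d tree search are assumptions; the paper's automatic P-P/P-Pl weighting is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst, src_curv, dst_curv, w=0.1):
    """One ICP iteration with a curvature-augmented correspondence distance."""
    aug_src = np.hstack([src, np.sqrt(w) * src_curv[:, None]])
    aug_dst = np.hstack([dst, np.sqrt(w) * dst_curv[:, None]])
    _, idx = cKDTree(aug_dst).query(aug_src)   # curvature-aware matching
    q = dst[idx]
    # best rigid transform (point-to-point, SVD/Kabsch solution)
    mu_p, mu_q = src.mean(0), q.mean(0)
    H = (src - mu_p).T @ (q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_q - R @ mu_p
    return src @ R.T + t, R, t

rng = np.random.default_rng(0)
dst = rng.normal(size=(200, 3))
curv = rng.normal(size=200)                    # per-point discrete curvature
src = dst + 0.01                               # nearly aligned copy of the target
moved, R, t = icp_step(src, dst, curv, curv)
```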
Procedia PDF Downloads 424

21663 Whole Exome Sequencing Data Analysis of Rare Diseases: Non-Coding Variants and Copy Number Variations
Authors: S. Fahiminiya, J. Nadaf, F. Rauch, L. Jerome-Majewska, J. Majewski
Abstract:
Background: Sequencing of the protein-coding regions of the human genome (whole exome sequencing; WES) has been highly successful in identifying causal mutations for several rare genetic disorders in humans. Generally, most WES studies have focused on rare variants in coding exons and splice sites, where missense substitutions lead to alteration of the protein product. Although focusing on this category of variants has solved the mystery behind many inherited genetic diseases in recent years, a subset of cases remains inconclusive. Here, we present the results of WES studies in which analyzing only rare variants in coding regions was not conclusive, and further investigation revealed the involvement of non-coding variants and copy number variations (CNVs) in the etiology of the diseases. Methods: Whole exome sequencing was performed using our standard protocols at the Genome Quebec Innovation Center, Montreal, Canada. All bioinformatics analyses were done using an in-house WES pipeline. Results: To date, we have successfully identified several disease-causing mutations within gene coding regions (e.g. SCARF2: Van den Ende-Gupta syndrome; SNAP29: 22q11.2 deletion syndrome) using WES. In addition, we showed that variants in non-coding regions and CNVs also have important value and should not be ignored and/or filtered out during bioinformatics analysis of WES data. For instance, in patients with osteogenesis imperfecta type V and in patients with glucocorticoid deficiency, we identified variants in the 5'UTR resulting in the production of longer or truncated, non-functional proteins. Furthermore, CNVs were identified as the main cause of disease in patients with metaphyseal dysplasia with maxillary hypoplasia and brachydactyly, and in patients with osteogenesis imperfecta type VII. Conclusions: Our study highlights the importance of considering non-coding variants and CNVs when interpreting WES data, as they can be the sole cause of the disease under investigation.
Keywords: whole exome sequencing data, non-coding variants, copy number variations, rare diseases
Procedia PDF Downloads 419