Search results for: Fuzzy Analytical Network Process (FANP)
12693 The Current Status of Middle Class Internet Use in China: An Analysis Based on the Chinese General Social Survey 2015 Data and Semi-Structured Investigation
Authors: Abigail Qian Zhou
Abstract:
In today's China, the well-educated middle class, with stable jobs and above-average income, is the driving force behind its Internet society. Through an analysis of data from the 2015 Chinese General Social Survey and interviews with 50 respondents, this study investigates this group's specific Internet usage. The findings demonstrate that daily life among the members of this socioeconomic group is closely tied to the Internet. For the Chinese middle class, the Internet is used to socialize and to entertain themselves and others. It is also used to search for and share information, as well as to build their identities. The empirical results of this study will provide a reference, supported by factual data, for enterprises seeking to target the Chinese middle class through online marketing efforts.
Keywords: middle class, Internet use, network behaviour, online marketing, China
Procedia PDF Downloads 122
12692 Catastrophic Health Expenditures: Evaluating the Effectiveness of Nepal's National Health Insurance Program Using Propensity Score Matching and Doubly Robust Methodology
Authors: Simrin Kafle, Ulrika Enemark
Abstract:
Catastrophic health expenditure (CHE) is a critical issue in low- and middle-income countries like Nepal, exacerbating financial hardship among vulnerable households. This study assesses the effectiveness of Nepal's National Health Insurance Program (NHIP), launched in 2015, in reducing out-of-pocket (OOP) healthcare costs and mitigating CHE. Conducted in Pokhara Metropolitan City, the study used an analytical cross-sectional design, sampling 1,276 households through a two-stage random sampling method. Data were collected via face-to-face interviews between May and October 2023. The analysis was conducted using SPSS version 29, incorporating propensity score matching (PSM) to minimize biases and create comparable groups of enrolled and non-enrolled households in the NHIP. PSM helped reduce confounding effects by matching households with similar baseline characteristics. Additionally, a doubly robust methodology was employed, combining propensity score adjustment with regression modeling to enhance the reliability of the results. This comprehensive approach ensured a more accurate estimation of the impact of NHIP enrollment on CHE. Of the 1,276 sampled households, 534 (41.8%) were enrolled in the NHIP. Of these, 84.3% had renewed their insurance card, though some cited long waiting times, lack of medications, and complex procedures as barriers to renewal. Approximately 57.3% of households reported known diseases before enrollment, with 49.8% attending routine health check-ups in the past year. The primary motivation for enrollment was encouragement from insurance employees (50.2%). The data indicate that 12.5% of enrolled households experienced CHE versus 7.5% among the non-enrolled; enrollment in the NHIP did not contribute to lower CHE (AOR: 1.98, 95% CI: 1.21-3.24). Key factors associated with increased CHE risk were the presence of non-communicable diseases (NCDs) (AOR: 3.94, 95% CI: 2.10-7.39), acute illnesses/injuries (AOR: 6.70, 95% CI: 3.97-11.30), larger household size (AOR: 3.09, 95% CI: 1.81-5.28), and households below the poverty line (AOR: 5.82, 95% CI: 3.05-11.09). Other factors, such as gender, education level, caste/ethnicity, presence of elderly members, and under-five children, also showed varying associations with CHE, though not all were statistically significant. The study concludes that enrollment in the NHIP does not significantly reduce the risk of CHE. One reason could be inadequate coverage: high-cost medicines, treatments, and transportation costs are not fully included in the insurance package, leading to significant out-of-pocket expenses. Long waiting times, lack of medicines, and complex procedures for using NHIP benefits might also result in the underuse of covered services. Finally, gaps in enrollment and retention might leave certain households vulnerable to CHE despite the existence of the NHIP. Key factors contributing to increased CHE include NCDs, acute illnesses, larger household sizes, and poverty. To improve the program's effectiveness, it is recommended that NHIP benefits and coverage be expanded to better protect against high healthcare costs. Additionally, simplifying the renewal process, addressing long waiting times, and enhancing the availability of services could improve member satisfaction and retention.
Targeted financial protection measures should be implemented for high-risk groups, and efforts should be made to increase awareness and encourage routine health check-ups to prevent severe health issues that contribute to CHE.
Keywords: catastrophic health expenditure, effectiveness, national health insurance program, Nepal
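To make the estimation strategy concrete, here is a minimal Python sketch of the doubly robust (AIPW) idea the abstract describes: a propensity model combined with group-specific outcome regressions. The covariates, column handling, and model choices are illustrative assumptions, not the study's SPSS implementation.

```python
# Hypothetical sketch of propensity-score-based doubly robust (AIPW) estimation;
# variable names and models are illustrative, not the study's actual setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_effect(X, treated, outcome):
    """Doubly robust estimate of the average effect of enrollment (`treated`)
    on a binary CHE indicator (`outcome`), given covariates X."""
    # 1. Propensity model: probability of enrollment given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    ps = np.clip(ps, 0.01, 0.99)  # avoid extreme inverse-probability weights
    # 2. Outcome models fitted separately in the enrolled / non-enrolled groups.
    m1 = LogisticRegression(max_iter=1000).fit(X[treated == 1], outcome[treated == 1])
    m0 = LogisticRegression(max_iter=1000).fit(X[treated == 0], outcome[treated == 0])
    mu1, mu0 = m1.predict_proba(X)[:, 1], m0.predict_proba(X)[:, 1]
    # 3. AIPW estimator: consistent if either the propensity model or the
    #    outcome model is correctly specified ("doubly robust").
    y1 = mu1 + treated * (outcome - mu1) / ps
    y0 = mu0 + (1 - treated) * (outcome - mu0) / (1 - ps)
    return np.mean(y1 - y0)
```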
Procedia PDF Downloads 25
12691 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery
Authors: Jay Ananth
Abstract:
The drug discovery process currently requires many years of clinical testing and a great deal of money for a single drug to earn FDA approval. Even for drugs that make it this far in the process, the chance of receiving FDA approval is very slim, creating detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug's lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results expressed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of the time and resources currently restricting drug accessibility.
Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development
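As a hedged illustration of the modeling-and-validation pattern reported above (R² on a held-out validation set), the following sketch trains a generic regressor on synthetic stand-in descriptors; the study's real model, features, and data are not described in enough detail to reproduce.

```python
# Illustrative-only sketch: predict a lipophilicity-like target and report R².
# The descriptors and target below are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))                           # stand-in molecular descriptors
logP = X[:, 0] * 1.5 + rng.normal(scale=0.3, size=500)   # stand-in lipophilicity target

X_tr, X_va, y_tr, y_va = train_test_split(X, logP, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("validation R^2:", r2_score(y_va, model.predict(X_va)))
```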
Procedia PDF Downloads 111
12690 A Deep Learning Based Integrated Model For Spatial Flood Prediction
Authors: Vinayaka Gude, Divya Sampath
Abstract:
The research introduces an integrated prediction model to assess the susceptibility of roads in a future flooding event. The model consists of a deep learning algorithm for forecasting gauge-height data and the Flood Inundation Mapper (FIM) for spatial flooding. An optimal architecture for a long short-term memory (LSTM) network was identified for the gauge located on the Tangipahoa River at Robert, LA. Dropout was applied to the model to evaluate the uncertainty associated with the predictions. The estimates are then used along with FIM to identify the spatial flooding. Further geoprocessing in ArcGIS provides the susceptibility values for different roads. The model was validated against the devastating flood of August 2016. The paper discusses the challenges of generalizing the methodology to other locations and to various types of flooding. The developed model can be used by transportation departments and other emergency response organizations for effective disaster management.
Keywords: deep learning, disaster management, flood prediction, urban flooding
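A minimal sketch of an LSTM gauge-height forecaster with dropout kept active at inference to express predictive uncertainty, in the spirit of the model described above; the window size, layer width, and dropout rate are assumptions, not the paper's values.

```python
# Hedged sketch: LSTM forecaster with Monte Carlo dropout for uncertainty.
# Architecture and WINDOW are assumed; the paper's exact design is not given.
import numpy as np
import tensorflow as tf

WINDOW = 24  # assumed: 24 past gauge readings per input sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2),
    tf.keras.layers.Dense(1),              # next gauge-height value
])
model.compile(optimizer="adam", loss="mse")

# Keeping dropout active at inference (training=True) yields a spread of
# predictions that can be read as a rough uncertainty estimate (MC dropout).
x = np.zeros((1, WINDOW, 1), dtype="float32")
samples = np.stack([model(x, training=True).numpy() for _ in range(50)])
print("mean:", samples.mean(), "std (uncertainty proxy):", samples.std())
```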
Procedia PDF Downloads 147
12689 An Analytical Study on the Effect of Chronic Liver Disease Severity and Etiology on Lipid Profiles
Authors: Thinakar Mani Balusamy, Venkateswaran A. R., Bharat Narasimhan, Ratnakar Kini S., Kani Sheikh M., Prem Kumar K., Pugazhendi Thangavelu, Arun Murugan, Sibi Thooran Karmegam, Radhakrishnan N., Mohammed Noufal, Amit Soni
Abstract:
Background and Aims: The liver is integral to lipid metabolism, and a compromise in its function leads to perturbations in these pathways. In this study, we aim to determine the correlation between CLD severity and lipid parameters. We also examine etiology-specific effects on lipid levels. Materials and Methods: This is a retrospective cross-sectional analysis of 250 patients with cirrhosis compared to 250 healthy age- and sex-matched controls. Severity assessment of CLD using MELD and Child-Pugh scores was performed, and etiological details were collected. A questionnaire was used to obtain patient demographic details and, lastly, a fasting lipid profile (total, LDL, and HDL cholesterol, triglycerides, and VLDL) was obtained. Results: All components of the lipid profile declined linearly with increasing severity of CLD as determined by MELD and Child-Pugh scores. Lipid levels were clearly lower in CLD patients than in healthy controls. Interestingly, preliminary analysis indicated that CLDs of different etiologies had differential effects on lipid profiles. This aspect is under further analysis. Conclusion: All components of the lipid profile were definitely lower in CLD patients than in controls and demonstrated an inverse correlation with increasing severity. The utilization of this parameter as a prognosticating aid requires further study. Additionally, preliminary analysis indicates that various CLD etiologies appear to have specific effects on the lipid profile – a finding under further analysis.
Keywords: CLD, cholesterol, HDL, LDL, lipid profile, triglycerides, VLDL
Procedia PDF Downloads 220
12688 Standard Languages for Creating a Database to Display Financial Statements on a Web Application
Authors: Vladimir Simovic, Matija Varga, Predrag Oreski
Abstract:
XHTML and XBRL are the standard languages for creating a database for the purpose of displaying financial statements on web applications. Today, XBRL is one of the most popular languages for business reporting. A large number of countries in the world recognize the role of the XBRL language for financial reporting and the benefits that the reporting format provides in the collection, analysis, preparation, publication, and exchange of data (information), which is the positive side of this language. Here we present the advantages and opportunities that a company may gain by using the XBRL format for business reporting. This paper also presents XBRL and other languages that are used for creating the database, such as XML, XHTML, etc. The role of AJAX technology in the exchange of financial data between the web client and the web server is explained in detail. The basic network layers involved in data exchange via the web are also described.
Keywords: XHTML, XBRL, XML, JavaScript, AJAX technology, data exchange
Procedia PDF Downloads 394
12687 Single-Camera Basketball Tracker through Pose and Semantic Feature Fusion
Authors: Adrià Arbués-Sangüesa, Coloma Ballester, Gloria Haro
Abstract:
Tracking sports players is a highly challenging task, especially in single-feed videos recorded in tight courts, where clutter and occlusions cannot be avoided. This paper presents an analysis of several geometric and semantic visual features to detect and track basketball players. An ablation study is carried out and then used to show that a robust tracker can be built with deep learning features alone, without the need to extract contextual ones, such as proximity or color similarity, or to apply camera stabilization techniques. The presented tracker consists of: (1) a detection step, which uses a pretrained deep learning model to estimate the players' poses, followed by (2) a tracking step, which leverages pose and semantic information from the output of a convolutional layer in a VGG network. Its performance is analyzed in terms of MOTA over a basketball dataset with more than 10k instances.
Keywords: basketball, deep learning, feature extraction, single-camera, tracking
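The tracking step can be illustrated with a hedged sketch: detections in consecutive frames are associated greedily by cosine similarity of their feature embeddings. The embeddings below are placeholders for the pose plus VGG conv-layer features the paper actually fuses.

```python
# Illustrative sketch of frame-to-frame association by embedding similarity;
# prev_feats / curr_feats stand in for fused pose + VGG features per player.
import numpy as np

def associate(prev_feats, curr_feats, threshold=0.5):
    """Greedy association of current detections to previous tracks.
    Rows of each array are one feature vector per detected player."""
    prev = prev_feats / np.linalg.norm(prev_feats, axis=1, keepdims=True)
    curr = curr_feats / np.linalg.norm(curr_feats, axis=1, keepdims=True)
    sim = prev @ curr.T                       # pairwise cosine similarities
    matches = []
    while sim.size and sim.max() > threshold:
        i, j = np.unravel_index(sim.argmax(), sim.shape)
        matches.append((i, j))                # track i continues as detection j
        sim[i, :] = -1.0                      # remove matched track
        sim[:, j] = -1.0                      # remove matched detection
    return matches
```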
Procedia PDF Downloads 138
12686 Using Pyrolytic Carbon Black Obtained from Scrap Tires as an Adsorbent for Chromium (III) Removal from Water
Authors: Mercedeh Malekzadeh
Abstract:
Scrap tires are a source of waste that causes environmental problems. The major components of these tires are rubber and carbon black, which can be reused for different applications through physical and chemical processes. Pyrolysis converts the rubber portion of scrap tires to oil and gas and recovers the carbon black as pyrolytic carbon black. This pyrolytic carbon black can be used to reinforce rubber and metal, in coating preparation, in electronic thermal management, and so on. Its porous structure also makes it a suitable choice for heavy metal removal from water. In this work, the application of base-treated pyrolytic carbon black was studied as an adsorbent for chromium (III) removal from water in a batch process. Pyrolytic carbon blacks in both natural and base-treated forms were characterized by scanning electron microscopy and energy-dispersive X-ray analysis. The effects of adsorbent dosage, contact time, initial chromium (III) concentration, and pH on the adsorption process were considered. The adsorption capacity was 19.76 mg/g. Maximum adsorption was seen after 120 min at pH = 3. The equilibrium data were evaluated and fitted better to the Langmuir model. The adsorption kinetics were evaluated and conformed to a pseudo-second-order model. The results show that base-treated pyrolytic carbon black obtained from scrap tires can be used as a cheap adsorbent for the removal of chromium (III) from water.
Keywords: chromium (III), pyrolytic carbon, scrap tire, water
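For readers who want the isotherm step concretely, a brief sketch of fitting the Langmuir model q = qmax·K·C/(1 + K·C) to equilibrium data follows; the data points are invented for illustration, and only the reported capacity of 19.76 mg/g comes from the abstract.

```python
# Sketch of a Langmuir isotherm fit; C_eq/q_eq are hypothetical data points.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    """Langmuir isotherm: adsorbed amount q as a function of concentration C."""
    return qmax * K * C / (1.0 + K * C)

C_eq = np.array([5.0, 10.0, 20.0, 40.0, 80.0])    # mg/L, hypothetical
q_eq = np.array([6.1, 9.8, 13.9, 17.0, 18.9])     # mg/g, hypothetical

(qmax, K), _ = curve_fit(langmuir, C_eq, q_eq, p0=(20.0, 0.05))
print(f"qmax = {qmax:.2f} mg/g, K = {K:.3f} L/mg")  # cf. 19.76 mg/g reported
```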
Procedia PDF Downloads 200
12685 Analyze and Visualize Eye-Tracking Data
Authors: Aymen Sekhri, Emmanuel Kwabena Frimpong, Bolaji Mubarak Ayeyemi, Aleksi Hirvonen, Matias Hirvonen, Tedros Tesfay Andemichael
Abstract:
Fixation identification, which involves isolating and identifying fixations and saccades in eye-tracking protocols, is an important aspect of eye-movement data processing that can have a big impact on higher-level analyses. However, fixation identification techniques are frequently discussed informally and rarely compared in any meaningful way. In this work, we implement fixation detection and analysis with two state-of-the-art algorithms. The first is the velocity-threshold fixation algorithm, which identifies fixations based on a threshold value. The second is U'n'Eye, a deep neural network algorithm for eye movement detection. The goal of this project is to analyze and visualize eye-tracking data from a provided eye-gaze dataset. The data were collected in a scenario in which individuals were shown photos and asked whether or not they recognized them. The results of the two fixation detection approaches are contrasted and visualized in this paper.
Keywords: human-computer interaction, eye-tracking, CNN, fixations, saccades
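A compact sketch of the velocity-threshold idea named above (often called I-VT): consecutive gaze samples whose point-to-point velocity stays below a threshold are labeled fixation samples, the rest saccades. The sampling rate and threshold below are assumptions, not values from the paper.

```python
# Minimal I-VT-style fixation/saccade labeling; hz and threshold are assumed.
import numpy as np

def ivt(x, y, hz=500.0, threshold=100.0):
    """Label each gaze sample as fixation (True) or saccade (False).
    `threshold` is in the same spatial unit per second as x/y."""
    vel = np.hypot(np.diff(x), np.diff(y)) * hz    # point-to-point velocity
    vel = np.append(vel, vel[-1])                  # pad back to input length
    return vel < threshold
```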
Procedia PDF Downloads 135
12684 Localization Mobile Beacon Using RSSI
Authors: Sallama Resen, Celal Öztürk
Abstract:
Distance estimation between two nodes has a wide scope of surveillance and tracking applications. This paper proposes Bluetooth Low Energy (BLE) technology as a medium for transmitting and receiving signals in small indoor areas; as an example, BLE communication technology is applied to the child-safety domain. A local network is designed to detect a child's position in an indoor school area, consisting of Mobile Beacons (MB), Access Points (AP), and Smart Phones (SP), where the MBs are fitted in children's shoes as wearable sensors. This paper presents a technique that can detect the mobile beacons' positions and help find children's locations within a dynamic environment. By means of Bluetooth beacons attached to a child's shoes, the distance between the MB and the teacher's SP is estimated with an accuracy of less than one meter. The simulation results show that high accuracy of position coordinates is achieved for multiple mobile beacons in different environments.
Keywords: bluetooth low energy, child safety, mobile beacons, received signal strength
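Distance from RSSI is commonly obtained with the log-distance path-loss model; a minimal sketch follows. The calibration constants (RSSI at 1 m and path-loss exponent) are assumed values, not those of the paper's setup.

```python
# Log-distance path-loss ranging; tx_power_dbm and n are assumed constants.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Estimate distance (m) from a BLE RSSI reading.
    tx_power_dbm: measured RSSI at 1 m (beacon calibration constant).
    n: path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

print(rssi_to_distance(-65.0))  # ≈ 2 m under these assumed constants
```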
Procedia PDF Downloads 346
12683 Modal Analysis for Optimal Location of Doubly Fed Induction-Generator-Based Wind Farms for Reduction of Small Signal Oscillation
Authors: Meet Patel, Darshan Patel, Nilay Shah
Abstract:
The rapid growth of wind-based renewable energy sources makes it necessary to identify the optimal location and damping capacity of doubly fed induction-generator-based (DFIG) wind farms as they penetrate the transmission network. In this analysis, DFIG wind farms of various ratings are connected to a Single Machine Infinite Bus (SMIB) system at different distances along the transmission line. On the basis of detailed examinations, a prime position is evaluated to maximize the stability of the overall system. A damping controller is designed at the optimal location to mitigate the small oscillations. The proposed model was validated using eigenvalue analysis, calculation of the participation factors, and time-domain simulation.
Keywords: DFIG, small signal stability, eigenvalues, time domain simulation
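A toy sketch of the validation tools named above, eigenvalues, damping ratios, and participation factors of a linearized state matrix, is given below; the 2×2 matrix is a placeholder, not the SMIB/DFIG model.

```python
# Toy modal analysis: eigenvalues, damping ratios, participation factors.
# A is a stand-in state matrix, not the linearized DFIG/SMIB system.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.2, -0.4]])                 # toy state matrix

lam, phi = np.linalg.eig(A)                  # eigenvalues, right eigenvectors
psi = np.linalg.inv(phi)                     # rows are the left eigenvectors
P = np.abs(phi * psi.T)                      # participation of state k in mode i
damping = -lam.real / np.abs(lam)            # damping ratio of each mode

print("modes:", lam)
print("damping ratios:", damping.round(3))
print("participation factors:\n", P.round(3))
```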
Procedia PDF Downloads 113
12682 Attack Redirection and Detection using Honeypots
Authors: Chowduru Ramachandra Sharma, Shatunjay Rawat
Abstract:
A false positive occurs when the IDS/IPS identifies an activity as an attack although the activity is acceptable behavior in the system. False positives in a Network Intrusion Detection System (NIDS) are an issue because they desensitize the administrator. They waste computational power and valuable resources when rules are not tuned properly, which is the main issue with anomaly-based NIDS. Furthermore, most false-positive reduction techniques are not performed in real time during attempted intrusions; instead, they are applied afterward to collected traffic data to generate alerts. Of course, false-positive detection in 'offline mode' is tremendously valuable. Nevertheless, there is room for improvement here; automated techniques still need to reduce false positives in real time. This paper uses the Snort signature detection model to redirect the alerted attacks to honeypots and verify the attacks.
Keywords: honeypot, TPOT, snort, NIDS, honeybird, iptables, netfilter, redirection, attack detection, docker, snare, tanner
Procedia PDF Downloads 156
12681 Urban Growth Prediction Using Artificial Neural Networks in Athens, Greece
Authors: Dimitrios Triantakonstantis, Demetris Stathakis
Abstract:
Urban areas have been expanding throughout the globe. Monitoring and modeling urban growth have become a necessity for sustainable urban planning and decision-making. Urban prediction models are important tools for analyzing the causes and consequences of urban land-use dynamics. The objective of this research paper is to analyze and model the urban change that occurred from 1990 to 2000, using CORINE land cover maps. The model was developed using drivers of urban change (such as road distance, slope, etc.) under an Artificial Neural Network modeling approach. Validation was achieved using a prediction map for 2006, which was compared with the real Urban Atlas map of 2006. The accuracy produced a Kappa index of agreement of 0.639 and a Cramer's V value of 0.648. These encouraging results indicate the importance of the developed urban growth prediction model, which, using a set of commonly available biophysical drivers, could serve as a management tool for the assessment of urban change.
Keywords: artificial neural networks, CORINE, urban atlas, urban growth prediction
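The validation step can be illustrated with a short sketch computing the Kappa index of agreement between a predicted and a reference urban map; the rasters below are random stand-ins, not the Athens data.

```python
# Hedged sketch of map validation with Cohen's kappa; the maps are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
reference = rng.integers(0, 2, size=10_000)          # 1 = urban, 0 = non-urban
predicted = np.where(rng.random(10_000) < 0.85,      # ~85% agreement, for show
                     reference, 1 - reference)

print("kappa:", cohen_kappa_score(reference, predicted))  # cf. 0.639 reported
```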
Procedia PDF Downloads 529
12680 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries
Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni
Abstract:
In a context of increasing stress placed on the electricity network by the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected closer to the loads. Yet, low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and the competitive advantage that regulations give fossil-fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study deliberately excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information – as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop peak-shaving (PS) control strategies, as this is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter, arbitraged profiles at higher voltage layers. Furthermore, voltage fluctuations can be expected to decrease if spikes of individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must, therefore, include a smart charge-recovery algorithm that can ensure enough energy is present in the battery in case it is needed, without generating new peaks when charging the unit. Three categories of PS algorithms are introduced in detail: first, algorithms using a constant threshold or power rate for charge recovery; then algorithms using the State of Charge (SOC) as a decision variable; and finally, algorithms using a load forecast – the impact of whose accuracy is discussed – to generate PS. A performance metric was defined in order to quantitatively evaluate their operation regarding peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that constant charging thresholds or power rates are far from optimal: a single value is not likely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm
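One SOC-based rule of the kind discussed above can be sketched as follows; all thresholds, ratings, and constants are illustrative assumptions, not the study's tuned values. Only the one-minute time step mirrors the granularity mentioned in the abstract.

```python
# Simplified SOC-based peak-shaving step: discharge above a demand threshold,
# recover charge only when the battery is low and demand is low.
# All numeric defaults are assumptions for illustration.
def peak_shave_step(load_kw, soc, capacity_kwh=5.0, p_max_kw=2.5,
                    shave_kw=3.0, recover_below_kw=1.0, soc_target=0.8,
                    dt_h=1.0 / 60.0):
    """Return (grid_kw, new_soc) for one 1-minute time step."""
    if load_kw > shave_kw and soc > 0.0:
        # Discharge to clip the peak, limited by power rating and energy left.
        p = min(load_kw - shave_kw, p_max_kw, soc * capacity_kwh / dt_h)
        return load_kw - p, soc - p * dt_h / capacity_kwh
    if load_kw < recover_below_kw and soc < soc_target:
        # Quiet period: recharge gently without creating a new grid peak.
        p = min(p_max_kw, shave_kw - load_kw)
        return load_kw + p, soc + p * dt_h / capacity_kwh
    return load_kw, soc
```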
Procedia PDF Downloads 117
12679 Means of Securing Graves in the Egyptian Kingdom Era
Authors: Mohamed Ahmed Madkour, Haitham Magdy Hamad
Abstract:
This research aims to study the means of securing graves in the Egyptian kingdom era. It revolves around the basic ideas used by the ancient Egyptians to protect their graves from thieves, which included architectural characteristics that gave the graves their particular importance. The most important of these was the choice of the grave's location: a secluded place in the desert – the Valley of the Kings – was chosen to protect the graves, and the study asks whether the choice of that place had an impact on protecting the grave or not. Other elements were followed in the architectural planning of the Valley of the Kings: the multiplication of tomb chambers, the construction of the well chamber to deceive thieves by making the grave appear to end suddenly, and the construction of tomb shafts containing the burial chamber at the bottom of the main shaft. The effect of all these factors on the graves is examined, which shows the importance of the graves to the ancient Egyptians and their belief in resurrection and immortality. The Egyptians also resorted to religious elements of protection: protector gods and special texts to shield the deceased from any danger and to protect the tomb. As for the human factor, the tomb was secured through human guards (police) and security teams, including guard teams and the teams of the Medjay. The study also traces the developments of the grave – entrance, corridors, chambers, burial chamber, and coffin – and the use of sand to close the shaft, which differed from one cemetery to another and from time to time; in the Late Period, tombs were built inside the temple precinct so as to be under the attention and protection of the priests. The study thus provides an analytical treatment of the means of securing graves in the Egyptian kingdom period.
Keywords: archaeology, Egyptian kingdom era, graves, tombs, ancient Egyptian
Procedia PDF Downloads 71
12678 Study of Natural Patterns on Digital Image Correlation Using Simulation Method
Authors: Gang Li, Ghulam Mubashar Hassan, Arcady Dyskin, Cara MacNish
Abstract:
Digital image correlation (DIC) is a contactless full-field displacement and strain reconstruction technique commonly used in the field of experimental mechanics. Compared with physical measuring devices, such as strain gauges, which provide only very restricted coverage and are expensive to deploy widely, the DIC technique provides results with full-field coverage and relatively high accuracy using an inexpensive and simple experimental setup. It is very important to study the effect of natural patterns on the DIC technique, because the preparation of artificial patterns is a time-consuming and laborious process. The objective of this research is to study the effect of using images with natural patterns on the performance of DIC. A systematic simulation method is used to build the simulated deformed images used in DIC. A parameter used in DIC (subset size) can affect the processing and accuracy of DIC and can even cause DIC to fail. Regarding the image parameters (correlation coefficient), higher similarity between two subsets can lead the DIC process to fail and make the result less accurate. Images of good and bad quality for DIC methods are presented; more importantly, the approach offers a systematic way to evaluate the quality of images with natural patterns before the measurement devices are installed.
Keywords: Digital Image Correlation (DIC), deformation simulation, natural pattern, subset size
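The matching criterion behind DIC can be made concrete with a short sketch of the zero-normalized cross-correlation (ZNCC) coefficient between a reference and a deformed subset. The paper does not specify which correlation criterion it uses, so ZNCC here is a common choice, not necessarily theirs.

```python
# ZNCC between two grey-level subsets; the arrays stand in for subsets cut
# from the reference and deformed images.
import numpy as np

def zncc(ref_subset, def_subset):
    """ZNCC in [-1, 1]; 1 means a perfect match of the two subsets."""
    f = ref_subset - ref_subset.mean()
    g = def_subset - def_subset.mean()
    return float((f * g).sum() / np.sqrt((f ** 2).sum() * (g ** 2).sum()))
```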
Procedia PDF Downloads 420
12677 Bio-Oil Compounds Sorption Enhanced Steam Reforming
Authors: Esther Acha, Jose Cambra, De Chen
Abstract:
Hydrogen is considered an important energy vector for the 21st century. Nowadays, there are some difficulties in implementing the hydrogen economy, and one of them is the high purity required for hydrogen. This energy vector is still mainly produced from fuels, from which hydrogen is obtained as a component of a mixture containing other gases, such as CO, CO2, and H2O. A forthcoming sustainable pathway for hydrogen is steam reforming of bio-oils derived from biomass, e.g., via fast pyrolysis. Bio-oils are a mixture of acids, alcohols, aldehydes, esters, ketones, sugars, phenols, guaiacols, syringols, furans, and multi-functional compounds, with up to 30 wt% water. The sorption enhanced steam reforming (SESR) process is attracting a great deal of attention due to the fact that it combines both hydrogen production and CO2 separation. In the SESR process, carbon dioxide is captured by an in situ sorbent, which shifts the reversible reforming and water-gas shift reactions to the product side, beyond their conventional thermodynamic limits, giving rise to higher hydrogen production at lower cost. The hydrogen-containing mixture has been obtained from the SESR of bio-oil-type compounds. Different types of catalysts have been tested, all containing Ni at around 30 wt%. Two samples were prepared with the wet impregnation technique over conventional (gamma alumina) and non-conventional (olivine) supports, and a third catalyst was prepared over a hydrotalcite-like material (HT). The employed sorbent is a commercial dolomite. The activity tests were performed in a bench-scale plant (PID Eng&Tech), using a stainless steel fixed-bed reactor. The catalysts were reduced in situ in the reactor before the activity tests. The effluent stream was cooled down, so that the condensed liquid was collected and weighed, and the gas phase was analysed online by a microGC. The hydrogen yield and process behavior were analysed without the sorbent (the traditional SR, where a second purification step is needed but which operates in steady state) and with the SESR (where the purification step could be avoided but which operates in batch mode). The influence of the support type and preparation method is observed in the produced hydrogen yield. Additionally, the stability of the catalysts is critical, due to the fact that the SESR process requires sorption-desorption steps. The produced hydrogen yield and hydrogen purity have to be high and also stable, even after several sorption-desorption cycles. The prepared catalysts were characterized employing different techniques to determine the physicochemical properties of the fresh-reduced and used (after the activity tests) materials. The characterization results, together with the activity results, show the influence of the catalyst preparation method and calcination temperature, and can even explain the observed yield and conversion.
Keywords: CO2 sorbent, enhanced steam reforming, hydrogen
Procedia PDF Downloads 579
12676 Experimental and Numerical Evaluation of a Shaft Failure Behaviour Using Three-Point Bending Test
Authors: Bernd Engel, Sara Salman Hassan Al-Maeeni
Abstract:
A substantial amount of natural resources is nowadays consumed at a growing rate, as humans all over the world use materials obtained from the Earth. The machinery manufacturing industry is one of the major resource consumers on a global scale. Despite the incessant discovery of new materials, metals, and resources, it is urgent for industry to develop methods to use the Earth's resources intelligently and more sustainably than before. Re-engineering of machine tools regarding design and failure analysis is an approach whereby out-of-date machines are upgraded and returned to useful life. To ensure the reliable future performance of used machine components, it is essential to investigate machine component failure through material, design, and surface examinations. This paper presents an experimental approach aimed at inspecting the shaft of a rotary draw bending machine as a case study. The testing methodology, which is based on the principle of the three-point bending test, allows the shaft's elastic behavior under loading to be assessed. Furthermore, the shaft's elastic characteristics, including the maximum linear deflection and the maximum bending stress, were determined using an analytical approach and a finite element (FE) analysis approach. In the end, the results were compared with those obtained by the experimental approach. In conclusion, the measured bending deflection and bending stress were close to the permissible design values. Therefore, the shaft can work in a second life cycle. However, based on the surface tests previously conducted, the shaft needs surface treatments, including re-carburizing and refining processes, to ensure reliable surface performance.
Keywords: deflection, FE analysis, shaft, stress, three-point bending
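The analytical step mentioned above reduces to the classical three-point bending formulas for a simply supported shaft under a central load: deflection δ = FL³/(48EI) and stress σ = Mc/I with M = FL/4. A worked sketch with placeholder numbers follows; the actual shaft dimensions and load are not given in the abstract.

```python
# Worked three-point bending formulas; F, L, d are assumed placeholder values.
import math

F = 5_000.0          # central load, N (assumed)
L = 0.500            # support span, m (assumed)
d = 0.040            # shaft diameter, m (assumed)
E = 210e9            # Young's modulus of steel, Pa

I = math.pi * d**4 / 64             # second moment of area, solid circle
delta = F * L**3 / (48 * E * I)     # maximum linear deflection at mid-span
sigma = F * L * (d / 2) / (4 * I)   # maximum bending stress: M*c/I, M = FL/4

print(f"deflection = {delta * 1e3:.3f} mm, stress = {sigma / 1e6:.1f} MPa")
```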
Procedia PDF Downloads 158
12675 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data
Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin
Abstract:
Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, and such urban phenomena have considerable influence on the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements in order to enable a better understanding of urban activities and, from that understanding, to make better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among random urban data. Then, based on the results of that correlation analysis, the weighted data network of each urban dataset was provided to users. It is expected that the weights of urban data thereby obtained will provide insights into cities and show how diverse urban activities influence each other and induce feedback.
Keywords: big data, machine learning, ontology model, urban data model
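The core step, a correlation matrix over urban datasets turned into weighted network edges, can be sketched briefly; the column names and values below are invented stand-ins for the Korean public datasets.

```python
# Hedged sketch: correlations between urban datasets as weighted edges.
import pandas as pd

df = pd.DataFrame({
    "bus_ridership": [120, 135, 150, 160, 170],   # hypothetical series
    "air_quality":   [40, 38, 35, 33, 30],
    "retail_sales":  [200, 215, 225, 240, 255],
})

corr = df.corr()                 # Pearson correlation between dataset pairs
edges = corr.stack()             # long format: (dataset_i, dataset_j) -> r
# Keep each strongly correlated pair once, as a weighted network edge.
edges = edges[(edges.abs() > 0.5) &
              (edges.index.get_level_values(0) < edges.index.get_level_values(1))]
print(edges)
```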
Procedia PDF Downloads 418
12674 Social Technology and Youth Justice: An Exploration of Ethical and Practical Challenges
Authors: Ravinder Barn, Balbir Barn
Abstract:
This paper outlines ethical and practical challenges in the building of social technology for use with socially excluded and marginalised groups. The primary aim of this study was to design, deploy, and evaluate social technology that may help to promote better engagement between case workers and young people, to help prevent recidivism, and to support young people's transition towards social inclusion in society. A total of 107 participants – practitioners/managers (n=64) and young people (n=43) – contributed to the data collection via surveys, focus groups, and one-to-one interviews. Through a process of co-design, in which end-users are involved as key contributors to social technological design, this paper seeks to make an important contribution to the area of participatory methodologies by arguing that, whilst giving 'voice' to key stakeholders in the research process is crucial, there is a risk that competing voices may lead to tensions and unintended outcomes. The paper is contextualized within a Foucauldian perspective to examine significant concepts, including power, authority, and surveillance. Implications for youth justice policy and practice are considered. The authors conclude that marginalized youth and over-stretched practitioners are better served when such social technology is perceived and adopted as a tool of empowerment within a framework of child welfare and child rights.
Keywords: youth justice, social technology, marginalization, participatory research, power
Procedia PDF Downloads 448
12673 Ethical Decision-Making in AI and Robotics Research: A Proposed Model
Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet
Abstract:
Researchers in the fields of AI and robotics frequently encounter ethical dilemmas throughout their research endeavors. Various ethical challenges have been pinpointed in the existing literature, including biases and discriminatory outcomes, diffusion of responsibility, and a deficit in transparency within AI operations. This research aims to pinpoint the ethical quandaries faced by researchers and shed light on the mechanisms behind ethical decision-making in the research process. By synthesizing insights from existing literature and acknowledging prevalent shortcomings, such as overlooking the heterogeneous nature of decision-making, non-accumulative results, and a lack of consensus on numerous factors due to limited empirical research, the objective is to conceptualize and validate a model. This model will incorporate influences from individual perspectives and situational contexts, considering potential moderating factors in the ethical decision-making process. Qualitative analyses were conducted based on direct observation, over several months, of an AI/robotics research team focusing on collaborative robotics. Subsequently, semi-structured interviews were conducted with 16 team members. The entire process took place during the first semester of 2023. Observations were analyzed using an analysis grid, and the interviews underwent thematic analysis using Nvivo software. A first finding involves identifying the ethical challenges that AI/robotics researchers confront, underlining a disparity between practical applications and theoretical considerations regarding ethical dilemmas in the realm of AI. Notably, researchers in AI prioritize the publication and recognition of their work, sparking the genesis of these ethical inquiries. Furthermore, this article shows that researchers tend to embrace a consequentialist ethical framework concerning safety (for humans engaging with robots/AI), worker autonomy in relation to robots, and the societal implications of labor (can robots displace jobs?). A second significant contribution entails proposing a model for ethical decision-making within the AI/robotics research sphere. The proposed model adopts a process-oriented approach, delineating the various research stages (topic proposal, hypothesis formulation, experimentation, conclusion, and valorization). Across these stages and the ethical queries they entail, a comprehensive four-point understanding of ethical decision-making is presented: recognition of the moral quandary; moral judgment, signifying the decision-maker's aptitude to discern the morally righteous course of action; moral intention, reflecting the ability to prioritize moral values above others; and moral behavior, denoting the application of moral intention to the situation. Variables such as political inclinations ((anti-)capitalism, environmentalism, veganism) seem to wield significant influence. Moreover, age emerges as a noteworthy moderating factor. AI and robotics researchers are continually confronted with ethical dilemmas during their research endeavors, necessitating thoughtful decision-making. The contribution involves introducing a contextually tailored model, derived from meticulous observations and insightful interviews, enabling the identification of factors that shape ethical decision-making at different stages of the research process.
Keywords: ethical decision making, artificial intelligence, robotics, research
Procedia PDF Downloads 79
12672 Prioritizing the TQM Enablers and IT Resources in the ICT Industry: An AHP Approach
Authors: Suby Khanam, Faisal Talib, Jamshed Siddiqui
Abstract:
Total Quality Management (TQM) is a managerial approach that improves the competitiveness of an industry, while Information Technology (IT) was introduced alongside TQM to handle technical issues, with the support of quality experts, in fulfilling customers' requirements. The present paper aims to utilise the Analytic Hierarchy Process (AHP) methodology to prioritise and rank the hierarchy levels of TQM enablers and IT resources together for their successful implementation in the Information and Communication Technology (ICT) industry. A total of 17 TQM enablers (nine) and IT resources (eight) were identified, partitioned into three categories, and prioritised by the AHP approach. The findings indicate that the 17 sub-criteria can be grouped into three main categories, namely organizing, tools and techniques, and culture and people. Further, of the 17 sub-criteria, three – top management commitment and support, total employee involvement, and continuous improvement – received the highest priority, whereas three others – structural equation modelling, culture change, and customer satisfaction – received the lowest priority. The result suggests a hierarchy model for the ICT industry to prioritise the enablers and resources as well as to improve TQM and IT performance. This paper has managerial implications, suggesting that managers of the ICT industry implement TQM and IT together in their organizations to obtain maximum benefits, and showing how to utilize available resources. At the end, conclusions, limitations, and the future scope of the study are presented.
Keywords: analytic hierarchy process, information technology, information and communication technology, prioritization, total quality management
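The AHP computation behind such a ranking can be sketched as follows: priorities come from the principal eigenvector of a pairwise-comparison matrix, checked with Saaty's consistency ratio. The 3×3 matrix below is a made-up example, not the study's actual judgments.

```python
# AHP priority vector and consistency ratio for a pairwise-comparison matrix;
# the matrix entries are illustrative, not the paper's expert judgments.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])          # Saaty 1-9 scale comparisons

lam, vecs = np.linalg.eig(A)
k = np.argmax(lam.real)                  # index of the principal eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                             # normalized priority vector

n = A.shape[0]
CI = (lam.real[k] - n) / (n - 1)         # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
print("priorities:", w.round(3), "CR:", round(CI / RI, 3))  # CR < 0.1 is acceptable
```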
Procedia PDF Downloads 349
12671 Treatment of Grey Water from Different Restaurants in FUTA Using Fungi
Authors: F. A. Ogundolie, F. Okogue, D. V. Adegunloye
Abstract:
Greywater samples were obtained from three restaurants in the Federal University of Technology, Akure, coded SSR, MGR, and GGR. The fungal isolates obtained include Rhizopus stolonifer, Aspergillus niger, Mucor mucedo, Aspergillus flavus, and Saccharomyces cerevisiae. Of these isolates, R. stolonifer, A. niger, and A. flavus showed significant degradation ability on greywater and were used for this research. A simple bioreactor was constructed to purify the wastewater samples through biodegradation. The wastewater underwent primary treatment; the secondary treatment involved the introduction of the isolated organisms into the wastewater sample; and the tertiary treatment involved the use of a filter candle and a sand-bed filtration process to achieve the end product without the use of chemicals. A. niger brought about a significant reduction in both the bacterial and fungal loads of the greywater samples of the three respective restaurants, with reductions in bacterial load of 1.29 × 10⁸ to 1.57 × 10² cfu/ml, 1.04 × 10⁸ to 1.12 × 10² cfu/ml, and 1.72 × 10⁸ to 1.60 × 10² cfu/ml for SSR, MGR, and GGR, respectively, and reductions in fungal load of 2.01 × 10⁴ to 1.2 × 10¹, 1.72 × 10⁴ to 1.1 × 10¹, and 2.50 × 10⁴ to 1.5 × 10¹ for SSR, MGR, and GGR, respectively. The results of the degradation of these selected wastewaters by the fungi showed that A. niger was probably more potent in the degradation of organic matter; hence, A. niger could be used in the treatment of wastewater.
Keywords: Aspergillus niger, greywater, bacterial, fungi, microbial load, bioreactor, biodegradation, purification, organic matter and filtration
Procedia PDF Downloads 313
12670 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Moreover, because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 for use by researchers to develop the best method for distinguishing normal signals from abnormal ones. The data cover both genders, and the recording time varies from several seconds to several minutes. All data are also labeled normal or abnormal. Due to the limited temporal precision and duration of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancelation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage of this paper, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, given the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features were used to classify normal signals versus abnormal ones. To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUCs of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal and patient signals yields better performance. Today, research aims at quantitatively analyzing the linear and nonlinear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the extent of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has led to the development of research in this field. The ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease; however, due to its limited temporal precision and the fact that some of its information is hidden from the physician's viewpoint, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy, and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
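One family of the nonlinear features mentioned above can be illustrated with the return (Poincaré) map of successive R-R intervals and its SD1/SD2 descriptors; the R-R series below is synthetic, and the paper's exact feature set is not specified in the abstract.

```python
# Poincaré (return map) descriptors of an R-R interval series; the series
# here is synthetic, whereas real ones come from R-peak detection.
import numpy as np

def poincare_sd(rr_ms):
    """SD1/SD2 of the Poincaré plot of an R-R interval series (ms)."""
    x, y = rr_ms[:-1], rr_ms[1:]                 # RR(n) vs RR(n+1)
    diff = (y - x) / np.sqrt(2)                  # across the identity line
    summ = (y + x) / np.sqrt(2)                  # along the identity line
    return diff.std(ddof=1), summ.std(ddof=1)    # SD1 (short-term), SD2

rr = 800 + 40 * np.random.default_rng(0).standard_normal(300)
sd1, sd2 = poincare_sd(rr)
print(f"SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms, SD1/SD2 = {sd1 / sd2:.2f}")
```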
Procedia PDF Downloads 262
12669 Smoker Recognition from Lung X-Ray Images Using Convolutional Neural Network
Authors: Moumita Chanda, Md. Fazlul Karim Patwary
Abstract:
Smoking is one of the most popular recreational drug-use behaviors, and it contributes to birth defects, COPD, heart attacks, and erectile dysfunction. To completely eradicate this problem, it is imperative that it be identified and treated. Numerous smoking cessation programs have been created, and they demonstrate how beneficial it may be to help someone stop smoking at the ideal time. A tomography meter is an effective smoking detector. Other wearables, such as RF-based proximity sensors worn on the collar and wrist to detect when the hand is close to the mouth, have been proposed in the past, but they are not impervious to deceptive variables. In this study, we create a machine that can discriminate between smokers and non-smokers in real time with high sensitivity and specificity by collecting human lung X-ray images and analyzing the data using machine learning. If it achieves sufficiently high accuracy, this machine could be utilized in hospitals, in the selection of candidates for the army or police, or in university entrance screening.
Keywords: CNN, smoker detection, non-smoker detection, OpenCV, artificial intelligence, X-ray image detection
Procedia PDF Downloads 84
12668 Action Research of Local Resident Empowerment in Prambanan Cultural Heritage Area in Yogyakarta
Authors: Destha Titi Raharjana
Abstract:
The findings of this research result from three action research studies conducted in three villages, namely Bokoharjo, Sambirejo, and Tirtomartani. These villages are close to Prambanan, a well-known cultural heritage site located in Sleman Regency, Indonesia. The action research was conducted using participative methods through observation, interviews, and focus group discussions with local residents as the subjects. This research aims to (a) identify the potentials, obstacles, and opportunities in the development process, so as to give local residents more encouragement, involvement, and empowerment in maintaining the cultural heritage area; (b) present participatory empowerment programs adjusted to the needs of the local residents and human resources; and (c) identify potential stakeholders that can support the empowerment programs. Through the action research method, this research is able to present (a) a mapping of potentials, difficulties, and opportunities in the development process in each village, and (b) the empowerment program planning needed by local residents as a follow-up to this action research. Moreover, this research also identifies potential stakeholders who are able to follow up the empowerment programs. It is expected that, at the end of the programs, the local residents will be able to maintain Prambanan, as one of the cultural heritage sites that need to be protected, in a more sustainable way.
Keywords: action research, local resident, empowerment, cultural heritage area, Prambanan, Sleman, Indonesia
Procedia PDF Downloads 251
12667 Separation of Copper(II) and Iron(III) by Solvent Extraction and Membrane Processes with Ionic Liquids as Carriers
Authors: Beata Pospiech
Abstract:
Separation of metal ions from aqueous solutions is an important as well as difficult process in hydrometallurgical technology, and it is necessary for obtaining pure metals. Solvent extraction and membrane processes are well known as separation methods. Recently, ionic liquids (ILs) have very often been applied and studied as extractants and carriers of metal ions from aqueous solutions due to their good extraction properties for various metals. This work discusses a method to separate copper(II) and iron(III) from hydrochloric acid solutions by solvent extraction and transport across polymer inclusion membranes (PIM) with the selected ionic liquids as extractants/ion carriers. Cyphos IL 101 (trihexyl(tetradecyl)phosphonium chloride), Cyphos IL 104 (trihexyl(tetradecyl)phosphonium bis(2,4,4-trimethylpentyl)phosphinate), trioctylmethylammonium thiosalicylate [A336][TS], and trihexyl(tetradecyl)phosphonium thiosalicylate [PR4][TS] were used for the investigations. The effect of different parameters, such as the hydrochloric acid concentration in the aqueous phase, on iron(III) and copper(II) extraction has been investigated. Cellulose triacetate membranes with the selected ionic liquids as carriers have been prepared and applied for the transport of iron(III) and copper(II) from hydrochloric acid solutions.
Keywords: copper, iron, ionic liquids, solvent extraction
Procedia PDF Downloads 279
12666 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites
Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal
Abstract:
The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the rule of mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber-reinforced composites used in dental applications. The composites were fabricated from a light-curing resin (with and without silica nanoparticles) and modified and non-modified fibers. Composite samples were divided into eight groups with ten specimens each. The bending modulus (flexural modulus) of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%), and nanoparticle incorporation in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single-edge-notch beam (SENB) method and scanning electron microscopy (SEM) were used. SEM was also used to show the nanoparticle dispersion in the resin. Experimental bending moduli for composites made of both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in the accuracy of ROM predictions.
Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures
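The longitudinal rule-of-mixtures estimate discussed above is E_c = V_f·E_f + (1 − V_f)·E_m under the iso-strain assumption. A worked sketch with assumed moduli follows; the study's actual fiber and resin moduli are not given in the abstract.

```python
# Longitudinal (iso-strain) rule of mixtures; the moduli are assumed values.
def rule_of_mixtures(E_fiber, E_matrix, V_fiber):
    """Longitudinal modulus of a unidirectional composite."""
    return V_fiber * E_fiber + (1.0 - V_fiber) * E_matrix

E_glass, E_resin = 72.0, 3.5          # GPa, typical assumed values
for vf in (0.25, 0.33, 0.41):         # the fiber volume fractions studied above
    print(f"Vf = {vf:.2f}: E_c ≈ {rule_of_mixtures(E_glass, E_resin, vf):.1f} GPa")
```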
Procedia PDF Downloads 274
12665 Characteristics and Quality of Chilean Abalone Undergoing Different Emerging Drying Technologies
Authors: Mario Pérez-Won, Anais Palma-Acevedo, Luis González-Cavieres, Roberto Lemus-Mondaca, Gipsy Tabilo-Munizaga
Abstract:
The Chilean abalone (Concholepas concholepas) is a gastropod mollusk with high commercial value due to the qualities of its meat, especially hardness, a critical acceptance parameter. However, its main problem is its short shelf-life, which is usually extended using traditional technologies with high energy consumption. Therefore, applying different technologies for the pre-treatment and drying processes is necessary. In this research, a pulsed electric field (PEF) was used as a pre-treatment for vacuum microwave drying (VMD), freeze-drying (FD), and hot-air drying (HAD). Drying conditions and characteristics were set according to previous experiments. The dried samples were analyzed in terms of physical quality (color, texture, microstructure, and rehydration capacity), protein quality (degree of hydrolysis and computed protein efficiency ratio), and energy parameters. Regarding quality, the treatment that produced the lowest hardness was PEF+FD (195 ± 10 N), the smallest color change was obtained with PEF+VMD (ΔE: 17 ± 1.5), and the best rehydration capacity was obtained with PEF+VMD (1.2 h to equilibrium). For protein quality, the highest computed protein efficiency ratio was obtained for the sample treated with PEF at 2.0 kV/cm (index of 4.18 ± 0.26 at the end of digestion). Moreover, regarding energy consumption, the results show that VMD shortens the drying process by 97%, whether or not PEF was used. Consequently, it is possible to conclude that using PEF as a pre-treatment for VMD and FD has advantages that can be exploited according to the consumer's needs or preferences.
Keywords: chilean abalone, freeze-drying, proteins, pulsed electric fields
Procedia PDF Downloads 109
12664 Formal Innovations vs. Informal Innovations: The Case of the Mining Sector in Nigeria
Authors: Jegede Oluseye Oladayo
Abstract:
The study mapped innovation activities in the formal and informal mining sectors in Nigeria. Data were collected through primary and secondary sources. Primary data were collected through guided questionnaire administration, guided interviews, and personal observation. A purposive sampling method was adopted to select firms that are micro, small, and medium enterprises. The study covered 100 purposively selected companies (50 in the formal sector and 50 in the informal sector) in south-western Nigeria. Secondary data were collected from different published sources. Data were analysed using descriptive and inferential statistics. Of the four types of technological innovation sampled, organisational innovation was found to be the most prevalent in both the formal (100%) and informal (100%) sectors, followed by process innovation: 60% in the formal sector and 28% in the informal sector; marketing innovation and diffusion-based innovation were implemented by 64% and 4%, respectively, in the formal sector. There were no R&D activities (intramural or extramural) in either sector; however, innovation activities occur at moderate levels in the formal sector. These are characterised by acquisition of machinery, equipment, and hardware (100%), software (56%), training (82%), and acquisition of external knowledge (60%). In the informal sector, innovation activities were characterised by acquisition of external knowledge (100%), training/learning by experience (100%), and acquisition of tools (68%). The impact of innovation on firms' performance in the formal sector was expressed mainly as increased production capacity (100%), reduced production cost per unit of labour (88%), compliance with governmental regulatory requirements (72%), and entry into new markets (60%). In the informal sector, the impact of innovation was expressed mainly in improved flexibility of production (70%) and machinery/energy efficiency (70%). The important technological driver of process innovation in the mining sector was acquisition of machinery, which accounts for a prevalence of 100% in both the formal and informal sectors. Next to this is training and re-training of technical staff, at 74% in both the formal and the informal sectors. Another factor influencing organisational innovation is the skill of the workforce, with a prevalence of 80% in both sectors. The important technological drivers also include the educational background of the manager/head of the technical department: 54% for organisational innovation and 50% for process innovation in the formal sector. The study concluded that the innovation competence of the firms lay mostly in organisational changes.
Keywords: innovation prevalence, innovation activities, innovation performance, innovation drivers
Procedia PDF Downloads 379