Search results for: role model
1225 An Object-Oriented Modelica Model of the Water Level Swell during Depressurization of the Reactor Pressure Vessel of the Boiling Water Reactor
Authors: Rafal Bryk, Holger Schmidt, Thomas Mull, Ingo Ganzmann, Oliver Herbst
Abstract:
Prediction of the two-phase water mixture level during fast depressurization of the Reactor Pressure Vessel (RPV) resulting from an accident scenario is an important issue from the viewpoint of reactor safety. Since the level swell may influence the behavior of some passive safety systems, it has been recognized that an assumption which may initially be considered conservative does not necessarily lead to a conservative result. This paper discusses outcomes obtained during simulations of the water dynamics and heat transfer during sudden depressurization of a vessel filled up to a certain level with liquid water under saturation conditions, with the rest of the vessel occupied by saturated steam. When the pressure decreases, e.g. due to a main steam line break, the liquid water evaporates abruptly, thereby causing strong transients in the vessel. These transients and the sudden emergence of void in the region initially occupied by liquid cause the two-phase mixture level to rise. In this work, several models calculating the water collapse and swell levels are presented and validated against experimental data. Each of the models uses a different approach to calculate the void fraction. The object-oriented models were developed with the Modelica modelling language and the OpenModelica environment. The models represent the RPV of the Integral Test Facility Karlstein (INKA) – a dedicated test rig for simulation of KERENA, a new Boiling Water Reactor design of Framatome. The models are based on dynamic mass and energy equations. They are divided into several dynamic volumes, in each of which the fluid may be single-phase liquid, steam or a two-phase mixture. The heat transfer between the wall of the vessel and the fluid is taken into account. An additional heat flow rate may be applied to the first volume of the vessel in order to simulate the decay heat of the reactor core in a similar manner as at INKA. The comparison of the simulation results against the reference data shows good agreement.
Keywords: boiling water reactor, level swell, Modelica, RPV depressurization, thermal-hydraulics
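As an illustration of the relation between the collapse and swell levels discussed in this abstract, a minimal Python sketch follows. It is not the authors' Modelica code; it only shows the simplified lumped relation L_swell = L_collapsed / (1 - alpha) for an assumed average void fraction, and all names and values are hypothetical.

```python
# Illustrative sketch (not the authors' Modelica model): relation between the
# collapsed liquid level and the two-phase swell level for an assumed average
# void fraction in the mixture region.

def swell_level(collapsed_level_m: float, avg_void_fraction: float) -> float:
    """Two-phase mixture level assuming a uniform average void fraction.

    The liquid inventory below the collapsed level expands when vapour is
    generated during depressurization: L_swell = L_collapsed / (1 - alpha).
    """
    if not 0.0 <= avg_void_fraction < 1.0:
        raise ValueError("void fraction must be in [0, 1)")
    return collapsed_level_m / (1.0 - avg_void_fraction)


if __name__ == "__main__":
    # Example: 4 m of collapsed liquid with a 30% average void fraction
    print(f"swell level = {swell_level(4.0, 0.30):.2f} m")  # -> 5.71 m
```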
Procedia PDF Downloads 209
1224 A Descriptive Study on Water Scarcity as a One Health Challenge among the Osiram Community, Kajiado County, Kenya
Authors: Damiano Omari, Topirian Kerempe, Dibo Sama, Walter Wafula, Sharon Chepkoech, Chrispine Juma, Gilbert Kirui, Simon Mburu, Susan Keino
Abstract:
The One Health concept was officially adopted by international organizations and scholarly bodies in 1984. It aims at combining human, animal and environmental components to address global health challenges. Through collaborative efforts, optimal health for people, animals and the environment can be achieved. The One Health approach plays a significant role in the prevention and control of zoonotic diseases. It has also been noted that 75% of newly emerging human infectious diseases are zoonotic. In Kenya, One Health has been embraced and strongly advocated for by One Health East and Central Africa (OHCEA), which was inaugurated on 17th October 2010 at a historic meeting facilitated by USAID, with participants from seven public health schools and seven faculties of veterinary medicine in Eastern Africa and two American universities (Tufts and the University of Minnesota), in addition to RESPOND project staff. The study was conducted in Loitoktok Sub County, specifically in the Amboseli Ecosystem. The Amboseli ecosystem covers an area of 5,700 square kilometers and stretches between Mt. Kilimanjaro, the Chyulu Hills, Tsavo West National Park and the Kenya/Tanzania border. The area is arid to semi-arid and is more suitable for pastoralism, with a high potential for conservation of wildlife and tourism enterprises. The ecosystem consists of the Amboseli National Park, which is surrounded by six group ranches: Kimana, Olgulului, Selengei, Mbirikani, Kuku and Rombo in Loitoktok District. The Manyatta of study was Osiram Cultural Manyatta in Mbirikani group ranch. Apart from visiting the Manyatta, we also visited the sub-county hospital, slaughter slab, forest service, Kimana market, and the Amboseli National Park. The aim of the study was to identify the One Health issues facing the community. This was done by conducting a community needs assessment and prioritization. Different methods were used to collect the qualitative and numerical data, including key informant interviews and focus group discussions. We also guided community members in drawing their resource map, which helped identify the major resources on their land as well as some of the issues they were facing. Matrix piling, root cause analysis and force field analysis tools were used to establish the One Health related priority issues facing community members. Skits were also used to present interventions for the major One Health issues to the community. Among the prioritized needs of the community were water scarcity and inadequate markets for their beadwork. The group intervened on the various needs of the Manyatta. For water scarcity, we educated the community on water harvesting methods using gutters as well as proper storage using tanks and earth dams. The community was also encouraged to recycle and conserve water. To improve market access, we showed the community how to upload their products online: a page was opened for them and uploading photos was demonstrated. They were also encouraged to be innovative to attract more clients.
Keywords: Amboseli ecosystem, community interventions, community needs assessment and prioritization, one health issues
Procedia PDF Downloads 168
1223 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel
Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani
Abstract:
Automobile manufacturers are constantly seeking means to reduce the weight of car bodies. The use of several steel grades in auto body assembly has been found to be a good technique to lighten vehicle weight. In recent years, the use of dual-phase (DP) steels, transformation-induced plasticity (TRIP) steels and boron steels in some parts of the auto body has become a necessity because of their light weight. However, these steels are hardenable: when they undergo a fast heat treatment, the resulting microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the most widely used techniques in assembling auto bodies, becomes problematic in the case of these steels. Since RSW is a process in which steel is heated and cooled in a very short period of time, the resulting weld nugget is mostly fully martensitic, especially in the case of DP, TRIP and boron steels, but this also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite, in turn, must be avoided as much as possible when welding steel because it is the principal source of brittleness and weakens the weld nugget. Thus, this work aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembly. The phase transformation kinetics during RSW were predicted through modelling of the whole welding process, and a technique called post-weld heat treatment (PWHT) was applied in order to reduce the martensite fraction in the weld nugget. The simulation was performed for the AISI 1010 grade, and the results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite and pearlite during the cooling of the weld nugget. Welding experiments were carried out in parallel, and micrographic analyses show the presence of several phases in the weld nugget. The experimental weld geometry and phase proportions are in good agreement with the simulation results, demonstrating the validity of the model.
Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics
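For readers unfamiliar with phase transformation kinetics, the short Python sketch below shows two textbook relations often used in weld thermal-cycle models. The abstract does not state which formulation the authors implemented, so the equations and constants here are illustrative assumptions only.

```python
# Hedged sketch of two common kinetics relations used in weld cooling models;
# not the authors' model, and the parameter values are illustrative.
import math

def jmak_fraction(k: float, n: float, t: float) -> float:
    """Diffusional transformation fraction X = 1 - exp(-k * t^n) (JMAK)."""
    return 1.0 - math.exp(-k * t**n)

def koistinen_marburger(ms_celsius: float, t_celsius: float,
                        alpha: float = 0.011) -> float:
    """Martensite fraction below Ms: f = 1 - exp(-alpha * (Ms - T))."""
    if t_celsius >= ms_celsius:
        return 0.0
    return 1.0 - math.exp(-alpha * (ms_celsius - t_celsius))

# Example: fraction transformed after a 2 s isothermal hold, and martensite
# formed on quenching to 25 C for an assumed Ms of 450 C.
print(jmak_fraction(k=0.5, n=2.0, t=2.0))                 # ~0.86
print(koistinen_marburger(ms_celsius=450, t_celsius=25))  # ~0.99
```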
Procedia PDF Downloads 116
1222 Study on Runoff Allocation Responsibilities of Different Land Uses in a Single Catchment Area
Authors: Chuan-Ming Tung, Jin-Cheng Fu, Chia-En Feng
Abstract:
In recent years, the rapid development of urban land in Taiwan has led to a constant increase in impervious surface area, which has increased the risk of waterlogging during heavy rainfall. Therefore, promoting runoff allocation responsibilities has often been used in recent years as a means of reducing regional flooding. This study examines a single catchment area covering both urban and rural land. Based on the Storm Water Management Model (SWMM), urban and rural land in a single catchment area were explored to develop runoff allocation responsibilities according to their respective land-use control regulations. The impacts of runoff increments and reductions in the sub-catchments were studied to understand the effect of highly developed urban land on the flood risk of the rural land at the back end. The analysis considered short-duration, 1-hour rainfalls with return periods of 2, 5, 10, and 25 years. If the study area were fully developed, the peak discharge at the outlet would increase by 24.46%-22.97% without runoff allocation responsibilities, and the front-end urban land would increase the runoff reaching the back-end rural land by 76.19%-46.51%. However, if runoff allocation responsibilities were implemented in the study area, the peak discharge could be reduced by 58.38-63.08%, which would allow the front end to reduce the peak flow delivered to the back end by 54.05%-23.81%. In addition, the researchers found that, from the perspective of runoff allocation responsibility per unit area, residential areas on urban land benefit from the relevant laws and regulations of the urban system and therefore reduce flooding more effectively than residential land in rural areas. For rural land, the development scale of residential land is generally small, which makes its flood reduction effect better than that of industrial land. Agricultural land requires a large area, resulting in the lowest share of flow per unit area. From the planners' point of view, this study suggests that rural land around the city should also be assigned responsibility to share the runoff, setting up rainwater storage facilities in the same way as urban land; agricultural land resources can also be used by raising field ridges for flood storage, in order to improve regional disaster reduction capacity and resilience.
Keywords: runoff allocation responsibilities, land use, flood mitigation, SWMM
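A minimal Python sketch of the peak-discharge comparison reported above follows. It assumes two outlet hydrographs exported from SWMM runs with and without runoff allocation responsibilities; the arrays are made-up placeholders, not the study data.

```python
# Illustrative comparison of outlet peak discharge between two SWMM scenarios;
# the hydrograph values below are placeholders.
import numpy as np

q_no_allocation = np.array([1.2, 3.5, 8.9, 12.4, 7.1, 3.0])   # m^3/s
q_with_allocation = np.array([1.1, 2.6, 4.8, 5.1, 4.0, 2.2])  # m^3/s

peak_reduction = (q_no_allocation.max() - q_with_allocation.max()) / q_no_allocation.max()
print(f"peak discharge reduced by {peak_reduction:.1%}")
```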
Procedia PDF Downloads 102
1221 DEMs: A Multivariate Comparison Approach
Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo
Abstract:
The evaluation of the quality of a data product is based on the comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariable approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02, with a resolution of 2x2 meters, and DEM05, with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, the same process can be applied to compare the similarity of different DEM data sets (e.g. the DEM05 versus the DEM02). In summary, an innovative alternative for the comparison of DEM data sets based on a multivariate non-parametric perspective has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
Keywords: data quality, DEM, Kolmogorov-Smirnov test, multivariate DEM comparison
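The following Python sketch illustrates the comparison idea with two-sample Kolmogorov-Smirnov tests on elevation, slope and aspect samples. The single joint test on the sum of standardised variables is one possible reading of the "convolution" approach described above, not the authors' exact implementation, and the samples are synthetic placeholders.

```python
# Hedged sketch: per-variable and joint KS comparisons between a reference DEM
# and an evaluated DEM product; samples here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder samples; in practice these come from co-located DEM cells.
ref = {v: rng.normal(size=5000) for v in ("elevation", "slope", "aspect")}
prod = {v: rng.normal(size=5000) for v in ("elevation", "slope", "aspect")}

# Per-variable tests (these would normally need a Bonferroni-type correction).
for v in ref:
    stat, p = ks_2samp(ref[v], prod[v])
    print(f"{v}: D = {stat:.3f}, p = {p:.3f}")

# Single joint test on the sum of standardised variables.
def standardise(x):
    return (x - x.mean()) / x.std()

joint_ref = sum(standardise(ref[v]) for v in ref)
joint_prod = sum(standardise(prod[v]) for v in prod)
print(ks_2samp(joint_ref, joint_prod))
```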
Procedia PDF Downloads 114
1220 Knowledge Loss Risk Assessment for Departing Employees: An Exploratory Study
Authors: Muhammad Saleem Ullah Khan Sumbal, Eric Tsui, Ricky Cheong, Eric See To
Abstract:
Organizations are exposed to the threat of losing valuable knowledge when employees leave, whether due to retirement, resignation, job change, or death and disability. Due to changing economic conditions, globalization and an aging workforce, organizations face challenges regarding the retention of valuable knowledge. On the one hand, large numbers of employees are going to retire from organizations; on the other hand, the younger generation does not want to work in a company for a long time, and there is an increasing trend of frequent job changes among the new generation. Because of these factors, organizations need to make sure that they capture the knowledge of an employee before (s)he walks out of the door. The first step in this process is to know what type of knowledge the employee possesses and whether this knowledge is important for the organization. The literature reveals that despite the serious consequences of knowledge loss in terms of organizational productivity and competitive advantage, not much work has been done in the area of knowledge loss assessment of departing employees. An important step in the knowledge retention process is to determine the critical 'at risk' knowledge. Thus, knowledge loss risk assessment is a process by which organizations can gauge the importance of the knowledge of a departing employee. The purpose of this study is to explore knowledge loss risk assessment by conducting a qualitative study in the oil and gas sector. By engaging in dialogues with managers and executives of the organizations through in-depth interviews and adopting a grounded methodology approach, the research explores: i) Are there any measures adopted by organizations to assess the risk of knowledge loss from departing employees? ii) Which factors are crucial for knowledge loss assessment in the organizations? iii) How can employees be prioritized for knowledge retention according to their criticality? A grounded theory approach is used when little knowledge is available in the area under research; new knowledge about the topic is thus generated through in-depth exploration using methods such as interviews and a systematic approach to data analysis. The outcome of the study will be a model of knowledge loss risk based on factors such as the likelihood of knowledge loss, the consequence/impact of knowledge loss, and the quality of the knowledge of departing employees. Initial results show that knowledge loss assessment is quite crucial for organizations, as it helps determine what types of knowledge employees possess, e.g. organizational knowledge, subject matter expertise or relationship knowledge. Based on that, it can be assessed which employees are more important for the organization and how to prioritize the knowledge retention process for departing employees.
Keywords: knowledge loss, risk assessment, departing employees, Hong Kong organizations
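As a purely illustrative sketch (not a validated instrument from the study), the Python snippet below combines the three factors named above, likelihood of loss, consequence/impact and knowledge quality, into a single risk index used to prioritise departing employees. All names and scores are hypothetical.

```python
# Illustrative knowledge-loss risk scoring; scale choices and weights are
# assumptions for the example only.
from dataclasses import dataclass

@dataclass
class DepartingEmployee:
    name: str
    likelihood: int   # 1 (low) .. 5 (high) chance the knowledge is lost
    impact: int       # 1 .. 5 consequence for the organisation
    quality: int      # 1 .. 5 uniqueness/criticality of the knowledge

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact * self.quality

staff = [
    DepartingEmployee("retiring field engineer", 5, 4, 5),
    DepartingEmployee("junior analyst changing jobs", 4, 2, 2),
]
for e in sorted(staff, key=lambda s: s.risk_score, reverse=True):
    print(e.name, e.risk_score)
```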
Procedia PDF Downloads 406
1219 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying the high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information, contributing to stakeholder goals, can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?" the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders' decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method -a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers. Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of ‘information distortion-by-information quality’ underscores the interconnected nature of these factors. 
In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape.Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
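The tier-by-tier Pearson correlation analysis described above could be reproduced with a few lines of pandas; the sketch below is a minimal, hypothetical version in which column names and the tiny example frame are assumptions, not the survey data.

```python
# Minimal sketch of per-tier Pearson correlations between an information
# distortion score and selected quality dimensions; data are placeholders.
import pandas as pd

df = pd.DataFrame({
    "tier": ["Tier 2", "Tier 2", "Tier 1", "Tier 1", "Tier 0.5", "Tier 0.5"],
    "distortion": [4, 5, 2, 3, 5, 4],
    "accuracy": [4, 5, 2, 2, 3, 3],
    "timeliness": [3, 4, 3, 3, 2, 2],
})

for tier, group in df.groupby("tier"):
    corr = group[["distortion", "accuracy", "timeliness"]].corr(method="pearson")
    print(tier)
    print(corr["distortion"].drop("distortion"), "\n")
```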
Procedia PDF Downloads 63
1218 Prediction of Cardiovascular Markers Associated With Aromatase Inhibitors Side Effects Among Breast Cancer Women in Africa
Authors: Jean Paul M. Milambo
Abstract:
Purpose: Aromatase inhibitors (AIs) are indicated in the treatment of hormone receptor-positive breast cancer in postmenopausal women in various settings. Studies in some developed countries have reported cardiovascular events associated with AIs. To date, the data needed for evidence-based recommendations in African clinical settings are sparse due to the lack of cancer registries, capacity building and surveillance systems. Therefore, this study was conducted to assess the feasibility of HyBeacon® probe genotyping adjunctive to standard care for timely prediction and diagnosis of AI-associated adverse events in breast cancer survivors in Africa. Methods: A cross-sectional study was conducted to assess knowledge of point-of-care testing (POCT) across six African countries, with participants reached through an online survey and by telephone. The incremental cost-effectiveness ratio (ICER) was calculated from a diagnostic accuracy study and based on mathematical modeling. Results: One hundred twenty-six participants were considered for analysis (mean age = 61 years; SD = 7.11 years; 95% CI: 60-62 years). Comparison of HyBeacon® probe genotyping to Sanger sequencing showed a sensitivity of 99% (95% CI: 94.55% to 99.97%), specificity of 89.44% (95% CI: 87.25 to 91.38%), PPV of 51% (95% CI: 43.77 to 58.26%), and NPV of 99.88% (95% CI: 99.31 to 100.00%). Based on the mathematical model, the ICER was R7,044.55. Conclusion: POCT using HyBeacon® probe genotyping for AI-associated adverse events may be cost-effective in many African clinical settings. Integration of preventive measures for early detection and prevention, guided by breast cancer subtype diagnosis with specific clinical, biomedical and genetic screenings, may improve cancer survivorship. The feasibility of POCT was demonstrated, but implementation would require better integration of POCT within primary health care and referral cancer hospitals, together with capacity-building activities at different levels of the health system. This finding is pertinent for a future envisioned implementation and global scale-up of POCT-based initiatives as part of risk communication strategies with clear management pathways.
Keywords: breast cancer, diagnosis, point of care, South Africa, aromatase inhibitors
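The quantities reported above can be computed from a 2x2 confusion matrix and a simple incremental cost-effectiveness ratio; the Python sketch below shows the arithmetic. The counts and costs are placeholders, not the study data.

```python
# Hedged sketch of diagnostic accuracy metrics and the ICER formula; all
# numeric inputs are illustrative placeholders.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def icer(cost_new, cost_standard, effect_new, effect_standard):
    """ICER = (C_new - C_standard) / (E_new - E_standard)."""
    return (cost_new - cost_standard) / (effect_new - effect_standard)

print(diagnostic_metrics(tp=95, fp=90, fn=1, tn=760))
print(f"ICER = R{icer(12000, 9000, 1.8, 1.2):,.2f} per unit of effect")
```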
Procedia PDF Downloads 75
1217 Quantitative Seismic Interpretation in the LP3D Concession, Central of the Sirte Basin, Libya
Authors: Tawfig Alghbaili
Abstract:
The LP3D Field is located near the center of the Sirte Basin in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east, a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation, which is mainly limestone interbedded with shale. The average reservoir thickness is 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic data interpretation was carried out and well log data were analyzed. Six reflectors corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations were picked and mapped. Special attention was given to fault interpretation because of the complexity of the faults in the structure area. Different attribute analyses were performed to better understand the lateral extension of the structures and to obtain a clear image of the fault blocks. Time-to-depth conversion was computed using velocity modeling generated from checkshot and sonic data. A simplified stratigraphic cross-section was drawn through wells A1, A2, A3, and A4-LP3D, demonstrating the distribution and thickness variations of the Beda reservoir across the study area. Petrophysical analysis of wireline logs was also carried out, and cross-plots of selected petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was designed and run to generate fault, facies, and petrophysical models and to calculate the reservoir volumetrics. This study concluded that the depth structure map of the Beda Formation shows the main structure in the area of study, which is a north-south faulted anticline. Based on the Beda reservoir models, the base-case volumetrics were calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and build a better understanding of the fault system in the area.
Keywords: LP3D Field, Beda Formation, reservoir models, seismic attributes
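One simple way to build the time-to-depth relation mentioned above is to interpolate between checkshot control points; the Python sketch below illustrates this. The checkshot pairs are invented for the example and are not the velocity model used in the study.

```python
# Illustrative time-to-depth conversion from checkshot pairs (two-way time vs.
# measured depth); values are placeholders.
import numpy as np

twt_ms = np.array([0.0, 400.0, 800.0, 1200.0, 1600.0])    # two-way time, ms
depth_m = np.array([0.0, 450.0, 1000.0, 1650.0, 2400.0])  # measured depth, m

def time_to_depth(horizon_twt_ms: float) -> float:
    """Linearly interpolate depth between checkshot control points."""
    return float(np.interp(horizon_twt_ms, twt_ms, depth_m))

print(time_to_depth(1000.0))  # depth of a horizon picked at 1000 ms TWT
```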
Procedia PDF Downloads 211
1216 Leveraging Information for Building Supply Chain Competitiveness
Authors: Deepika Joshi
Abstract:
Operations in automotive industry rely greatly on information shared between Supply Chain (SC) partners. This leads to efficient and effective management of SC activity. Automotive sector in India is growing at 14.2 percent per annum and has huge economic importance. We find that no study has been carried out on the role of information sharing in SC management of Indian automotive manufacturers. Considering this research gap, the present study is planned to establish the significance of information sharing in Indian auto-component supply chain activity. An empirical research was conducted for large scale auto component manufacturers from India. Twenty four Supply Chain Performance Indicators (SCPIs) were collected from existing literature. These elements belong to eight diverse but internally related areas of SC management viz., demand management, cost, technology, delivery, quality, flexibility, buyer-supplier relationship, and operational factors. A pair-wise comparison and an open ended questionnaire were designed using these twenty four SCPIs. The questionnaire was then administered among managerial level employees of twenty-five auto-component manufacturing firms. Analytic Network Process (ANP) technique was used to analyze the response of pair-wise questionnaire. Finally, twenty-five priority indexes are developed, one for each respondent. These were averaged to generate an industry specific priority index. The open-ended questions depicted strategies related to information sharing between buyers and suppliers and their influence on supply chain performance. Results show that the impact of information sharing on certain performance indicators is relatively greater than their corresponding variables. For example, flexibility, delivery, demand and cost related elements have massive impact on information sharing. Technology is relatively less influenced by information sharing but it immensely influence the quality of information shared. Responses obtained from managers reveal that timely and accurate information sharing lowers the cost, increases flexibility and on-time delivery of auto parts, therefore, enhancing the competitiveness of Indian automotive industry. Any flaw in dissemination of information can disturb the cycle time of both the parties and thus increases the opportunity cost. Due to supplier’s involvement in decisions related to design of auto parts, quality conformance is found to improve, leading to reduction in rejection rate. Similarly, mutual commitment to share right information at right time between all levels of SC enhances trust level. SC partners share information to perform comprehensive quality planning to ingrain total quality management. This study contributes to operations management literature which faces scarcity of empirical examination on this subject. It views information sharing as a building block which firms can promote and evolve to leverage the operational capability of all SC members. It will provide insights for Indian managers and researchers as every market is unique and suppliers and buyers are driven by local laws, industry status and future vision. While major emphasis in this paper is given to SC operations happening between domestic partners, placing more focus on international SC can bring in distinguished results.Keywords: Indian auto component industry, information sharing, operations management, supply chain performance indicators
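To clarify how priority indexes can be derived from pairwise comparisons, the Python sketch below computes a local priority vector with the principal-eigenvector method. In a full ANP analysis these local vectors would be assembled into a supermatrix, which is not shown here; the 3x3 matrix is illustrative, not the survey data.

```python
# Hedged sketch of one local priority vector from a pairwise comparison matrix
# (eigenvector method), with a basic consistency check; data are illustrative.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR) using the random index for n = 3 (0.58).
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.58
print("priority vector:", np.round(weights, 3), "CR:", round(cr, 3))
```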
Procedia PDF Downloads 549
1215 Incidence and Predictors of Mortality Among HIV Positive Children on Art in Public Hospitals of Harer Town, Enrolled From 2011 to 2021
Authors: Getahun Nigusie
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the long-term effect of treatment on the survival of HIV-positive children, especially in the study area. Objective: To assess the incidence and predictors of mortality among HIV-positive children on ART in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in ART clinics from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific points in time after the introduction of ART. Kaplan-Meier survival curves together with the log-rank test were used to compare survival between different categories of covariates, and a multivariable Cox proportional hazards regression model was used to estimate adjusted hazard ratios. Variables with p-values ≤0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2,549.6 child-years (30,596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101–117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively. Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infections were independent predictors of death. Increased early screening and management of these predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, Ethiopia
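The survival workflow described above (Kaplan-Meier curve, log-rank comparison, and a multivariable Cox model yielding adjusted hazard ratios) can be sketched with the lifelines package as shown below. The bundled demo dataset stands in for the patient records, which are not public; column names therefore differ from the study's.

```python
# Minimal lifelines sketch of the survival analysis steps; the demo dataset is
# a stand-in for the actual cohort data.
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test
from lifelines.datasets import load_rossi

df = load_rossi()  # columns include 'week' (time) and 'arrest' (event)

kmf = KaplanMeierFitter()
kmf.fit(df["week"], event_observed=df["arrest"])
print(kmf.median_survival_time_)

# Log-rank test between two covariate categories (here: financial aid yes/no).
a, b = df[df["fin"] == 1], df[df["fin"] == 0]
print(logrank_test(a["week"], b["week"], a["arrest"], b["arrest"]).p_value)

# Multivariable Cox model -> adjusted hazard ratios for each covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()
```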
Procedia PDF Downloads 20
1214 Investigation of Wind Farm Interaction with Ethiopian Electric Power’s Grid: A Case Study at Ashegoda Wind Farm
Authors: Fikremariam Beyene, Getachew Bekele
Abstract:
Ethiopia is currently on the move with various projects to raise the amount of power generated in the country. The progress observed in recent years indicates this fact clearly and indisputably. The rural electrification program, the modernization of the power transmission system, and the development of wind farms are some of the main accomplishments worth mentioning. Wind power is currently embraced globally as one of the most important sources of energy, mainly for its environmentally friendly characteristics and because, once installed, the resource is available free of charge. However, the integration of a wind power plant with an existing network poses many challenges that need to be given serious attention. In Ethiopia, a number of wind farms are either installed or under construction, and a series of wind farms is planned for the near future. Ashegoda Wind Farm (13.2°, 39.6°), which is the subject of this study, is the first large-scale wind farm under construction, with a capacity of 120 MW. The first phase (30 MW) of the 120 MW has been completed and is expected to be connected to the grid soon. This paper is concerned with the investigation of the wind farm's interaction with the national grid under transient operating conditions. The main concern is the fault ride-through (FRT) capability of the system when the grid voltage drops to exceedingly low values because of a short-circuit fault, as well as the active and reactive power behavior of the wind turbines after the fault is cleared. On the wind turbine side, detailed dynamic modelling of a 1 MW variable-speed wind turbine running with a squirrel cage induction generator and full-scale power electronic converters is carried out and analyzed using the simulation software DIgSILENT PowerFactory. On the Ethiopian Electric Power Corporation side, after sufficient data had been collected for the analysis, the grid network was modeled. In the model, the fault ride-through (FRT) capability of the plant is studied by applying a 3-phase short circuit at the grid terminal near the wind farm. The results show that the Ashegoda wind farm can ride through deep voltage dips within a short time, and the active and reactive power performance of the wind farm is also promising.
Keywords: squirrel cage induction generator, active and reactive power, DIgSILENT PowerFactory, fault ride-through capability, 3-phase short circuit
Procedia PDF Downloads 171
1213 Consumer Protection Law For Users Mobile Commerce as a Global Effort to Improve Business in Indonesia
Authors: Rina Arum Prastyanti
Abstract:
Information technology has changed the way transactions are carried out and has enabled new opportunities in business transactions. The problems faced by mobile commerce consumers include, among others: difficulty accessing full information about the products on offer and the forms of transactions, given the small screen and limited storage capacity; the need to protect children from various forms of excessive supply and usage; errors in accessing and disseminating personal data; and more complex problems concerning agreements, dispute resolution mechanisms that can protect consumers, and assurance of the security of personal data. No less important are the risks related to payment and to personal payment information, which are also important issues requiring a solution. The purpose of this study is 1) to describe the phenomenon of the use of mobile commerce in Indonesia, 2) to determine the form of legal protection for consumers using mobile commerce, and 3) to identify the right type of law to provide legal protection for mobile commerce consumers. This is descriptive qualitative research drawing on primary and secondary data sources; it is normative legal research, and data were collected through library research. The analysis technique used is deductive analysis. Growing mobile technology, more affordable prices and low tariffs resulting from provider competition have contributed to the increasing number of mobile users; Indonesia ranks fourth in the world in mobile phone users, with the number of mobile phones estimated at around 250.1 million for a population of 237,556,363. The Indonesian form of legal protection for the use of mobile commerce is still only a part of Law No. 11 of 2008 on Information and Electronic Transactions, and until now, there has been no rule of law that specifically regulates mobile commerce. A legal protection model that can be applied to protect mobile commerce consumers should ensure that consumers receive information about the potential security and privacy challenges they may face in m-commerce and the measures that can be used to limit the risk; encourage the development of security measures and built-in security features; encourage mobile operators to implement data security policies and measures to prevent unauthorized transactions; and provide redress methods that are appropriate in both timeliness and effectiveness when consumers suffer financial loss.
Keywords: mobile commerce, legal protection, consumer, effectiveness
Procedia PDF Downloads 363
1212 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components
Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin
Abstract:
This study presents an on-site acoustic inspection methodology for quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, by successively performing acoustic beamforming and sound intensity measurements. A portable experimental setup is developed, consisting of an omnidirectional broadband acoustic source and a microphone array and sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows and junctions, the three-dimensional sound intensity field in the vicinity of junctions, and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of the simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform superimposing them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process of buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room, featuring a prefabricated panel. The latter is installed with controlled defects such as lack of insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakages from the building defects and assist in correcting them. The second validation case is performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurements and visualisation tools, the homogeneity of the partition installation is evaluated and leakages from junctions and doors are identified. Furthermore, the integration of acoustic indicators together with thermal and geometrical indicators via the augmented reality platform is shown.Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localization
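The global localisation step mentioned above relies on acoustic beamforming; the Python sketch below shows a basic delay-and-sum beamformer for a small linear microphone array under free-field, far-field assumptions. The geometry and signals are synthetic and do not correspond to the portable setup described in the abstract.

```python
# Hedged sketch of delay-and-sum beamforming on synthetic signals; geometry,
# frequencies and sampling rate are illustrative assumptions.
import numpy as np

c = 343.0          # speed of sound, m/s
fs = 48_000        # sampling rate, Hz
mic_x = np.array([0.0, 0.05, 0.10, 0.15])  # 4-mic linear array positions, m

# Synthetic 1 kHz plane wave arriving from 30 degrees off broadside.
true_angle = np.deg2rad(30)
t = np.arange(2048) / fs
delays_true = mic_x * np.sin(true_angle) / c
signals = np.stack([np.sin(2 * np.pi * 1000 * (t - d)) for d in delays_true])

def steered_power(angle_rad: float) -> float:
    """Sum the mic signals after compensating the steering delays."""
    delays = mic_x * np.sin(angle_rad) / c
    shifted = [np.interp(t, t - d, s) for d, s in zip(delays, signals)]
    return float(np.mean(np.sum(shifted, axis=0) ** 2))

angles = np.deg2rad(np.linspace(-90, 90, 181))
powers = [steered_power(a) for a in angles]
print("estimated direction:", np.rad2deg(angles[int(np.argmax(powers))]), "deg")
```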
Procedia PDF Downloads 381
1211 The Regulation of Reputational Information in the Sharing Economy
Authors: Emre Bayamlıoğlu
Abstract:
This paper aims to provide an account of the legal and the regulative aspects of the algorithmic reputation systems with a special emphasis on the sharing economy (i.e., Uber, Airbnb, Lyft) business model. The first section starts with an analysis of the legal and commercial nature of the tripartite relationship among the parties, namely, the host platform, individual sharers/service providers and the consumers/users. The section further examines to what extent an algorithmic system of reputational information could serve as an alternative to legal regulation. Shortcomings are explained and analyzed with specific examples from Airbnb Platform which is a pioneering success in the sharing economy. The following section focuses on the issue of governance and control of the reputational information. The section first analyzes the legal consequences of algorithmic filtering systems to detect undesired comments and how a delicate balance could be struck between the competing interests such as freedom of speech, privacy and the integrity of the commercial reputation. The third section deals with the problem of manipulation by users. Indeed many sharing economy businesses employ certain techniques of data mining and natural language processing to verify consistency of the feedback. Software agents referred as "bots" are employed by the users to "produce" fake reputation values. Such automated techniques are deceptive with significant negative effects for undermining the trust upon which the reputational system is built. The third section is devoted to explore the concerns with regard to data mobility, data ownership, and the privacy. Reputational information provided by the consumers in the form of textual comment may be regarded as a writing which is eligible to copyright protection. Algorithmic reputational systems also contain personal data pertaining both the individual entrepreneurs and the consumers. The final section starts with an overview of the notion of reputation as a communitarian and collective form of referential trust and further provides an evaluation of the above legal arguments from the perspective of public interest in the integrity of reputational information. The paper concludes with certain guidelines and design principles for algorithmic reputation systems, to address the above raised legal implications.Keywords: sharing economy, design principles of algorithmic regulation, reputational systems, personal data protection, privacy
Procedia PDF Downloads 465
1210 A Study on the Measurement of Spatial Mismatch and the Influencing Factors of “Job-Housing” in Affordable Housing from the Perspective of Commuting
Authors: Daijun Chen
Abstract:
Affordable housing is subsidized by the government to meet the housing demand of low and middle-income urban residents in the process of urbanization and to alleviate the housing inequality caused by market-based housing reforms. It is a recognized fact that the living conditions of the insured have been improved while constructing the subsidized housing. However, the choice of affordable housing is mostly in the suburbs, where the surrounding urban functions and infrastructure are incomplete, resulting in the spatial mismatch of "jobs-housing" in affordable housing. The main reason for this problem is that the residents of affordable housing are more sensitive to the spatial location of their residence, but their selectivity and controllability to the housing location are relatively weak, which leads to higher commuting costs. Their real cost of living has not been effectively reduced. In this regard, 92 subsidized housing communities in Nanjing, China, are selected as the research sample in this paper. The residents of the affordable housing and their commuting Spatio-temporal behavior characteristics are identified based on the LBS (location-based service) data. Based on the spatial mismatch theory, spatial mismatch indicators such as commuting distance and commuting time are established to measure the spatial mismatch degree of subsidized housing in different districts of Nanjing. Furthermore, the geographically weighted regression model is used to analyze the influencing factors of the spatial mismatch of affordable housing in terms of the provision of employment opportunities, traffic accessibility and supporting service facilities by using spatial, functional and other multi-source Spatio-temporal big data. The results show that the spatial mismatch of affordable housing in Nanjing generally presents a "concentric circle" pattern of decreasing from the central urban area to the periphery. The factors affecting the spatial mismatch of affordable housing in different spatial zones are different. The main reasons are the number of enterprises within 1 km of the affordable housing district and the shortest distance to the subway station. And the low spatial mismatch is due to the diversity of services and facilities. Based on this, a spatial optimization strategy for different levels of spatial mismatch in subsidized housing is proposed. And feasible suggestions for the later site selection of subsidized housing are also provided. It hopes to avoid or mitigate the impact of "spatial mismatch," promote the "spatial adaptation" of "jobs-housing," and truly improve the overall welfare level of affordable housing residents.Keywords: affordable housing, spatial mismatch, commuting characteristics, spatial adaptation, welfare benefits
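For readers unfamiliar with geographically weighted regression, the Python sketch below shows its core idea at a single target location: observations are weighted by a Gaussian kernel of distance and a local least-squares fit is solved. A real study would use a dedicated GWR package and calibrate the bandwidth; the data here are synthetic and the variable names are hypothetical.

```python
# Hedged sketch of one local GWR fit (Gaussian kernel weighted least squares);
# synthetic data, illustrative bandwidth.
import numpy as np

def local_gwr_coefficients(coords, X, y, target, bandwidth):
    """Return local regression coefficients at `target` (intercept first)."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(X)), X])       # add intercept column
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))           # community locations
X = rng.normal(size=(200, 2))                        # e.g. nearby firms, metro distance
y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.2, size=200)

print(local_gwr_coefficients(coords, X, y, target=np.array([5.0, 5.0]), bandwidth=2.0))
```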
Procedia PDF Downloads 106
1209 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia
Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi
Abstract:
Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. It plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. Therefore, it contributes to controlling movement, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal pathways and cerebellovestibular pathways. Hence, trying to stimulate the cerebellum by facilitatory protocols will add to our motor control and balance function. Non-invasive brain stimulation, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), has recently emerged as effective neuromodulators to influence motor and nonmotor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with Ataxia. Patients and methods: This was a randomized single-blinded placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received Sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions of high-frequency rTMS over the cerebellum, followed by an intensive exercise training program. Sessions were given three times per week for four weeks. The active group protocol had a frequency of 10 Hz rTMS over the cerebellar vermis, work period 5S, number of trains 25, and intertrain interval 25s. The total number of pulses was 1250 pulses per session. The control group received Sham rTMS using a placebo program designed for this treatment. Both groups of patients received an intensive exercise program, which included generalized strengthening exercises, endurance and aerobic training, trunk abdominal exercises, generalized balance training exercises, and task-oriented training such as Boxing. As a primary outcome measure the Modified ICARS was used. Static Posturography was done with: Patients were tested both with open and closed eyes. Secondary outcome measures included the expanded Disability Status Scale (EDSS) and 8 Meter walk test (8MWT). Results: The active group showed significant improvements in all the functional scales, modified ICARS, EDSS, and 8-meter walk test, in addition to significant differences in static Posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation
Procedia PDF Downloads 89
1208 Impact of Land-Use and Climate Change on the Population Structure and Distribution Range of the Rare and Endangered Dracaena ombet and Dobera glabra in Northern Ethiopia
Authors: Emiru Birhane, Tesfay Gidey, Haftu Abrha, Abrha Brhan, Amanuel Zenebe, Girmay Gebresamuel, Florent Noulèkoun
Abstract:
Dracaena ombet and Dobera glabra are two of the most rare and endangered tree species in dryland areas. Unfortunately, their sustainability is being compromised by different anthropogenic and natural factors. However, the impacts of ongoing land use and climate change on the population structure and distribution of the species are less explored. This study was carried out in the grazing lands and hillside areas of the Desa'a dry Afromontane forest, northern Ethiopia, to characterize the population structure of the species and predict the impact of climate change on their potential distributions. In each land-use type, abundance, diameter at breast height, and height of the trees were collected using 70 sampling plots distributed over seven transects spaced one km apart. The geographic coordinates of each individual tree were also recorded. The results showed that the species populations were characterized by low abundance and unstable population structure. The latter was evinced by a lack of seedlings and mature trees. The study also revealed that the total abundance and dendrometric traits of the trees were significantly different between the two land uses. The hillside areas had a denser abundance of bigger and taller trees than the grazing lands. Climate change predictions using the MaxEnt model highlighted that future temperature increases coupled with reduced precipitation would lead to significant reductions in the suitable habitats of the species in northern Ethiopia. The species' suitable habitats were predicted to decline by 48–83% for D. ombet and 35–87% for D. glabra. Hence, to sustain the species populations, different strategies should be adopted, namely the introduction of alternative livelihoods (e.g., gathering NTFP) to reduce the overexploitation of the species for subsistence income and the protection of the current habitats that will remain suitable in the future using community-based exclosures. Additionally, the preservation of the species' seeds in gene banks is crucial to ensure their long-term conservation.Keywords: grazing lands, hillside areas, land-use change, MaxEnt, range limitation, rare and endangered tree species
Procedia PDF Downloads 94
1207 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)- Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) contains coarse granularity multi-core CPU (central processing unit) and mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupled with these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce the resource consumption and lower the overall cost of the system. However, in present common scenarios, the applications always utilize only one type of accelerator because the development approach supporting the collaboration of the heterogeneous processors faces challenges. Therefore, a systematic approach takes advantage of write-once-run-anywhere portability, high execution performance of the modules mapped to various architectures and facilitates the exploration of design space. In this paper, A servant-execution-flow model is proposed for the abstraction of the cooperation of the heterogeneous processors, which supports task partition, communication and synchronization. At its first run, the intermediate language represented by the data flow diagram can generate the executable code of the target processor or can be converted into high-level programming languages. The instantiation parameters efficiently control the relationship between the modules and computational units, including two hierarchical processing units mapping and adjustment of data-level parallelism. An embedded system of a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching, etc., are analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system with less than 35% resources achieves similar performance to the pure FPGA and approximate energy efficiency.Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 116
1206 Estimation of Small Hydropower Potential Using Remote Sensing and GIS Techniques in Pakistan
Authors: Malik Abid Hussain Khokhar, Muhammad Naveed Tahir, Muhammad Amin
Abstract:
Energy demand has increased manifold due to the increasing population, urban sprawl and rapid socio-economic improvements. Low water storage capacity in dams for sustained hydropower generation, land cover and land use are the key parameters creating problems for increased energy production. The overall installed hydropower capacity of Pakistan is more than 35,000 MW, whereas Pakistan is producing up to 17,000 MW against a requirement of more than 22,000 MW, resulting in a shortfall of 5,000-7,000 MW. Therefore, there is a dire need to develop small hydropower to fulfill the upcoming requirements. In this regard, abundant rainfall and snow-fed, fast-flowing perennial tributaries and streams in the northern mountain regions of Pakistan offer enormous hydropower potential throughout the year. Rivers flowing in KP (Khyber Pakhtunkhwa) province, GB (Gilgit Baltistan) and AJK (Azad Jammu & Kashmir) possess sufficient water availability for rapid energy growth. Against this backdrop, small hydropower plants are considered very suitable measures for a greener environment and a sustainable power option for the development of such regions. The aim of this study is to identify potential sites for small hydropower plants and the stream distribution according to the stream network available in the basins of the study area. The proposed methodology focuses on site selection for maximum hydropower potential for hydroelectric generation, using the emerging GIS tool SWAT as a hydrological runoff model on the Neelum, Kunhar and Dor river basins. For validation of the results, the NDWI will be computed to show water concentration in the study area, overlaid on a geospatially enhanced DEM. This study will present an analysis of basins, watersheds, stream links and flow directions, together with slope and elevation, for hydropower potential, in order to help meet the increasing demand for electricity by installing small hydropower stations. Later, this study can also benefit adjacent regions in further estimating site selection for the installation of such small power plants.
Keywords: energy, stream network, basins, SWAT, evapotranspiration
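The NDWI mentioned above for validation is a simple band ratio; the Python sketch below computes the McFeeters form, (Green - NIR) / (Green + NIR), assuming the green and near-infrared reflectance bands are already loaded as NumPy arrays. The band values shown are placeholders.

```python
# Minimal NDWI computation; band arrays are placeholders standing in for a
# satellite scene read elsewhere.
import numpy as np

green = np.array([[0.10, 0.22], [0.30, 0.05]])
nir = np.array([[0.40, 0.18], [0.10, 0.45]])

ndwi = (green - nir) / (green + nir + 1e-12)   # McFeeters NDWI; guard against /0
water_mask = ndwi > 0.0                        # positive NDWI suggests open water
print(ndwi, water_mask, sep="\n")
```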
Procedia PDF Downloads 220
1205 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions
Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos
Abstract:
The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass under high temperature in the absence of air. The amount of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depends on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as the moisture content. This activity is recognized for its inefficiency and high pollution levels, but it is poorly characterized. The activity is widely distributed and is a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: essentially rural in character, with a predominantly farming, agricultural, and forestry profile, and with a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas sampling was performed in parallel, 2 runs in the morning and 2 in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes. A cylindrical batch kiln (80 m³) with 4.5 m diameter and 5 m height was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction), after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentration of PM₁₀ and PM₂.₅ in the plume was calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. Total carbon, inorganic ions, and sugars account, on average, for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) and 27 g/kg (wood, dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to fill the gap in information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon
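The emission factors quoted above can be illustrated with a simple mass-balance calculation. The sketch below uses made-up flue-gas volumes and wood masses chosen only to land in the same order of magnitude as the reported values; it is not the measurement procedure actually used in the campaign.

```python
# Hypothetical mass-balance sketch: emission factor (g of PM per kg of dry wood)
# from a measured plume concentration, an assumed total flue-gas volume and the
# dry mass of wood carbonized. All numbers are illustrative placeholders.
def emission_factor(pm_conc_g_m3: float, flue_gas_volume_m3: float,
                    dry_wood_kg: float) -> float:
    """Return the emission factor in g per kg of dry wood."""
    pm_mass_g = pm_conc_g_m3 * flue_gas_volume_m3
    return pm_mass_g / dry_wood_kg

# Example: 0.15 g/m3 average PM10 in the plume, 1.1e5 m3 of flue gas over the
# cycle, 500 kg of dry wood -> an EF of the same order as the ~33 g/kg reported.
ef_pm10 = emission_factor(pm_conc_g_m3=0.15, flue_gas_volume_m3=1.1e5, dry_wood_kg=500)
print(f"EF_PM10 ≈ {ef_pm10:.1f} g/kg (dry wood)")
```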
Procedia PDF Downloads 1411204 Clinico-pathological Study of Xeroderma Pigmentosa: A Case Series of Eight Cases
Authors: Kakali Roy, Sahana P. Raju, Subhra Dhar, Sandipan Dhar
Abstract:
Introduction: Xeroderma pigmentosa (XP) is a rare inherited (autosomal recessive) disease resulting from an impairment of DNA repair, specifically of the recognition and repair of ultraviolet radiation (UVR)-induced DNA damage in the nucleotide excision repair pathway. This results in increased photosensitivity, UVR-induced damage to the skin and eyes, increased susceptibility to skin and ocular cancers, and progressive neurodegeneration in some patients. XP is present worldwide, with a higher incidence in areas where consanguinity is frequent. Being extremely rare, there is limited literature on XP and its associated complications. Here, the clinico-pathological experience (spectrum of clinical presentation, histopathological findings of malignant skin lesions, and progression) of managing 8 cases of XP is presented. Methodology: A retrospective study was conducted in a pediatric tertiary care hospital in eastern India during a ten-year period from 2013 to 2022. A clinical diagnosis was made based on severe sunburn or premature photo-aging and/or onset of cutaneous malignancies at an early age (first decade) against a background of consanguinity and an autosomal recessive inheritance pattern in the family. Results: The mean age of presentation was 1.2 years (range 7 months - 3 years), and three children presented during infancy. The male to female ratio was 5:3, and all were born of consanguineous marriages. They presented with dermatological manifestations (100%), followed by ophthalmic (75%) and/or neurological symptoms (25%). Patients had normal skin at birth but soon developed extreme sensitivity to UVR in the form of exaggerated sun tanning, burning, and blistering on minimal sun exposure, followed by abnormal skin pigmentation such as freckles and lentiginosis. Subsequently, over time, there was progressive xerosis, atrophy, wrinkling, and poikiloderma. Six patients had varying degrees of ocular involvement, and three of them had severe manifestations, including madarosis, tylosis, ectropion, lagophthalmos, phthisis bulbi, clouding and scarring of the cornea with complete or partial loss of vision, and ophthalmic malignancies. 50% (n=4) of cases had pre-malignant (actinic keratosis) and malignant skin and ocular lesions, including melanoma and non-melanoma skin cancer (NMSC) such as squamous cell carcinoma (SCC) and basal cell carcinoma (BCC), in early childhood. One patient had simultaneous occurrence of multiple malignancies (SCC, BCC, and melanoma). Subnormal intelligence was noticed as a neurological feature, and none had sensorineural hearing loss, microcephaly, neuroregression, or neurological deficit. All the patients were managed by a multidisciplinary team of pediatricians, dermatologists, ophthalmologists, neurologists and psychiatrists. Conclusion: To date, there is no complete cure for XP, and the disease is ultimately fatal. However, increased awareness, early diagnosis followed by persistent vigorous protection from UVR, and regular screening for early detection of malignancies, along with psychological support, can drastically improve patients' quality of life and life expectancy. Further research is required to formulate optimal management of XP, specifically the role and possibilities of gene therapy in XP.Keywords: childhood malignancies, dermato-pathological findings, eastern India, Xeroderma pigmentosa
Procedia PDF Downloads 751203 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. 
This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
Procedia PDF Downloads 321202 Utilization of Silk Waste as Fishmeal Replacement: Growth Performance of Cyprinus carpio Juveniles Fed with Bombyx mori Pupae
Authors: Goksen Capar, Levent Dogankaya
Abstract:
According to the circular economy model, resource productivity should be maximized and waste should be reduced. Since the earth's natural resources are continuously being depleted, resource recovery has gained great interest in recent years. As part of our research on the recovery and reuse of silk wastes, this paper focuses on the utilization of silkworm pupae as a fishmeal replacement, which would replace the original fishmeal raw material, namely the fish itself. This, in turn, would contribute to the sustainable management of wild fish resources. Silk fibre is secreted by the silkworm Bombyx mori in order to construct a 'room' for itself during its transformation from pupa to adult moth. When the cocoons are boiled in hot water, the silk fibre becomes loose and silk yarn is produced by combining thin silk fibres. The remaining wastes are 1) sericin protein, which is dissolved in the water, and 2) the remaining part of the cocoon, including the dead body of the B. mori pupa. In this study, an eight-week trial was carried out to determine the growth performance of common carp juveniles fed with waste silkworm pupae meal (SWPM) as a replacement for fishmeal (FM). Four isonitrogenous diets (40% CP) were prepared, replacing 0%, 33%, 50%, and 100% of the dietary FM with non-defatted silkworm pupae meal as a dietary protein source for experiments on C. carpio. Triplicate groups comprising 20 fish (0.92±0.29 g) were fed twice a day with one of the four diets. Over the period of 8 weeks, the results showed that the diet deriving 50% of its protein from SWPM gave significantly higher (p ≤ 0.05) growth rates than the other groups. Higher levels of SWPM resulted in a decrease in growth performance, and significantly lower growth (p ≤ 0.05) was observed with the diet having 100% SWPM. The study demonstrates that it is practical to replace 50% of the FM protein with SWPM, with significantly better utilization of the diet, but higher SWPM levels are not recommended for juvenile carp. Further experiments are underway to obtain more detailed results on the possible effects of this alternative diet on the growth performance of juvenile carp.Keywords: Bombyx mori, Cyprinus carpio, fish meal, silk, waste pupae
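Growth performance in trials of this kind is commonly summarized with weight gain and the specific growth rate (SGR). The sketch below shows these standard calculations on made-up weights; the final weight and trial length used in the example are illustrative, not the study's data.

```python
# Standard aquaculture growth metrics on illustrative (made-up) weights:
# weight gain (%) and specific growth rate, SGR = 100 * (ln(Wf) - ln(Wi)) / days.
import math

def weight_gain_pct(w_initial: float, w_final: float) -> float:
    return 100.0 * (w_final - w_initial) / w_initial

def specific_growth_rate(w_initial: float, w_final: float, days: int) -> float:
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

# Example: a juvenile carp growing from 0.92 g to a hypothetical 3.1 g over a 56-day trial.
print(f"Weight gain: {weight_gain_pct(0.92, 3.1):.1f} %")
print(f"SGR: {specific_growth_rate(0.92, 3.1, 56):.2f} %/day")
```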
Procedia PDF Downloads 1581201 Braille Code Matrix
Authors: Mohammed E. A. Brixi Nigassa, Nassima Labdelli, Ahmed Slami, Arnaud Pothier, Sofiane Soulimane
Abstract:
According to the World Health Organization (WHO), there are almost 285 million people with visual disability, 39 million of whom are blind. Nevertheless, there is a code for these people that makes their life easier and allows them to access information more easily; this code is the Braille code. There are several commercial devices allowing Braille reading; unfortunately, most of these devices are not ergonomic and are too expensive. Moreover, we know that 90% of blind people in the world live in low-income countries. Our contribution aims to design an original microactuator for Braille reading that is ergonomic, inexpensive and has the lowest possible energy consumption. Nowadays, piezoelectric devices give the best actuation at low actuation voltages. In this study, we focus on piezoelectric (PZT) material, which can bring together all these conditions. Here, we propose to use one matrix composed of six actuators to form the 63 basic combinations of the Braille code that contain letters, numbers, and special characters in compliance with the standards of the Braille code. In this work, we use a finite element model with the Comsol Multiphysics software for designing and modeling this type of miniature actuator in order to integrate it into a test device. To define the geometry and the design of our actuator, we used the physiological limits of human perception. Our results demonstrate that the piezoelectric actuator can produce a large out-of-plane deflection. We also show that the microactuators can exhibit non-uniform compression. This deformation depends on the thin-film thickness and the design of the membrane arms. The actuator composed of four arms gives the highest deflection and always gives a domed deformation at the center of the device, as in the case of the Braille system. The maximal deflection can be estimated at around ten microns per volt (~10 µm/V). We noticed that the deflection as a function of voltage is linear and depends not only on the voltage but also on the thickness of the film used and on the design of the anchoring arms. We were then able to simulate the behavior of the entire matrix and thus display different characters in Braille code. We used these simulation results to build our demonstrator. This demonstrator is composed of a layer of PDMS on which we put our piezoelectric material, with another layer of PDMS added to isolate the actuator. In this contribution, we compare our results to optimize the final demonstrator.Keywords: Braille code, comsol software, microactuators, piezoelectric
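To illustrate how a six-actuator matrix encodes characters, the sketch below maps a few letters to their standard six-dot Braille patterns and estimates per-dot deflection from the roughly linear ~10 µm/V response reported above. The letter-to-dot table is standard Braille; the drive voltage is an arbitrary example, not a value from the paper.

```python
# Sketch of a 6-dot Braille cell driven by six piezoelectric actuators.
# Dots are numbered 1-3 in the left column (top to bottom) and 4-6 in the right.
# Deflection per actuator is estimated from the ~10 µm/V linear response seen
# in the simulations; the 20 V drive voltage is an arbitrary example.
DEFLECTION_UM_PER_VOLT = 10.0

BRAILLE_DOTS = {          # standard Braille patterns for a few letters
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def actuator_deflections(char: str, voltage: float) -> list:
    """Return the estimated deflection (µm) of actuators 1..6 for a character."""
    raised = BRAILLE_DOTS[char.lower()]
    return [DEFLECTION_UM_PER_VOLT * voltage if dot in raised else 0.0
            for dot in range(1, 7)]

for ch in "cab":
    print(ch, actuator_deflections(ch, voltage=20.0))
```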
Procedia PDF Downloads 3531200 Information Pollution: Exploratory Analysis of Subs-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Information flow has remained an age-old strategy for building an egalitarian society. The same flow has also become a tool for throwing society into chaos and anarchy; it has been adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information can be deployed as a weapon to wreak "Mass Destruction" or to promote "Mass Development". When used as a tool for destruction, its effect on society is like that of an atomic bomb which, when released, pollutes the air and suffocates the people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world currently faces a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat the menace, the more difficult and complex it proves to curb. In a region like Africa, with fragile democracies entangled in the complexities of multiple religions and cultures, inter-tribal tensions, and ongoing unresolved issues, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, being the middleman in the distribution of information, need to build capacities and capabilities to separate the whiff of misinformation and disinformation from the grains of truthful data. From quasi-statistical observation, it appears that efforts aimed at fighting information pollution have not considered the resilience media organisations have built against this disorder. Apparently, the efforts, resources and technologies devoted to the conception, production and spread of information pollution are much more sophisticated than the approaches to suppress or even reduce its effects on society. Thus, this study interrogates the phenomenon of information pollution and the capabilities of selected media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: what are the media's actions to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored in Dynamic Capability Theory, the study aims at digging up insights to further understand the complexities of information pollution, media capabilities and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to collect data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. The study also adopts case analyses of selected media and content analyses of their strategic resources for managing misinformation and disinformation, while the qualitative approach involves in-depth interviews for a more robust analysis. The study is critical to the fight against information pollution for a number of reasons. One, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder.
Two, the study will enable the region to have a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region. Recommendations emanating from the study could be used to initiate, intensify or review existing approaches to combat the menace of information pollution in the region.Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 1601199 Inappropriate Prescribing Defined by START and STOPP Criteria and Its Association with Adverse Drug Events among Older Hospitalized Patients
Authors: Mohd Taufiq bin Azmy, Yahaya Hassan, Shubashini Gnanasan, Loganathan Fahrni
Abstract:
Inappropriate prescribing in older patients has been associated with increased resource utilization and adverse drug events (ADEs) such as hospitalization, morbidity and mortality. Globally, there is a lack of published data on ADEs induced by inappropriate prescribing. Our study is specific to an older population and is aimed at identifying risk factors for ADEs and developing a model that links ADEs to inappropriate prescribing. The design of the study was prospective: the computerized medical records of 302 hospitalized elderly patients aged 65 years and above in 3 public hospitals in Malaysia (Hospital Serdang, Hospital Selayang and Hospital Sungai Buloh) were studied over a 7-month period from September 2013 until March 2014. Potentially inappropriate medications and potential prescribing omissions were determined using the published and validated START-STOPP criteria. Patients who had at least one inappropriate medication were included in Phase II of the study, where ADEs were identified by a local expert consensus panel based on the published and validated Naranjo ADR probability scale. The panel also assessed whether the ADEs were causal or contributory to the current hospitalization. The association between inappropriate prescribing and ADEs (hospitalization, mortality and adverse drug reactions) was determined by identifying whether or not the former was causal or contributory to the latter. The rate of ADE avoidability was also determined. Our findings revealed that the prevalence of potentially inappropriate prescribing was 58.6%. ADEs were detected in 31 of 105 patients (29.5%) when the STOPP criteria were used to identify potentially inappropriate medication; all 31 ADEs (100%) were considered causal or contributory to admission. Of the 31 ADEs, 28 (90.3%) were considered avoidable or potentially avoidable. After adjusting for age, sex, comorbidity, dementia, baseline activities of daily living function, and number of medications, the likelihood of a serious avoidable ADE increased significantly when a potentially inappropriate medication was prescribed (odds ratio, 11.18; 95% confidence interval [CI], 5.014 - 24.93; p < .001). The medications identified by the STOPP criteria are significantly associated with avoidable ADEs in older people that cause or contribute to urgent hospitalization, but contributed less towards morbidity and mortality. The findings of the study underscore the importance of preventing inappropriate prescribing.Keywords: adverse drug events, appropriate prescribing, health services research
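The adjusted odds ratio reported above is typically obtained from a multivariable logistic regression. The sketch below shows the general procedure with statsmodels on synthetic data; the variables, coefficients, and sample are invented for illustration and do not reproduce the study's dataset or its exact covariate coding.

```python
# Illustrative adjusted odds ratio via logistic regression (synthetic data only).
# The outcome is a serious avoidable ADE; the exposure is a potentially
# inappropriate medication (PIM), adjusted for age and number of medications.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
age = rng.integers(65, 95, n)
n_meds = rng.integers(1, 15, n)
pim = rng.integers(0, 2, n)                      # 1 = PIM prescribed
logit = -6 + 2.0 * pim + 0.03 * age + 0.1 * n_meds
ade = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # simulated ADE outcome

X = sm.add_constant(np.column_stack([pim, age, n_meds]))
fit = sm.Logit(ade, X).fit(disp=False)
odds_ratios = np.exp(fit.params)                 # exponentiated coefficients
ci = np.exp(fit.conf_int())
print("Adjusted OR for PIM:", round(odds_ratios[1], 2),
      "95% CI:", np.round(ci[1], 2))
```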
Procedia PDF Downloads 3981198 Older Consumer’s Willingness to Trust Social Media Advertising: A Case of Australian Social Media Users
Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant
Abstract:
Social media networks have become the hotbed for advertising activities, due mainly to their increasing consumer/user base and, secondly, to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world's population (4.8 billion) now uses social media (60%), with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks grows, key business strategies for interacting with these potential customers have matured, especially social media advertising. Unlike other traditional media outlets, social media advertising is highly interactive and digital-channel specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. Using a web-based quantitative survey instrument, participants were recruited via a reputable online panel survey site. Respondents to the survey represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents). An adapted ADTRUST scale, using a 20-item 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within traditional media, such as broadcast and print media, and, more recently, the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named reliability, usefulness and affect, and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded: Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7. Further statistical analysis (independent samples t-test) examined the difference in factor scores between the two age groups (Gen Z/Millennials vs. Gen X/Boomers). The results showed the difference in mean scores across all three factors to be statistically significant (p<0.05) for these two core age groupings: (1) Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; (2) Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and (3) Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one's behavioral intent to act on the information conveyed in advertising.
These findings can be useful to marketers, advertisers, and brand managers in that the results highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users in an attempt to foster positive behavioral responses from within this large demographic group – whose engagement with social media sites continues to increase year on year.Keywords: social media advertising, trust, older consumers, internet studies
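The group comparison described above can be reproduced in outline with an independent-samples t-test on factor scores. The sketch below uses SciPy on synthetic scores (group sizes mirror the reported 60.5%/39.5% split of 258 respondents) purely to illustrate the procedure, not the survey data.

```python
# Illustrative independent-samples t-test comparing a trust factor score
# (e.g., "Willingness to Rely On") between two age groups. Scores are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
gen_z_millennials = rng.normal(loc=4.5, scale=1.0, size=156)  # younger group
gen_x_boomers     = rng.normal(loc=3.0, scale=1.0, size=102)  # older group

# Welch's t-test (does not assume equal variances between the two groups)
t_stat, p_value = stats.ttest_ind(gen_z_millennials, gen_x_boomers, equal_var=False)
print(f"mean younger = {gen_z_millennials.mean():.2f}, "
      f"mean older = {gen_x_boomers.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```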
Procedia PDF Downloads 361197 Assessment of Seeding and Weeding Field Robot Performance
Authors: Victor Bloch, Eerikki Kaila, Reetta Palva
Abstract:
Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots already exist; however, since this technology is still new, the robots' advantages and limitations, as well as methods for using them optimally, remain unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A 2-ha research sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. Plant detection was based on the exact plant location without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting within rows between the plants, and it moved at a maximum speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera with a resolution of 27 Mpixels mounted on the robot at a height of 2 m, and by a drone with a 16-Mpixel camera flying at 4 m height. To detect plants and weeds, a YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. With the help of the developed method, performance can be assessed several times during the growing season, in line with the robotic weeding frequency. When the method is used by farmers, they can know the field condition and the efficiency of the robotic treatment across the whole field. Farmers and researchers can develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. The robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.Keywords: agricultural robot, field robot, plant detection, robot performance
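A plausible outline of the image-processing step is shown below: a trained YOLO detector (here via the ultralytics package) counts weeds per image, and the count is converted to a density using the ground area covered by each frame. The weight file, class index, and footprint value are assumptions, not the study's actual configuration.

```python
# Illustrative weed-density estimation from robot/drone images with a YOLO
# detector. The weights file, the class index used for "weed", and the ground
# footprint per image are placeholders, not the setup used in the study.
from ultralytics import YOLO

WEIGHTS = "weed_detector.pt"     # hypothetical fine-tuned weights
WEED_CLASS_ID = 1                # hypothetical class index for weeds
IMAGE_FOOTPRINT_M2 = 3.0         # assumed ground area covered by one image

def weed_density(image_paths, weights=WEIGHTS):
    """Return average weeds per square metre over a set of field images."""
    model = YOLO(weights)
    total_weeds = 0
    for result in model(image_paths):                 # one Results object per image
        classes = result.boxes.cls.tolist() if result.boxes is not None else []
        total_weeds += sum(1 for c in classes if int(c) == WEED_CLASS_ID)
    return total_weeds / (len(image_paths) * IMAGE_FOOTPRINT_M2)

# Example usage (paths are placeholders):
# print(weed_density(["plot_01.jpg", "plot_02.jpg"]))
```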
Procedia PDF Downloads 841196 Designing Self-Healing Lubricant-Impregnated Surfaces for Corrosion Protection
Authors: Sami Khan, Kripa Varanasi
Abstract:
Corrosion is a widespread problem in several industries, and developing surfaces that resist corrosion has been an area of interest for the last several decades. Superhydrophobic surfaces that combine hydrophobic coatings with surface texture have been shown to improve corrosion resistance by creating air-filled voids that minimize the contact area between the corrosive liquid and the solid surface. However, these air voids can take up corrosive liquids over time, and any mechanical faults such as cracks can compromise the coating and provide pathways for corrosion. As such, there is a need for self-healing corrosion-resistant surfaces. In this work, the anti-corrosion properties of textured surfaces impregnated with a lubricant have been systematically studied. Since corrosion resistance depends on the area and physico-chemical properties of the material exposed to the corrosive medium, lubricant-impregnated surfaces (LIS) have been designed based on the surface tension, viscosity and chemistry of the lubricant and its spreading coefficient on the solid. All corrosion experiments were performed in a standard three-electrode cell using iron, which readily corrodes in a 3.5% sodium chloride solution. In order to obtain textured iron surfaces, thin films (~500 nm) of iron were sputter-coated onto silicon wafers textured using photolithography and subsequently impregnated with lubricants. Results show that the corrosion rate on LIS is greatly reduced, offering an over hundred-fold improvement in corrosion protection. Furthermore, it is found that the spreading characteristics of the lubricant are significant in ensuring corrosion protection: a spreading lubricant (e.g., Krytox 1506) that covers both the inside of the texture and the texture tops provides a two-fold improvement in corrosion protection compared to a non-spreading lubricant (e.g., silicone oil) that does not cover the texture tops. To enhance the corrosion protection of surfaces coated with a non-spreading lubricant, pyramid-shaped textures have been developed that minimize exposure to the corrosive solution, and a consequent twenty-fold increase in corrosion protection is observed. Greater lubricant viscosity scales with greater corrosion protection. Finally, an equivalent circuit model is developed for the lubricant-impregnated systems using electrochemical impedance spectroscopy. Lubricant-impregnated surfaces find attractive applications in harsh corrosive environments, especially where the ability to self-heal is advantageous.Keywords: lubricant-impregnated surfaces, self-healing surfaces, wettability, nano-engineered surfaces
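For context on how a corrosion rate is extracted from three-electrode measurements, the sketch below applies the standard Faraday's-law conversion (ASTM G102) from corrosion current density to a penetration rate for iron. The current-density values are arbitrary examples chosen to echo the roughly hundred-fold improvement reported above, not results from this work.

```python
# Standard conversion (ASTM G102) of corrosion current density to corrosion
# rate: CR [mm/yr] = 3.27e-3 * i_corr [µA/cm²] * EW / rho [g/cm³].
# The i_corr values below are arbitrary examples, not data from this study.
IRON_EQUIVALENT_WEIGHT = 27.92   # g/equivalent, Fe -> Fe2+ (55.85 / 2)
IRON_DENSITY = 7.87              # g/cm³

def corrosion_rate_mm_per_year(i_corr_uA_cm2: float,
                               equivalent_weight: float = IRON_EQUIVALENT_WEIGHT,
                               density: float = IRON_DENSITY) -> float:
    return 3.27e-3 * i_corr_uA_cm2 * equivalent_weight / density

bare = corrosion_rate_mm_per_year(10.0)    # example bare-iron current density
lis  = corrosion_rate_mm_per_year(0.1)     # ~hundred-fold lower, as on LIS
print(f"bare iron: {bare:.3f} mm/yr, LIS: {lis:.4f} mm/yr")
```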
Procedia PDF Downloads 134