Search results for: wind park model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17676

2766 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City

Authors: Christian Kapuku, Seung-Young Kho

Abstract:

An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such representation often requires developing an abstract model of the system elements, which in turn requires a substantial amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such an accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source Internet data, especially in mapping technologies, as well as advances in Geographic Information Systems (GIS), opportunities to tackle these issues have arisen. The objective of this paper is therefore to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct a digitized map of Transportation Analysis Zones from available scanned images. Centroids were then dynamically placed at the centers of activity using an activity density map. Next, the road network with its characteristics was built from OpenStreetMap data and official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and the satellite images, which can mostly be attributed to the use of open source data.
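The link-cleaning step described above can be sketched in a few lines. This is a minimal, hedged illustration assuming a simple list-of-dicts representation of OpenStreetMap-style way records; the record structure and the set of kept road classes are assumptions for illustration, not the authors' exact data.

```python
# Sketch of the link-cleaning step: starting from OpenStreetMap-style way
# records, keep only the road classes relevant to a strategic network and
# drop residential streets. Record structure and kept classes are assumed.

KEEP_CLASSES = {"motorway", "trunk", "primary", "secondary", "tertiary"}

def clean_network(ways):
    """Return only the ways whose highway class belongs to the strategic network."""
    return [w for w in ways if w.get("highway") in KEEP_CLASSES]

ways = [
    {"id": 1, "highway": "primary", "name": "Boulevard du 30 Juin"},
    {"id": 2, "highway": "residential", "name": "Side street"},
    {"id": 3, "highway": "secondary", "name": "Avenue Kasa-Vubu"},
]

network = clean_network(ways)
print([w["id"] for w in network])  # residential link 2 is removed
```

In practice the same filter would be applied to the intersected OSM and road-inventory layers inside the GIS before export to Emme3.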

Keywords: geographic information system (GIS), network construction, transportation database, open source data

Procedia PDF Downloads 138
2765 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect

Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar

Abstract:

In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking at wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that, at identical WIG effect conditions, the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case; as a result, the airfoil exhibited earlier external flow choking than the corresponding wing, which is corroborated by the exact solution. We concluded that, in lieu of conventional 2D numerical simulation, it is invariably beneficial to carry out a realistic 3D simulation of the wing in ground effect, which is analogous to, and captures the aspects of, real-time parametric flow. We inferred that, under identical flying conditions, the chance of external flow choking at WIG effect craft is higher for conventional aircraft than for an aircraft with a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We concluded that integrated geometry optimization of the fuselage and wings can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive the zone of danger a priori, due to the anticipated external flow choking at WIG effect craft, for safe flying in close proximity to the terrain and the dynamic surface of the sea.
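The connection between boundary-layer blockage and choking can be illustrated with the textbook isentropic area-Mach relation; this is a hedged stand-in, not the authors' Sanal choking condition. Blockage shrinks the effective area ratio A/A*, and as that ratio approaches 1 the flow chokes (M → 1), so a larger predicted 2D blockage implies choking at a lower freestream Mach number.

```python
# Hedged illustration: the isentropic area-Mach relation for an ideal gas
# links the effective area ratio A/A* to the local Mach number. Boundary-layer
# blockage reduces the effective area, pushing A/A* toward 1 and the flow
# toward the choking condition M = 1 at a lower freestream speed.

def area_ratio(M, gamma=1.4):
    """A/A* from the isentropic area-Mach relation."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return (1.0 / M) * t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def subsonic_mach_for(ratio, gamma=1.4):
    """Solve area_ratio(M) = ratio on the subsonic branch by bisection."""
    lo, hi = 1e-6, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, gamma) > ratio:
            lo = mid   # M still too small: area ratio above target
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(area_ratio(0.5), 4))          # 1.3398 at M = 0.5
print(round(subsonic_mach_for(1.05), 3))  # a more blocked duct chokes nearer M = 1
```

The higher 2D blockage in the paper corresponds, in this toy picture, to a smaller effective A/A* and hence an earlier onset of choking than in the 3D wing case.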

Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect

Procedia PDF Downloads 236
2764 The Effect of Corporate Governance on Financial Stability and Solvency Margin for Insurance Companies in Jordan

Authors: Ghadeer A. Al-Jabaree, Husam Aldeen Al-Khadash, M. Nassar

Abstract:

This study aimed at investigating the effect of a well-designed corporate governance system on the financial stability of insurance companies listed on the Amman Stock Exchange (ASE). Further, this study provides a comprehensive model for evaluating and analyzing insurance companies' financial position and prospects, and for comparing the degree of application of corporate governance provisions among Jordanian insurance companies. To achieve the goals of the study, the whole population of 27 listed insurance companies was examined through the variables of board of directors, audit committee, internal and external auditors, board and management ownership, and blockholders' identities. Statistical methods were applied in SPSS: descriptive techniques such as means and standard deviations were used to describe the variables, while the F test and ANOVA were used to test the hypotheses of the study. The study revealed a significant effect of the corporate governance variables (except for local companies not listed on the ASE) on financial stability, within control variables, especially the debt ratio (leverage). It also showed that concentration in motor third-party insurance does not have a significant effect on insurance companies' financial stability during the study period. Moreover, the study concludes that the global financial crisis affected the investment side of insurance companies, with an insignificant effect on the technical side. Finally, some recommendations are presented, such as enhancing the laws and regulations that support the appropriate application of corporate governance, working to activate transparency in financial statement disclosures, and focusing on supporting the companies' technical provisions rather than only the profit side.
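The one-way ANOVA F test used in the study can be sketched on toy data; the group scores below are illustrative numbers, not the study's data, and the implementation is a minimal textbook version of the F statistic.

```python
# Minimal one-way ANOVA sketch: compare a governance score across three
# groups of companies (illustrative numbers, not the study's data).
# F = between-group mean square / within-group mean square.

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    all_obs = [x for g in groups for x in g]
    n, k = len(all_obs), len(groups)
    grand = sum(all_obs) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

groups = [[7.1, 6.8, 7.4], [5.9, 6.1, 6.0], [8.2, 8.0, 8.5]]
print(round(f_statistic(groups), 2))  # large F: group means differ clearly
```

A large F relative to the critical value at the chosen significance level would lead to rejecting the hypothesis that the group means are equal.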

Keywords: corporate governance, financial stability and solvency margin, insurance companies, Jordan

Procedia PDF Downloads 462
2763 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers

Authors: Onur Kaya, Halit Bayer

Abstract:

Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreases in grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more outdating. Effective management of these perishable products is an important issue, since billions of dollars' worth of food expires and is wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically reviewed and continuously reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time. However, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effects of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits.
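The flavor of the dynamic program can be conveyed with a toy deterministic version; the structure and all numbers below are illustrative assumptions, not the authors' stochastic model. Each period the seller picks a price, expected demand falls with price and with product age, unsold units age by one period, and leftovers at the horizon are wasted.

```python
# Toy finite-horizon pricing DP in the spirit of the model above
# (deterministic demand proxy; structure and numbers are assumptions).

from functools import lru_cache

PRICES = [4, 6, 8]
HORIZON = 3          # product is obsolete after 3 periods

def expected_demand(price, age):
    """Demand shrinks with price and with lost freshness."""
    return max(0, 10 - price - 2 * age)

@lru_cache(maxsize=None)
def value(inventory, age):
    """Maximum revenue-to-go from `inventory` units of a product of given age."""
    if inventory == 0 or age >= HORIZON:
        return 0.0
    best = 0.0
    for p in PRICES:
        sold = min(inventory, expected_demand(p, age))
        best = max(best, p * sold + value(inventory - sold, age + 1))
    return best

print(value(10, 0))  # 48.0 with these toy numbers
```

Backward induction over (inventory, age) is exactly the computation pattern a periodically reviewed version of the model requires, with expectations over random demand replacing the deterministic proxy.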

Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing

Procedia PDF Downloads 223
2762 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements

Authors: Andrey Kupriyanov

Abstract:

In recent years, much attention has been paid to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS) around the world. This is due to the increase in solar activity, the expansion of the scope of GNSS applications, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the method of transionospheric sounding, using measurements of signals from global navigation satellite systems, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined for the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations, which allow performing primary processing of GNSS measurement data, calculating TEC and fixing scintillation moments, modeling the ionosphere using the obtained data, storing data, and performing ionospheric correction of measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many applications. Moreover, the use of multi-frequency, multi-system GNSS equipment and special software makes it possible to achieve the specified accuracy and volume of measurements.
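The dual-frequency TEC estimate that underlies transionospheric sounding can be sketched with the textbook geometry-free combination; this is a hedged illustration of the standard formula, not the authors' refined IRI-based processing.

```python
# Standard dual-frequency slant TEC estimate: TEC is proportional to the
# pseudorange difference between the two carrier frequencies,
# STEC = f1^2 * f2^2 / (40.3 * (f1^2 - f2^2)) * (P2 - P1).

F1 = 1575.42e6   # GPS L1, Hz
F2 = 1227.60e6   # GPS L2, Hz

def slant_tec(p1, p2, f1=F1, f2=F2):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from pseudoranges p1, p2 in metres."""
    k = (f1**2 * f2**2) / (40.3 * (f1**2 - f2**2))
    return k * (p2 - p1) / 1e16

# A 3 m L1/L2 pseudorange difference corresponds to roughly 28.6 TECU.
print(round(slant_tec(20000000.0, 20000003.0), 1))
```

Receivers such as the GPStation6 perform this kind of combination internally across all tracked frequencies before the TEC values are passed to the ionospheric model.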

Keywords: global navigation satellite systems (GNSS), GPStation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)

Procedia PDF Downloads 153
2761 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers' needs in this strong market, especially the needs of those customers who are looking to switch service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish it. Churn prediction has become a very important machine learning classification topic in the telecommunications industry, and understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compared the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results: gradient boosting with the feature selection technique performed best, achieving a 99% F1 score and 99% AUC, and all other experiments achieved good results as well.
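The winning configuration can be sketched as a scikit-learn pipeline; this is a minimal, hedged illustration on synthetic data, not the Orange telecom dataset, and the specific preprocessing choices (standard scaling, univariate selection) are assumptions standing in for the study's exact steps.

```python
# Sketch of the pipeline family the study compares: normalization,
# feature selection, then gradient boosting, scored with F1 and ROC-AUC.
# Synthetic data stands in for the Orange telecom dataset.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),               # normalization
    ("select", SelectKBest(f_classif, k=10)),  # feature selection
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)

f1 = f1_score(y_te, pipe.predict(X_te))
auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
print(round(f1, 2), round(auc, 2))
```

Swapping the final estimator for logistic regression, a random forest, or a decision tree reproduces the four-way comparison described in the abstract.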

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 105
2760 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific

Authors: Giuseppe Timperio, Robert De Souza

Abstract:

The combination of factors such as climate change, rampant urbanization of risk-exposed areas, and political and social instability is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as unpredictability of demand in space, time, and geography, a spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructure, and large volumes of unsolicited low-priority items, a proactive approach to the design of disaster response operations is needed to achieve high agility in the mobilization of emergency supplies in the immediate aftermath of an event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case concerning Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model built upon a rich set of quantitative data. To this end, a network optimization approach was implemented, with several what-if scenarios accurately developed and tested. The findings of this study can support decision makers facing challenges related to the resilience of disaster relief chains, particularly regarding the optimal configuration of supply chain facilities and the optimal flows across nodes, while considering the network structure from an end-to-end, in-country distribution perspective.
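At its core, finding optimal flows across nodes is a transportation problem; the toy below, with two candidate stockpile locations serving two affected regions, is an illustrative miniature (all numbers assumed), solved by brute-force enumeration rather than the solver-based approach the study would use at scale.

```python
# Toy version of the flow-optimization question: two stockpile depots serve
# two affected regions; enumerate feasible integer allocations to find the
# cheapest flow. All supplies, demands, and unit costs are illustrative.

supply = {"Depot_A": 60, "Depot_B": 40}      # prepositioned stock (units)
demand = {"Region_1": 50, "Region_2": 50}    # post-disaster requests
cost = {("Depot_A", "Region_1"): 2, ("Depot_A", "Region_2"): 5,
        ("Depot_B", "Region_1"): 4, ("Depot_B", "Region_2"): 1}

def cheapest_flow():
    best = None
    # x = units sent from Depot_A to Region_1; with balanced supply/demand,
    # the remaining three flows follow from conservation.
    for x in range(0, supply["Depot_A"] + 1):
        a2 = supply["Depot_A"] - x            # A -> Region_2
        b1 = demand["Region_1"] - x           # B -> Region_1
        b2 = demand["Region_2"] - a2          # B -> Region_2
        if min(b1, b2) < 0 or b1 + b2 > supply["Depot_B"]:
            continue
        total = (cost[("Depot_A", "Region_1")] * x
                 + cost[("Depot_A", "Region_2")] * a2
                 + cost[("Depot_B", "Region_1")] * b1
                 + cost[("Depot_B", "Region_2")] * b2)
        if best is None or total < best[0]:
            best = (total, {"A1": x, "A2": a2, "B1": b1, "B2": b2})
    return best

print(cheapest_flow())  # each depot serves its cheap region: cost 190
```

What-if scenarios correspond to re-solving this problem under perturbed supplies, demands, or link costs and comparing the resulting configurations.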

Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience

Procedia PDF Downloads 153
2759 A Context Aware Mobile Learning System with a Cognitive Recommendation Engine

Authors: Jalal Maqbool, Gyu Myoung Lee

Abstract:

Using smart devices for context aware mobile learning is becoming increasingly popular. This has led to mobile learning technology becoming an indispensable part of today's learning environments and platforms. However, some fundamental issues remain: mobile learning still lacks the ability to truly understand human reactions and user behaviour. This is because current mobile learning systems are passive and unaware of learners' changing contextual situations; they rely on static information about mobile learners. In addition, current mobile learning platforms lack the capability to incorporate dynamic contextual situations into learners' preferences. This work therefore aims to address these issues by designing a context aware framework which is able to sense a learner's contextual situation, handle data dynamically, and use contextual information to suggest bespoke learning content according to the learner's preferences. This is to be underpinned by a robust recommendation system capable of performing these functions, thus providing learners with a truly context-aware mobile learning experience, delivering learning content via smart devices and adapting to learning preferences as and when required. In addition, the design of the recommendation engine's algorithm has to be based on learner and application needs, personal characteristics and circumstances, and an understanding of human cognitive processes, which would enable the technology to interact effectively and deliver mobile learning content that is relevant to the learner's contextual situation. The concept of the proposed project is to provide a new method of smart learning, based on a capable recommendation engine, for an intuitive mobile learning model driven by learner actions.
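A context-aware scoring rule of the kind the framework calls for can be sketched very simply; the item structure, tags, and weights below are illustrative assumptions, not the proposed engine's actual design.

```python
# Minimal context-aware recommendation sketch: each learning item is scored
# by how well its tags match the learner's current context and stored
# preferences, and the top item is recommended. Tags and weights are assumed.

def score(item, context, preferences):
    """Weighted overlap between an item's tags and the learner's situation."""
    context_match = len(set(item["contexts"]) & set(context))
    pref_match = len(set(item["topics"]) & set(preferences))
    return 2 * context_match + pref_match   # context weighted above preference

def recommend(items, context, preferences):
    return max(items, key=lambda it: score(it, context, preferences))

items = [
    {"id": "video_calculus", "contexts": ["wifi", "home"], "topics": ["math"]},
    {"id": "podcast_history", "contexts": ["commuting"], "topics": ["history"]},
    {"id": "quiz_algebra", "contexts": ["commuting", "offline"], "topics": ["math"]},
]

best = recommend(items, context=["commuting", "offline"], preferences=["math"])
print(best["id"])  # the offline quiz fits a commuting learner of math
```

A cognitive recommendation engine would replace the fixed weights with a model learned from the learner's actions and changing situations, but the sense-score-suggest loop is the same.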

Keywords: aware, context, learning, mobile

Procedia PDF Downloads 208
2758 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations

Authors: Karthikeyan Kalirajan, Ashok Joshi

Abstract:

An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that will result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent, and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of the two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the vector guidance law is presented in this paper.

Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection

Procedia PDF Downloads 396
2757 Polymeric Microspheres for Bone Tissue Engineering

Authors: Yamina Boukari, Nashiru Billa, Andrew Morris, Stephen Doughty, Kevin Shakesheff

Abstract:

Poly(lactic-co-glycolic) acid (PLGA) is a synthetic polymer that can be used in bone tissue engineering with the aim of creating a scaffold to support the growth of cells. The formation of microspheres from this polymer is an attractive strategy that allows for the development of an injectable system, hence avoiding invasive surgical procedures. The aim of this study was to develop a microsphere delivery system for use as an injectable scaffold in bone tissue engineering and to evaluate the effect of various formulation parameters on its properties. Porous, lysozyme-containing PLGA microspheres were prepared using the double emulsion solvent evaporation method from various molecular weights (MW) of PLGA. Scaffolds were formed by sintering so as to contain 1-3 mg of lysozyme per gram of scaffold. The mechanical and physical properties of the scaffolds were assessed along with the release of lysozyme, which was used as a model protein. The MW of PLGA was found to influence microsphere size during fabrication, with increased MW leading to an increased microsphere diameter. An inversely proportional relationship was observed between PLGA MW and the mechanical strength of the formed scaffolds, across loadings, for low, intermediate and high MW respectively. Lysozyme release from both microspheres and formed scaffolds showed an initial burst release phase, with the microspheres and scaffolds fabricated from high-MW PLGA showing the lowest protein release. Following the initial burst phase, the profiles for each MW followed a similar slow release over 30 days. Overall, the results of this study demonstrate that lysozyme can be successfully incorporated into porous PLGA scaffolds and released over 30 days in vitro, and that varying the MW of the PLGA can be used as a method of altering the physical properties of the resulting scaffolds.

Keywords: bone, microspheres, PLGA, tissue engineering

Procedia PDF Downloads 407
2756 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms

Authors: Bliss Singhal

Abstract:

Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage in which cancer has spread to other parts of the body, and it is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of accounting for human error and other inefficiencies. ML is a good candidate for improving the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, thereby requiring fewer resources and less time. So far, the deep learning methodology of AI has been used in research to detect cancer. This study takes a novel approach by determining the potential of combining preprocessing algorithms with classification algorithms in detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
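The preprocessing-plus-classification idea can be sketched with PCA and k-nearest neighbors on a stand-in dataset; this uses scikit-learn's breast cancer data rather than the study's pathology scans, and omits the genetic-algorithm step for brevity, so it illustrates the pipeline shape, not the reported 71.14% result.

```python
# Hedged sketch of the winning pipeline shape: dimensionality reduction
# (PCA) followed by k-nearest neighbors. Stand-in dataset; the genetic
# algorithm stage of the study is omitted.

from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),           # dimensionality reduction
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
pipe.fit(X_tr, y_tr)
print(round(pipe.score(X_te, y_te), 2))
```

In the study, a genetic algorithm would sit between the PCA stage and the classifier, searching for the feature subset that maximizes validation accuracy.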

Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression

Procedia PDF Downloads 50
2755 More Precise: Patient-Reported Outcomes after Stroke

Authors: Amber Elyse Corrigan, Alexander Smith, Anna Pennington, Ben Carter, Jonathan Hewitt

Abstract:

Background and Purpose: Morbidity secondary to stroke is highly heterogeneous, but it is important to both patients and clinicians in post-stroke management and adjustment to life after stroke. The consideration of post-stroke morbidity, clinically and from the patient perspective, has been poorly measured. Patient-reported outcome measures (PROs) in morbidity assessment help close this knowledge gap. The primary aim of this study was to consider the association between PRO outcomes and stroke predictors. Methods: A multicenter prospective cohort study assessed 549 stroke patients at 19 hospital sites across England and Wales during 2019. Following a stroke event, demographic, clinical, and PRO measures were collected. The prevalence of morbidity within the PRO measures was calculated with associated 95% confidence intervals. Predictors of domain outcome were calculated using a multilevel generalized linear model; associated P-values and 95% confidence intervals are reported. Results: Data were collected from 549 participants, 317 men (57.7%) and 232 women (42.3%), with ages ranging from 25 to 97 (mean 72.7). PRO morbidity was high post-stroke: 93.2% of the cohort reported post-stroke PRO morbidity. Previous stroke, diabetes, and gender were associated with worse patient-reported outcomes across both the physical and cognitive domains. Conclusions: This large-scale multicenter cohort study illustrates the high proportion of morbidity captured in PRO measures. Further, we demonstrate key predictors of adverse outcomes (diabetes, previous stroke, and gender), congruent with clinical predictors. The PRO was demonstrated to be an informative and useful tool for considering patient-reported outcomes after stroke, with wider implications for the consideration of PROs in clinical management. Future longitudinal follow-up with PROs is needed to consider associations with long-term morbidity.

Keywords: morbidity, patient-reported outcome, PRO, stroke

Procedia PDF Downloads 103
2754 Application of the Urban Forest Credit Standard as a Tool for Compensating CO2 Emissions in the Metalworking Industry: A Case Study in Brazil

Authors: Marie Madeleine Sarzi Inacio, Ligiane Carolina Leite Dauzacker, Rodrigo Henriques Lopes Da Silva

Abstract:

Climate change resulting from human activity has increased interest in more sustainable production practices to reduce and offset pollutant emissions. Brazil, with its vast areas capable of carbon absorption, holds a significant advantage in this context. However, to optimize the country's sustainable potential, it is important to establish a robust carbon market with clear rules for the eligibility and validation of projects aimed at reducing and offsetting greenhouse gas (GHG) emissions. In this study, our objective is to conduct a feasibility analysis, through a case study, of the implementation of an urban forest credit standard in Brazil, using the Urban Forest Credits (UFC) model implemented in the United States as a reference. The city of Ribeirão Preto, located in Brazil, was selected to assess the availability of green areas. With the CO2 emissions value from the metalworking industry, it was possible to analyze the information in the case study, considering the activity. QGIS software, which can connect to various types of geospatial databases, was used to map potential urban forest areas. Although the chosen municipality has little vegetative coverage, the mapping identified at least eight areas within the delimited urban perimeter that fit the standard's definitions. The outlook was positive: the implementation of projects like Urban Forest Credits (UFC), adapted to the Brazilian reality, has great potential to benefit the country in the carbon market and to contribute to achieving its greenhouse gas (GHG) emission reduction goals.
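The offset arithmetic behind a UFC-style project can be sketched as a back-of-envelope calculation; all numbers below are illustrative assumptions, not the study's data.

```python
# Back-of-envelope offset sketch: given a plant's annual CO2 emissions and a
# per-hectare sequestration rate for preserved urban forest, estimate the
# area needed to neutralize the emissions. All figures are assumed.

EMISSIONS_T_CO2_PER_YEAR = 1200.0     # assumed metalworking plant emissions
SEQUESTRATION_T_CO2_HA_YEAR = 5.0     # assumed urban-forest uptake per hectare

def hectares_needed(emissions, rate):
    return emissions / rate

area = hectares_needed(EMISSIONS_T_CO2_PER_YEAR, SEQUESTRATION_T_CO2_HA_YEAR)
print(area)  # 240.0 hectares under these assumptions
```

In a real project, the mapped eligible areas from the GIS analysis would be compared against this required area, and the sequestration rate would come from the standard's accepted quantification methods rather than a single assumed constant.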

Keywords: carbon neutrality, metalworking industry, carbon credits, urban forestry credits

Procedia PDF Downloads 49
2753 Use of Triclosan-Coated Sutures Led to Cost Saving in Public and Private Setting in India across Five Surgical Categories: An Economical Model Assessment

Authors: Anish Desai, Reshmi Pillai, Nilesh Mahajan, Hitesh Chopra, Vishal Mahajan, Ajay Grover, Ashish Kohli

Abstract:

Surgical site infection (SSI) is a hospital-acquired infection of growing concern. This study presents the efficacy and cost-effectiveness of triclosan-coated sutures in reducing the burden of SSI in India. Methodology: A systematic literature search was conducted for the economic burden of SSI (1998-2018) and the efficacy of triclosan-coated sutures (TCS) vs. non-coated sutures (NCS) (2000-2018). PubMed Medline and EMBASE indexed articles were searched using MeSH terms or Emtree. Decision tree analysis was used to calculate the cost difference between TCS and NCS at private and public hospitals, respectively, for 7 surgical procedures. Results: SSI rates ranged, from low to high, for Caesarean section (C-section), laparoscopic hysterectomy (L-hysterectomy), open hernia (O-hernia), laparoscopic cholecystectomy (L-cholecystectomy), coronary artery bypass graft (CABG), total knee replacement (TKR), and mastectomy: 3.77-24.2%, 2.28-11.7%, 1.75-60%, 1.71-25.58%, 1.6-18.86%, 1.74-12.5%, and 5.56-25%, respectively. The incremental cost (%) of TCS ranged from 0.1% to 0.01% at private hospitals and from 0.9% to 0.09% at public hospitals across all surgical procedures. Cost savings at median efficacy and SSI risk were 6.52%, 5.07%, 11.39%, 9.63%, 3.62%, 2.71%, and 9.41% for C-section, L-hysterectomy, O-hernia, L-cholecystectomy, CABG, TKR, and mastectomy at private hospitals, and 8.79%, 4.99%, 12.67%, 10.58%, 3.32%, 2.35%, and 11.83% at public hospitals, respectively. One-way sensitivity analysis showed that the efficacy of TCS and the SSI incidence in a particular surgical procedure were important determinants of cost savings. Conclusion: TCS led to cost savings across all 7 surgeries in both private and public hospitals in India.
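The decision-tree comparison reduces to an expected-cost calculation per suture arm; the costs and probabilities below are illustrative assumptions, not the study's inputs.

```python
# Minimal expected-cost sketch of the decision tree: expected cost per
# patient = suture cost + P(SSI) * cost of treating an SSI. The coated
# suture (TCS) lowers P(SSI) at a small premium. All figures are assumed.

SSI_TREATMENT_COST = 2000.0                 # assumed cost of managing one SSI
NCS_COST, NCS_SSI_RISK = 10.0, 0.10         # non-coated suture
TCS_COST, TCS_SSI_RISK = 11.0, 0.07         # coated suture, lower SSI risk

def expected_cost(suture_cost, ssi_risk, ssi_cost=SSI_TREATMENT_COST):
    return suture_cost + ssi_risk * ssi_cost

ncs = expected_cost(NCS_COST, NCS_SSI_RISK)
tcs = expected_cost(TCS_COST, TCS_SSI_RISK)
saving_pct = 100.0 * (ncs - tcs) / ncs
print(round(ncs, 2), round(tcs, 2), round(saving_pct, 1))
```

Varying the efficacy (the risk reduction) and the baseline SSI incidence in this calculation is exactly the one-way sensitivity analysis the study reports as the key driver of savings.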

Keywords: cost Savings, non-coated sutures, surgical site infection, triclosan-coated sutures

Procedia PDF Downloads 368
2752 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline

Authors: Kenan Morani, Esra Kaya Ayana

Abstract:

This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which was then used to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score than other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make the final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
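The step from slice-level predictions to a patient-level diagnosis can be sketched with a simple voting rule; the rule and threshold below are illustrative assumptions, not the authors' exact decision function.

```python
# Sketch of the slice-to-patient aggregation implied by the pipeline: each
# CT slice gets a COVID/non-COVID prediction, and the patient-level
# diagnosis is taken by majority vote. Voting rule and threshold assumed.

def patient_diagnosis(slice_predictions, threshold=0.5):
    """Return 'covid' if at least `threshold` of the slices are positive."""
    positive = sum(slice_predictions)
    return "covid" if positive >= threshold * len(slice_predictions) else "non-covid"

print(patient_diagnosis([1, 1, 0, 1, 0, 1]))  # 4 of 6 slices positive
print(patient_diagnosis([0, 0, 1, 0, 0, 0]))
```

The optional slice-removal step in the pipeline would filter out non-lung or low-information slices before this vote, which is why it can change the patient-level accuracy even when the slice classifier is unchanged.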

Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation

Procedia PDF Downloads 99
2751 African Women in Power: An Analysis of the Representation of Nigerian Business Women in Television

Authors: Ifeanyichukwu Valerie Oguafor

Abstract:

Women in business have generally been categorized and placed low in the chain of the business industry, sometimes highly regarded and at other times merely tolerated. The social construction of womanhood does not in every sense support a woman going into business, let alone succeeding in it, because it is believed to be a man's world. In a typical patriarchal setting, a woman is expected to know nothing more than domestic roles. For some women this is not the case, as they have been able to break these barriers and excel in business amidst these social settings and stereotypes. This study examines the media representation of Nigerian business women, using content analysis of TV interviews as media texts and framing analysis as a qualitative methodological approach. The study analyses the media frames of two Nigerian business women: Folorunsho Alakija, a business woman in the petroleum industry with a current net worth of 1.1 billion U.S. dollars, who emerged as the richest black woman in the world in 2014, and Mosunmola Abudu, a media magnate in Nigeria who launched Africa's first global black entertainment and lifestyle network in 2013. This study used seven predefined frames to analyse the representation of Nigerian business women in the media: the business woman, the myth of business women, the non-traditional woman, women in leading roles, the family woman, the religious woman, and the philanthropist woman. The analysis of these frames in TV interviews with the two women reveals that the media perpetually reproduce existing gender stereotypes and do not challenge patriarchy. Women face challenges in trying to succeed in business while trying to keep their homes stable. This study concludes that the media represent and reproduce gender stereotypes in spite of the expectation that they empower women. The media render these women's success insignificant rather than presenting them as role models for women in society.

Keywords: representation of business women in the media, business women in Nigeria, framing in the media, patriarchy, women's subordination

Procedia PDF Downloads 136
2750 The Three-dimensional Response of Mussel Plaque Anchoring to Wet Substrates under Directional Tensions

Authors: Yingwei Hou, Tao Liu, Yong Pang

Abstract:

The paper explored the three-dimensional deformation of mussel plaques anchored to wet polydimethylsiloxane (PDMS) substrates under tensile stress applied at different angles. Mussel plaques, exhibiting natural adhesive structures, have attracted significant attention for their remarkable adhesion properties. Understanding their behavior under mechanical stress, particularly in a three-dimensional context, holds immense relevance for biomimetic material design and bio-inspired adhesive development. This study employed a novel approach to investigate the 3D deformation of PDMS substrates anchored by mussel plaques subjected to controlled tension. Utilizing our customized stereo digital image correlation technique and mechanical analyses, we found that the distributions of displacement and resultant force on the substrate became concentrated under the plaque. Adhesion and suction mechanisms were analyzed for the mussel plaque-substrate system under tension until detachment. The experimental findings were compared with a model developed using finite element analysis, and the results provide new insights into mussels’ attachment mechanism. This research not only contributes to the fundamental understanding of biological adhesion but also holds promising implications for the design of innovative adhesive materials with applications in fields such as medical adhesives, underwater technologies, and industrial bonding. The comprehensive exploration of mussel plaque behavior in three dimensions is important for advancements in biomimicry and materials science, fostering the development of adhesives that emulate nature's efficiency.

Keywords: adhesion mechanism, mytilus edulis, mussel plaque, stereo digital image correlation

Procedia PDF Downloads 26
2749 Nurse Practitioner Led Pediatric Primary Care Clinic in a Tertiary Care Setting: Improving Access and Health Outcomes

Authors: Minna K. Miller, Chantel E. Canessa, Suzanna V. McRae, Susan Shumay, Alissa Collingridge

Abstract:

Primary care provides the first point of contact and access to health care services. For the pediatric population, the goal is to help healthy children stay healthy and to help those that are sick get better. Primary care facilitates regular well baby/child visits; health promotion and disease prevention; investigation, diagnosis and management of acute and chronic illnesses; health education; both consultation and collaboration with, and referral to other health care professionals. There is a protective association between regular well-child visit care and preventable hospitalization. Further, low adherence to well-child care and poor continuity of care are independently associated with increased risk of hospitalization. With a declining number of family physicians caring for children, and only a portion of pediatricians providing primary care services, it is becoming increasingly difficult for children and their families to access primary care. Nurse practitioners are in a unique position to improve access to primary care and improve health outcomes for children. Limited literature is available on the nurse practitioner role in primary care pediatrics. The purpose of this paper is to describe the development, implementation and evaluation of a Nurse Practitioner-led pediatric primary care clinic in a tertiary care setting. Utilizing the participatory, evidence-based, patient-focused process for advanced practice nursing (PEPPA framework), this paper highlights the results of the initial needs assessment/gap analysis, the new service delivery model, populations served, and outcome measures.

Keywords: access, health outcomes, nurse practitioner, pediatric primary care, PEPPA framework

Procedia PDF Downloads 463
2748 The Impact of Pediatric Cares, Infections and Vaccines on Community and People’s Lives

Authors: Nashed Atef Nashed Farag

Abstract:

Introduction: Reporting adverse events following vaccination remains a challenge. WHO has mandated pharmacovigilance centers around the world to submit Adverse Events Following Immunization (AEFI) reports from different countries to a large electronic database of adverse drug event data called VigiBase. Despite sufficient information about AEFIs in VigiBase, it is not available to the general public. However, the WHO has an alternative website called VigiAccess, an open-access website that serves as an archive for reported adverse reactions and AEFIs. The aim of the study was to establish a reporting model for a number of commonly used vaccines in the VigiAccess system. Methods: On February 5, 2018, VigiAccess was comprehensively searched for AEFI reports on the measles vaccine, oral polio vaccine (OPV), yellow fever vaccine, pneumococcal vaccine, rotavirus vaccine, meningococcal vaccine, tetanus vaccine, and tuberculosis vaccine (BCG). These are reports from all pharmacovigilance centers around the world since they joined the WHO Drug Monitoring Program. Results: After an extensive search, VigiAccess yielded 9,062 AEFIs for the measles vaccine, 185,829 AEFIs for the OPV vaccine, 24,577 AEFIs for the yellow fever vaccine, 317,208 AEFIs for the pneumococcal vaccine, 73,513 AEFIs for the rotavirus vaccine, 145,447 AEFIs for the meningococcal vaccine, 22,781 AEFIs for the tetanus vaccine, and 35,556 AEFIs for the BCG vaccine. Conclusion: The study found that among the eight vaccines examined, pneumococcal vaccines were associated with the highest number of AEFIs, while measles vaccines were associated with the fewest AEFIs.

Keywords: adverse events following immunization, adverse reactions, VigiAccess, adverse event reporting

Procedia PDF Downloads 25
2747 Geostatistical Simulation of Carcinogenic Industrial Effluent on the Irrigated Soil and Groundwater, District Sheikhupura, Pakistan

Authors: Asma Shaheen, Javed Iqbal

Abstract:

Water resources are depleting due to the intrusion of industrial pollution. Clusters of industries, including leather tanning, textiles, batteries, and chemicals, are causing contamination. These industries use bulk quantities of water and discharge it with toxic effluents. The penetration of heavy metals through irrigation with industrial effluent has a toxic effect on soil and groundwater. There were strong positive significant correlations between all the heavy metals across the three media of industrial effluent, soil, and groundwater (P < 0.001). The metal-to-metal associations were supported by dendrograms using cluster analysis. The geospatial variability was assessed using geographically weighted regression (GWR) and a pollution model to simulate the distribution of carcinogenic elements in soil and groundwater. Principal component analysis identified the metal sources; factor 1 explained 48.8% of the variation, with significant loadings for sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), chromium (Cr), nickel (Ni), lead (Pb), and zinc (Zn), reflecting the tannery effluent-based process. In soil and groundwater, the metals had significant loadings on factor 1, which represented more than half of the total variation (51.3% and 53.6%, respectively), showing that pollutants in soil and water were driven by industrial effluent. The cumulative eigenvalues for the three media were also found to be greater than 1, representing significant clustering of related heavy metals. The results show that industrial processes are seeping toxic trace metals into the soil and groundwater. These poisonous heavy-metal pollutants have turned fresh groundwater resources into unusable water. The declining availability of fresh water for irrigation and domestic use is alarming.
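
A principal component analysis of this kind, extracting factor-1 loadings and explained variance from a matrix of metal concentrations, can be sketched in a few lines. The data below are synthetic placeholders driven by one latent "effluent" factor, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 50 samples x 8 metals (Na, Ca, Mg, Fe, Cr, Ni, Pb, Zn); a single latent
# effluent factor drives correlated concentrations, plus measurement noise.
effluent = rng.normal(size=(50, 1))
weights = rng.uniform(0.5, 1.0, size=(1, 8))
X = effluent @ weights + 0.3 * rng.normal(size=(50, 8))

pca = PCA()
pca.fit(X)

# Loadings of each metal on the first component, and its share of variance.
loadings = pca.components_[0]
var_explained = pca.explained_variance_ratio_[0]
print(f"PC1 explains {var_explained:.1%} of the total variance")
```

A dominant first component with uniformly high loadings across the metals is the pattern the abstract interprets as a common effluent source.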

Keywords: groundwater, geostatistical, heavy metals, industrial effluent

Procedia PDF Downloads 206
2746 Development of a Geomechanical Risk Assessment Model for Underground Openings

Authors: Ali Mortazavi

Abstract:

The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed in the underground mining industry. The geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated in-situ nature of rock masses and the complex boundary conditions and operational issues associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is a constant threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine design and essentially governs the safety of an underground mine. Given the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ conditions of the rock mass. The focus of this research is on developing a methodology that enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, and governing boundary conditions such as in-situ stress and groundwater).

Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering

Procedia PDF Downloads 114
2745 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach

Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan

Abstract:

Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers, and policy makers are making frantic efforts to bridge rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used in estimating productivity performance and its drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performance of rice farmers and its determining factors in the GSZ, FSTZ, and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amounts, less lodging of rice, and well-coordinated, synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield, since they had the lowest technology gap ratio (TGR). It is recommended that government, through the Ministry of Food and Agriculture, development partners, and individual private companies, promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.

Keywords: efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach

Procedia PDF Downloads 239
2744 The Optimization of the Parameters for Eco-Friendly Leaching of Precious Metals from Waste Catalyst

Authors: Silindile Gumede, Amir Hossein Mohammadi, Mbuyu Germain Ntunka

Abstract:

Goal 12 of the 17 Sustainable Development Goals (SDGs) encourages sustainable consumption and production patterns. This necessitates achieving the environmentally sound management of chemicals and all wastes throughout their life cycle and the proper disposal of pollutants and toxic waste. Fluid catalytic cracking (FCC) catalysts are widely used in refineries to convert heavy feedstocks into lighter ones. During the refining process, the catalysts are deactivated and discarded as hazardous toxic solid waste. Spent catalysts (SCs) contain high-value metals, and the recovery of metals from SCs is a strategic plan for supplying part of the demand for these substances and minimizing environmental impacts. Leaching, followed by solvent extraction, has been found to be the most efficient method to recover valuable metals with high purity from spent catalysts. However, the use of inorganic acids during the leaching process causes a secondary environmental issue. Therefore, it is necessary to explore alternative, efficient leaching agents that are economical and environmentally friendly. In this study, the waste catalyst was collected from a domestic refinery and was characterised using XRD, ICP, XRF, and SEM. Response surface methodology (RSM) with a Box-Behnken design was used to model and optimize the influence of the parameters affecting the acidic leaching process. The parameters selected in this investigation were acid concentration, temperature, and leaching time. From the characterisation results, it was found that the spent catalyst contains high concentrations of vanadium (V) and nickel (Ni); hence, this study focuses on the leaching of Ni and V using a biodegradable acid to eliminate the formation of secondary pollution.
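
A three-factor Box-Behnken design of the kind described (acid concentration, temperature, leaching time) can be generated in coded units. This is a standard construction sketch; the coded levels (-1, 0, +1) would map to the study's actual factor ranges, which are not given here:

```python
from itertools import combinations, product
import numpy as np

def box_behnken(k, center_points=3):
    """Box-Behnken design for k factors, in coded (-1, 0, +1) units."""
    runs = []
    # Vary two factors at a time at +/-1 while all others are held at 0.
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    # Replicated center-point runs to estimate pure error.
    runs += [[0] * k] * center_points
    return np.array(runs, dtype=float)

design = box_behnken(3)
print(design.shape)  # 12 edge runs + 3 center runs -> (15, 3)
```

Each run row would then be decoded into real values of acid concentration, temperature, and time, and the measured leaching yields regressed against a second-order response-surface model.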

Keywords: eco-friendly leaching, optimization, metal recovery, leaching

Procedia PDF Downloads 35
2743 A Rationale to Describe Ambident Reactivity

Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken

Abstract:

An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
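
The kinetic bookkeeping sketched above, converting a measured pseudo-first-order rate constant into a second-order constant and a Gibbs energy of activation, can be illustrated as follows. The rate constant, concentration, and temperature below are made-up illustrative values, and a transmission coefficient of 1 is assumed in the Eyring equation:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def second_order_k(k_obs, nucleophile_conc):
    """k2 = k_obs / [Nu], valid when the nucleophile is in large excess."""
    return k_obs / nucleophile_conc

def eyring_dG(k2, T):
    """Gibbs energy of activation (J/mol) from the Eyring equation,
    assuming a transmission coefficient of 1."""
    return R * T * math.log(k_B * T / (h * k2))

# Hypothetical measured values: k_obs in s^-1, [Nu] in mol/L.
k2 = second_order_k(k_obs=1.2e-4, nucleophile_conc=0.05)
dG = eyring_dG(k2, T=298.15)
print(f"k2 = {k2:.2e} L mol^-1 s^-1, dG_activation = {dG / 1000:.1f} kJ/mol")
```

Combining such activation energies with computed standard Gibbs energies of reaction is what allows the overlapping Gibbs energy wells for the two nucleophilic sites to be drawn on a common scale.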

Keywords: ambident, Gibbs, nucleophile, rates

Procedia PDF Downloads 56
2742 Development and Evaluation of a Psychological Adjustment and Adaptation Status Scale for Breast Cancer Survivors

Authors: Jing Chen, Jun-E Liu, Peng Yue

Abstract:

Objective: The objective of this study was to develop a psychological adjustment and adaptation status scale for breast cancer survivors and to examine the reliability and validity of the scale. Method: 37 breast cancer survivors were recruited for the qualitative research; a five-dimension theoretical framework and an item pool of 150 items were derived from the interview data. In order to evaluate and select items and establish preliminary validity and reliability for the original scale, the suggestions of study group members, experts, and breast cancer survivors were taken into account, and statistical methods were applied step by step in a sample of 457 breast cancer survivors. Results: An original 24-item scale was developed. The five dimensions, “domestic affections”, “interpersonal relationships”, “attitude to life”, “health awareness”, and “self-control/self-efficacy”, explained 58.053% of the total variance. The content validity was assessed by experts; the CVI was 0.92. The construct validity was examined in a sample of 264 breast cancer survivors. The fit indexes of confirmatory factor analysis (CFA) showed a good fit of the five-dimension model. The criterion-related validity of the total scale with the PTGI was satisfactory (r=0.564, p<0.001). The internal consistency reliability and test-retest reliability were tested. Cronbach’s alpha (0.911) showed good internal consistency reliability, and the intraclass correlation coefficient (ICC=0.925, p<0.001) showed satisfactory test-retest reliability. Conclusions: The scale is brief and easy to understand and is suitable for breast cancer patients whose physical strength and energy are limited.
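
The internal-consistency statistic reported above, Cronbach's alpha, can be computed directly from an item-score matrix. A minimal sketch with synthetic five-item data standing in for the study's 24-item responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))                 # shared latent trait
scores = trait + 0.5 * rng.normal(size=(200, 5))  # 5 noisy items
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.3f}")
```

Items that all track one latent trait, as here, give a high alpha; unrelated items would pull it toward zero, which is why the scale's 0.911 is read as good internal consistency.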

Keywords: breast cancer survivors, rehabilitation, psychological adaption and adjustment, development of scale

Procedia PDF Downloads 490
2741 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation

Authors: Alae El Fahsi

Abstract:

This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.

Keywords: smart cities, digital governance, urban planning, strategic design

Procedia PDF Downloads 20
2740 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence

Authors: K. N. Kiran, S. Anish

Abstract:

It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which can lead to increased secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated with RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial-flow turbine, is used for the simulation. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and to destroy the passage vortex, in terms of loss reduction. It is observed that, in the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps reduce the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure-side leg of the horseshoe vortex and helps break up the passage vortex. Computations are carried out for different fence heights whose curvature differs from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.

Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow

Procedia PDF Downloads 322
2739 The Effect of Melatonin on Acute Liver Injury: Implication to Shift Work Related Sleep Deprivation

Authors: Bing-Fang Lee, Srinivasan Periasamy, Ming-Yie Liu

Abstract:

Shift work sleep disorder is a common problem in the industrialized world. It is a type of circadian rhythm sleep disorder characterized by insomnia and sleep deprivation. Lack of sleep in workers may lead to poor health conditions such as hepatic dysfunction. Melatonin is a hormone secreted by the pineal gland that alleviates insomnia. Moreover, it is a powerful antioxidant and may prevent acute liver injury. Therefore, whether workers should take melatonin to manage sleep-related health problems is an important issue. The aim of this study was to investigate the effect of melatonin on an acute hepatic injury model, sinusoidal obstruction syndrome (SOS), in mice. Male C57BL/6 mice were injected with a single dose (500 mg/kg) of monocrotaline (MCT) to induce SOS. Melatonin (1, 3, 10, and 30 mg/kg) was injected 1 h before MCT treatment. At 24 h after MCT treatment, the mice were sacrificed, and blood and liver were collected. Organ damage was evaluated by serum biochemistry, a hematology analyzer, and histological examination. Low doses of melatonin (1 and 3 mg/kg) had no protective effect on SOS. However, high doses (10 and 30 mg/kg) exacerbated SOS: they not only increased serum glutamate oxaloacetate transaminase (GOT) and glutamate pyruvate transaminase (GPT) and extended the liver damage indicated by histological examination, but also decreased platelet levels, the lymphocyte ratio, and the glutathione level; they had no effect on malondialdehyde and nitric oxide levels in SOS mice. To conclude, melatonin may exacerbate MCT-induced SOS in mice, and high-dose melatonin might act synergistically with MCT. The use of melatonin for insomnia by people working long shifts must be approached with caution, as it might cause acute hepatic injury.

Keywords: acute liver injury, melatonin, shift work, sleep deprivation

Procedia PDF Downloads 167
2738 Satellite Solutions for Koshi Floods

Authors: Sujan Tyata, Alison Shilpakar, Nayan Bakhadyo, Kushal K. C., Abhas Maskey

Abstract:

The Koshi River, acknowledged as the "Sorrow of Bihar," poses intricate challenges characterized by recurrent flooding. Within the Koshi Basin, floods have historically inflicted damage on infrastructure, agriculture, and settlements. The Koshi River exhibits a highly braided pattern across a 48 km stretch south of Chatara. The devastating flood from the Koshi River, which began in Nepal's Sunsari District in 2008, led to significant casualties and the destruction of agricultural areas. The catastrophe was exacerbated by a levee breach, underscoring the vulnerability of the region's flood defenses. A comprehensive understanding of environmental changes in the area is unveiled through satellite imagery analysis. This analysis facilitates the identification of high-risk zones and their contributing factors. Employing remote sensing, the analysis specifically pinpoints locations vulnerable to levee breaches. Topographical features of the area, along with longitudinal and cross-sectional profiles of the river and levee obtained from a digital elevation model, are used in the hydrological analysis for flood assessment. To mitigate the impact of floods, the strategy involves the establishment of reservoirs upstream. Leveraging satellite data, optimal locations for water storage are identified. This approach presents a dual opportunity: not only to alleviate flood risks but also to catalyze the implementation of pumped-storage hydropower initiatives. This holistic approach addresses environmental challenges while championing sustainable energy solutions.

Keywords: flood mitigation, levee, remote sensing, satellite imagery analysis, sustainable energy solutions

Procedia PDF Downloads 28
2737 Water Governance Perspectives on the Urmia Lake Restoration Process: Challenges and Achievements

Authors: Jalil Salimi, Mandana Asadi, Naser Fathi

Abstract:

Urmia Lake (UL) has undergone a significant decline in water levels, resulting in severe environmental, socioeconomic, and health-related challenges. This paper examines the restoration process of UL from a water governance perspective. By applying a water governance model, the study evaluates the process based on six selected principles: stakeholder engagement, transparency and accountability, effectiveness, equitable water use, adaptation capacity, and water usage efficiency. The dominance of structural and physicalist approaches to water governance has led to a weak understanding of social and environmental issues, contributing to social crises. Urgent efforts are required to address the water crisis and reform water governance in the country, making water-related issues a top national priority. The UL restoration process has achieved significant milestones, including stakeholder consensus, scientific and participatory planning, environmental vision, intergenerational justice considerations, improved institutional environment for NGOs, investments in water infrastructure, transparency promotion, environmental effectiveness, and local issue resolutions. However, challenges remain, such as power distribution imbalances, bureaucratic administration, weak conflict resolution mechanisms, financial constraints, accountability issues, limited attention to social concerns, overreliance on structural solutions, legislative shortcomings, program inflexibility, and uncertainty management weaknesses. Addressing these weaknesses and challenges is crucial for the successful restoration and sustainable governance of UL.

Keywords: evaluation, restoration process, Urmia Lake, water governance, water resource management

Procedia PDF Downloads 41