Search results for: action based method

2088 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine

Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are valued for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use. In these engines, the objective is to maximize acceleration while reducing exhaust emissions to meet international standards. In high-torque, low-speed engines the requirements are altogether different. These engines are mostly used in the maritime and agriculture industries and in static applications such as compressor engines. Unfortunately, due to a lack of research and development, these engines have low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder, and the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process is of great importance. In this research, we examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. The KH-ACT model is applied to account for the aerodynamics in the engine cylinder as well as the cavitation and turbulence generated inside the injector. It is a modified form of the most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin–Helmholtz instability. The model is evaluated extensively by performing 3-D time-dependent simulations in OpenFOAM, an open-source flow solver. Spray characteristics such as spray penetration, liquid length, spray cone angle, and Sauter mean diameter (SMD) were validated by comparing the OpenFOAM results with MATLAB. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, a decrease in liquid penetration, and an increase in the radial dispersion of the spray. All of these properties favor early evaporation of the fuel, which enhances engine efficiency.
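
Since the Sauter mean diameter is central to the comparison above, it may help to recall that it is defined as SMD = Σdᵢ³ / Σdᵢ², the diameter of a droplet having the same volume-to-surface-area ratio as the whole spray. The short Python sketch below computes it for a droplet population; the lognormal sample diameters are illustrative placeholders, not output from the OpenFOAM simulations.

```python
import numpy as np

def sauter_mean_diameter(diameters_um):
    """Sauter mean diameter (SMD / D32): sum(d^3) / sum(d^2)."""
    d = np.asarray(diameters_um, dtype=float)
    return (d**3).sum() / (d**2).sum()

# Illustrative droplet population (micrometres); real values would come
# from the spray parcels of the OpenFOAM simulation.
rng = np.random.default_rng(0)
diameters = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=10_000)

print(f"SMD = {sauter_mean_diameter(diameters):.1f} um")
```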

Keywords: Kelvin–Helmholtz instability, OpenFOAM, primary breakup, Sauter mean diameter, turbulence

Procedia PDF Downloads 212
2087 Functional Impairment in South African Children with ADHD: Design, Implementation and Evaluation of a Targeted Intervention

Authors: Mareli Fischer, Kevin G. F. Thomas

Abstract:

Although Attention-Deficit/Hyperactivity Disorder (ADHD) is one of the most prevalent childhood neurobehavioural disorders, little empirical research has been published on its clinical presentation in Africa, and, globally, few studies evaluate ADHD intervention programs that emphasize parent training. Hence, Stage 1 of this research programme aimed to describe the functional impairment of South African children with ADHD, and also sought to investigate the influence of sociodemographic variables (e.g., sex, age, socioeconomic status, family environment) and clinical variables (e.g., ADHD subtype and comorbidity) on the degree of that impairment. We used the Mini International Neuropsychiatric Interview for Children and Adolescents as a diagnostic tool, and the Child Behavior Checklist, the Strengths and Difficulties Questionnaire, and the Impairment Rating Scale as measures of functional impairment. Results from this stage of the research indicated that South African children and adolescents who meet diagnostic criteria for ADHD experience most functional impairment in the school domain, as well as in the area of social functioning. None of the measured sociodemographic variables had a significant detrimental or protective effect on how ADHD symptoms impacted functioning. In terms of comorbidity, Major Depressive Disorder, Conduct Disorder, and Oppositional Defiant Disorder were all associated with significantly impaired overall functioning. Stage 2 of the research programme aimed to design, implement, and evaluate a child-specific intervention that targeted the primary areas of impairment identified in Stage 1. Existing literature suggests that a positive parent-training programme, in the group format, is one of the best options for cost-effective and successful ADHD intervention. Hence, the intervention took that form. Parents were taught basic behaviour analysis concepts within a supportive group context. Evaluation of the intervention’s efficacy used many of the same measures as in Stage 1, but also featured semi-structured interviews with participants and naturalistic observation of parent-child interaction. We will discuss preliminary results of that evaluation. Studying functional impairment and designing interventions in this way will pave the way for evidence-based treatment plans for children and adolescents diagnosed with ADHD.

Keywords: attention deficit/hyperactivity disorder, children, intervention, parenting groups

Procedia PDF Downloads 431
2086 The Impact of Dog-Assisted Wellbeing Intervention on Student Motivation and Affective Engagement in the Primary and Secondary School Setting

Authors: Yvonne Howard

Abstract:

This project, currently under development, is centered on current learning processes, including a thorough literature review and ongoing practical experience gained as a deputy head in a school. These daily experiences with students engaging in animal-assisted interventions and the school therapy dog form a strong base for this research. The primary objective of this research is to comprehensively explore the impact of dog-assisted well-being interventions on student motivation and affective engagement within primary and secondary school settings. The educational domain currently encounters a significant challenge due to the lack of substantial research in this area. Although the perceived positive outcomes of such interventions are acknowledged and shared in various settings, the evidence supporting their effectiveness in an educational context remains limited. This study aims to bridge the gap in the research and shed light on the potential benefits of dog-assisted well-being interventions in promoting student motivation and affective engagement. The significance of this topic lies in the recognition that education is not solely confined to academic achievement but encompasses the overall well-being and emotional development of students. Over recent years, there has been a growing interest in animal-assisted interventions, particularly in healthcare settings, and this interest has extended to the educational context. While the effectiveness of these interventions has been explored in other fields, the educational sector lacks comprehensive research in this regard. Through a systematic and thorough research methodology, this study seeks to contribute valuable empirical data to the field, providing evidence to support informed decision-making regarding the implementation of dog-assisted well-being interventions in schools. This research will utilize a mixed-methods design, combining qualitative and quantitative measures to assess the research objectives. The quantitative phase will include surveys and standardized scales to measure student motivation and affective engagement, while the qualitative phase will involve interviews and observations to gain in-depth insights from students, teachers, and other stakeholders. The findings will contribute evidence-based insights, best practices, and practical guidelines for schools seeking to incorporate dog-assisted interventions, ultimately enhancing student well-being and improving educational outcomes.

Keywords: therapy dog, wellbeing, engagement, motivation, AAI, intervention, school

Procedia PDF Downloads 78
2085 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)

Authors: Ahmed E. Hodaib, Mohamed A. Hashem

Abstract:

In engineering applications, a design has to be as close to perfect as possible for a defined case. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem; this process is called optimization. Generally, there is a function called the “objective function” that is to be maximized or minimized by choosing input parameters called “degrees of freedom” within an allowed domain called the “search space” and computing the values of the objective function for these inputs. The problem becomes more complex when there is more than one objective for the design. An example of a Multi-Objective Optimization Problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used, which is a curve plotting the two objective functions for the best trade-off cases. At this point, the designer should make a decision and choose a point on the curve. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss Evolutionary Algorithms (EA), which are widely used for multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are designed to converge to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, they belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point by using operators such as selection, combination, cross-over, and/or mutation. These operators are applied to the old solutions (“parents”) so that new sets of design variables, called “children”, appear. The process is repeated until the optimal solution to the problem is reached. Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive applications. Coupling of Computational Fluid Dynamics “CFD” and Multi-Objective Evolutionary Algorithms “MOEA” has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
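
As a minimal, self-contained illustration of the evolutionary loop sketched above (random initialization, Pareto-based selection, mutation), the following Python snippet solves a classic two-objective toy problem; it is not the CFD-coupled MOEA discussed in the paper, and the truncation rule and mutation step are arbitrary choices.

```python
import random

# Toy two-objective problem (Schaffer N.1): minimize f1(x) = x^2 and f2(x) = (x - 2)^2.
def objectives(x):
    return (x * x, (x - 2.0) ** 2)

def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(pop):
    scored = [(x, objectives(x)) for x in pop]
    return [x for x, fx in scored if not any(dominates(fy, fx) for _, fy in scored)]

random.seed(1)
pop = [random.uniform(-10.0, 10.0) for _ in range(40)]          # random initial "parents"
for _ in range(50):                                             # generations
    parents = pareto_front(pop)                                 # selection: keep non-dominated
    if len(parents) > 20:                                       # simple truncation keeps the population bounded
        parents = random.sample(parents, 20)
    children = [p + random.gauss(0.0, 0.3) for p in parents for _ in range(2)]  # mutation -> "children"
    pop = parents + children

front = sorted(pareto_front(pop))
print("Approximate Pareto set (x values):", [round(x, 2) for x in front[:10]])
```

For this toy problem the Pareto-optimal designs lie in the interval x ∈ [0, 2], so the printed values should cluster there after a few dozen generations.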

Keywords: mathematical optimization, multi-objective evolutionary algorithms "MOEA", computational fluid dynamics "CFD", aerodynamic shape optimization

Procedia PDF Downloads 256
2084 The Effectiveness of Blended Learning in Pre-Registration Nurse Education: A Mixed Methods Systematic Review and Meta-Analysis

Authors: Albert Amagyei, Julia Carroll, Amanda R. Amorim Adegboye, Laura Strumidlo, Rosie Kneafsey

Abstract:

Introduction: Classroom-based learning has persisted as the mainstream model of pre-registration nurse education. This model is often rigid, teacher-centered, and unable to support active learning and the practical learning needs of nursing students. Health Education England (HEE), a public body of the Department of Health and Social Care, hypothesises that blended learning (BL) programmes may address health system and nursing profession challenges, such as nursing shortages and lack of digital expertise, by exploring opportunities for providing predominantly online, remote-access study, which may increase nursing student recruitment by offering pathways into nursing other than the traditional classroom route. This study will provide evidence for blended learning strategies adopted in nursing education and examine nursing students' learning experiences concerning the challenges and opportunities of using blended learning within nursing education. Objective: This review will explore the challenges and opportunities of BL within pre-registration nurse education from the student's perspective. Methods: The search was completed in five databases. Eligible studies were appraised independently by four reviewers. The JBI convergent segregated approach for mixed methods reviews was used to assess and synthesize the data. The study’s protocol has been registered with the International Prospective Register of Systematic Reviews, PROSPERO (CRD42023423532). Results: Twenty-seven (27) studies (21 quantitative and 6 qualitative) were included in the review. The study confirmed that BL positively impacts nursing students' learning outcomes, as demonstrated by the findings of the meta-analysis and meta-synthesis. Conclusion: The review compared BL to traditional learning, simulation, laboratory, and online learning with respect to nursing students’ learning and programme outcomes as well as learning behaviour and experience. The results show that BL could effectively improve nursing students’ knowledge, academic achievement, critical skills, and clinical performance, as well as enhance learner satisfaction and programme retention. The review findings outline that students’ background characteristics and BL design and format significantly impact the success of a BL nursing programme.

Keywords: nursing student, blended learning, pre-registration nurse education, online learning

Procedia PDF Downloads 50
2083 The Anesthesia Considerations in Robotic Mastectomies

Authors: Amrit Vasdev, Edwin Rho, Gurinder Vasdev

Abstract:

Robotic surgery has enabled a new spectrum of minimally invasive breast reconstruction by improving visualization, surgeon positioning, and patient outcomes [1]. The DaVinci robot system can be utilized in nipple-sparing mastectomies and reconstructions. The process involves insufflation of the subglandular space and dissection of the mammary gland with a combination of cautery and blunt dissection. This case outlines a 35-year-old woman who has a long-standing family history of breast cancer and a diagnosis of a deleterious BRCA2 genetic mutation. She has decided to proceed with bilateral nipple-sparing mastectomies with implants. Her perioperative mammogram and MRI were negative for masses; however, her left internal mammary lymph node was enlarged. She has taken oral contraceptive pills for 3-5 years and denies DES exposure, radiation therapy, hormone replacement therapy, or prior breast surgery. She does not smoke and rarely consumes alcohol. During the procedure, the patient received a standardized anesthetic for out-patient surgery consisting of a propofol infusion, succinylcholine, sevoflurane, and fentanyl. Aprepitant was given as an antiemetic, and preoperative Tylenol and gabapentin were given for pain management. Concerns for the patient during the procedure included CO2 insufflation into the subcutaneous space. With CO2 insufflation, there is a potential for rapid uptake leading to severe acidosis, embolism, and subcutaneous emphysema [2]. To mitigate this, it is important to hyperventilate the patient and reduce both the insufflation pressure and the CO2 flow rate to the minimum acceptable to the surgeon. For intraoperative monitoring during this 6-9 hour long procedure, it has been suggested to utilize an arterial line for CO2 monitoring. However, in this case, it was not necessary as the patient had excellent cardiovascular reserve, and end-tidal CO2 was within normal limits for the duration of the procedure. A BIS monitor was also utilized to reduce anesthesia burden and to facilitate a prompt discharge from the PACU. Minimally invasive robotic surgery will continue to evolve, and anesthesiologists need to be prepared for the new challenges ahead. Based on our limited number of patients, robotic mastectomy appears to be a safe alternative to open surgery with the promise of clearer tissue demarcation and better cosmetic results.

Keywords: anesthesia, mastectomies, robotic, hypercarbia

Procedia PDF Downloads 112
2082 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis

Authors: Michael Lognoul

Abstract:

In April 2021, the European Commission released its AI Act Proposal, which is the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on the respect of fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, of the content of the obligations, and of their practical implications. On the scope of the transparency obligations tailored for high-risk AI systems, the research notably notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might potentially raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems. However, on the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information should be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, should be included in the Proposal in addition to transparency obligations.

Keywords: AI Act proposal, explainability of AI, high-risk AI systems, transparency requirements

Procedia PDF Downloads 317
2081 Using Photogrammetric Techniques to Map the Mars Surface

Authors: Ahmed Elaksher, Islam Omar

Abstract:

For many years, the Martian surface has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources to explore Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images for generating a more accurate and trustworthy surface of Mars. The MOLA data was interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D to 2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. A set of tie points was digitized from both datasets. These points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters. Further increasing the number of GCPs did not improve the results significantly. Using the 3D to 2D transformation parameters provided two to three meters accuracy. The best results were reported using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect. The results support the use of the DLT model as it provides the required accuracy for ASPRS large-scale mapping standards. However, a well-distributed set of GCPs is key to providing such accuracy. The model is simple to apply and does not need substantial computations.
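
To make the DLT step concrete, the sketch below estimates the eleven DLT coefficients from ground control points by linear least squares and reports the horizontal RMSE at held-out check points, mirroring the evaluation described above. The coordinates are synthetic stand-ins, not the MOLA/HiRISE data used in the study.

```python
import numpy as np

def fit_dlt(obj_pts, img_pts):
    """Estimate the 11 DLT coefficients L1..L11 from 3D object points and 2D image points."""
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z]); b.append(x)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z]); b.append(y)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def apply_dlt(L, obj_pts):
    P = np.asarray(obj_pts, float)
    den = P @ L[8:11] + 1.0
    return np.column_stack([(P @ L[0:3] + L[3]) / den, (P @ L[4:7] + L[7]) / den])

# Synthetic example: generate image coordinates from known coefficients, fit on 10 "GCPs",
# then evaluate the horizontal RMSE on 5 held-out "check points".
rng = np.random.default_rng(42)
L_true = np.array([1.0, 0.1, 0.05, 10.0, 0.02, 1.0, 0.03, -5.0, 1e-4, 5e-5, 2e-5])
obj = rng.uniform(0.0, 1000.0, size=(15, 3))
img = apply_dlt(L_true, obj) + rng.normal(0.0, 0.5, size=(15, 2))   # 0.5-unit measurement noise

L_est = fit_dlt(obj[:10], img[:10])
resid = apply_dlt(L_est, obj[10:]) - img[10:]
rmse = np.sqrt(np.mean(np.sum(resid**2, axis=1)))
print(f"check-point RMSE: {rmse:.2f} units")
```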

Keywords: Mars, photogrammetry, MOLA, HiRISE

Procedia PDF Downloads 57
2080 Economics of Precision Mechanization in Wine and Table Grape Production

Authors: Dean A. McCorkle, Ed W. Hellman, Rebekka M. Dudensing, Dan D. Hanselka

Abstract:

The motivation for this study centers on the labor- and cost-intensive nature of wine and table grape production in the U.S., and the potential opportunities for precision mechanization using robotics to augment those production tasks that are labor-intensive. The objectives of this study are to evaluate the economic viability of grape production in five U.S. states under current operating conditions, identify common production challenges and tasks that could be augmented with new technology, and quantify a maximum price for new technology that growers would be able to pay. Wine and table grape production is primed for precision mechanization technology as it faces a variety of production and labor issues. Methodology: Using a grower panel process, this project includes the development of a representative wine grape vineyard in five states and a representative table grape vineyard in California. The panels provided production, budget, and financial-related information that are typical for vineyards in their area. Labor costs for various production tasks are of particular interest. Using the data from the representative budget, 10-year projected financial statements have been developed for the representative vineyard and evaluated using a stochastic simulation model approach. Labor costs for selected vineyard production tasks were evaluated for the potential of new precision mechanization technology being developed. These tasks were selected based on a variety of factors, including input from the panel members, and the extent to which the development of new technology was deemed to be feasible. The net present value (NPV) of the labor cost over seven years for each production task was derived. This allowed for the calculation of a maximum price for new technology whereby the NPV of labor costs would equal the NPV of purchasing, owning, and operating new technology. Expected Results: The results from the stochastic model will show the projected financial health of each representative vineyard over the 2015-2024 timeframe. Investigators have developed a preliminary list of production tasks that have the potential for precision mechanization. For each task, the labor requirements, labor costs, and the maximum price for new technology will be presented and discussed. Together, these results will allow technology developers to focus and prioritize their research and development efforts for wine and table grape vineyards, and suggest opportunities to strengthen vineyard profitability and long-term viability using precision mechanization.
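
The labor-cost NPV and the implied maximum technology price described above follow directly from standard discounting, NPV = Σ Cₜ/(1 + r)ᵗ. The sketch below illustrates the calculation; the labor costs, operating costs, and discount rate are hypothetical figures, not values from the grower panels.

```python
def npv(cash_flows, rate):
    """Net present value of a series of end-of-year cash flows at a given discount rate."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical inputs (illustrative only): annual labor cost for one production
# task over seven years, annual operating cost of the replacement technology,
# and the grower's discount rate.
labor_cost = [12_000] * 7          # $/year spent on hand labor for the task
tech_operating_cost = [2_500] * 7  # $/year to run and maintain the machine
rate = 0.06

labor_npv = npv(labor_cost, rate)
# Maximum purchase price: the price at which buying, owning, and operating the
# technology has the same NPV as continuing to pay for the labor it replaces.
max_price = labor_npv - npv(tech_operating_cost, rate)
print(f"NPV of labor cost over 7 years: ${labor_npv:,.0f}")
print(f"Maximum justifiable technology price: ${max_price:,.0f}")
```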

Keywords: net present value, robotic technology, stochastic simulation, wine and table grapes

Procedia PDF Downloads 260
2079 Comparative Effect of Microbial Phytase Supplementation on Layer Chickens Fed Diets with Required or Low Phosphorus Level

Authors: Hamada Ahmed, Mervat A. Abdel-Latif, Alaa A. Ghoraba, Samah A. Ganna

Abstract:

An experiment was conducted to determine the effect of microbial phytase (Quantum Blue®) supplementation on layer chickens fed corn-soybean based diets with the required or a low phosphorus level. One hundred and sixteen 23-week-old Lohmann Brown laying hens were used in an 8-week feeding trial. Hens were randomly allotted to four treatments: group (1) (control) was fed the basal diet without phytase, group (2) was fed the basal diet supplemented with phytase, group (3) was fed a diet supplemented with phytase as a replacement of 25% of the monocalcium phosphate, and group (4) was fed a diet supplemented with phytase as a replacement of 50% of the monocalcium phosphate. Daily egg production, egg mass, egg weight, and body weight of hens at the end of the experimental period were recorded. Results revealed no significant (p ≥ 0.05) differences among the dietary treatments in BW, egg production, egg mass, feed intake, or feed conversion when these parameters were evaluated over the duration of the experiment, while egg weight showed a significant (p < 0.05) increase in all phytase-supplemented groups. There were no significant (p ≥ 0.05) differences in egg quality, including egg length, egg width, egg shape index, yolk height, yolk width, yolk index, yolk weight, and yolk-albumin ratio, while egg albumin was significantly increased (p < 0.05) in groups (2) and (3). Egg shell weight increased significantly (p < 0.05) in all phytase-supplemented groups when compared with the control group, and shell thickness also increased significantly (p < 0.05) in groups (2) and (3). No significant (p ≥ 0.05) difference was observed in serum Ca and P levels, while alkaline phosphatase was significantly (p < 0.05) increased in group (3). Egg shell analysis showed an increase in egg shell ash % in all phytase-supplemented groups when compared with the control group; egg shell calcium % was higher in groups (3) and (4) than in the control group, while group (2) showed a lower egg shell calcium % than the other experimental groups; egg shell phosphorus % was higher in all phytase-supplemented groups than in the control group. Phosphorus digestibility was significantly (p < 0.05) higher in all phytase-supplemented groups than in the control group, and the highest P digestibility was in group (4). Calcium digestibility showed a significant (p < 0.05) increase in all phytase-supplemented groups when compared with the control group, and the highest digestibility was in group (4).

Keywords: layers, microbial phytase, Ca and P availability, egg production, egg characteristics

Procedia PDF Downloads 187
2078 Compromising Quality of Life in Low-Income Settlements: The Case of Ashrayan Prakalpa, Khulna

Authors: Salma Akter, Md. Kamal Uddin

Abstract:

Ashrayan (shelter) Prakalpa is a fully subsidized ‘integrated poverty eradication program’ of the Government of Bangladesh (GoB) that provides shelter to the internally displaced and homeless. In spite of the inclusiveness of the shelter policy (poverty alleviation, employment opportunity, tenureship, and training), the dwellers’ ‘quality of life’ remains in question. This study demonstrates how top-down policies and the ambiguous ownership status of land and dwelling environments lead to ‘everyday compromise’ by the grassroots in both subjective (satisfaction, comfort, and safety) and objective (physical design elements and physical environmental elements) issues at three respective scales: macro (neighborhood), meso (shelter/built environment), and micro (family). It shows that, by becoming subject to the Government’s resettlement policies and after becoming users of its shelter units (locally known as ‘barracks’ rather than shelter or housing), the once displaced settlers assume a curious form of spatial practice where both the social and the spatial often bear slippery meanings. Thus, policy-based shelter forces the dwellers to compromise frequently with the built environments and spaces provided within the settlements, both overtly and covertly. Compromises are made during the production of space and forms, whereby interesting new spaces and space-making practices emerge. The settlement under study is the Dakshin Chandani Mahal Ashrayan Prakalpa, located at the eastern fringe of Khulna, Bangladesh. In terms of methodology, this research is primarily exploratory and assumes a qualitative approach. Key tools used to obtain information are policy analysis, literature review, key informant interviews, focus group discussions, and participant observation at the level of dwelling and settlement. Necessary drawings and photographs have been taken to support the study objective. Findings reveal that various shortages, inadequacies, and the negligence of policymakers force the displaced to compromise their ‘quality of life’ on both objective and subjective grounds. The study therefore ends with a recommendation that policymakers take the initiative to ensure the quality of life of the dwellers.

Keywords: Ashrayan, compromise, displaced people, quality of life

Procedia PDF Downloads 338
2077 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets

Authors: Ahmed Abdel Bary, Omneya Khowessah, Mojahed Al-Jamrah

Abstract:

Introduction: The major challenge in the design of oral dosage forms lies in their poor bioavailability. The most frequent causes of low oral bioavailability are poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of Glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of Glimepiride (GMP) were prepared using two different ratios of two different carriers, namely PEG 6000 and Pluronic F127, and by adopting two different techniques, namely the solvent evaporation technique and the fusion technique. A 2³ full factorial design was adopted to investigate the influence of the formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential Scanning Calorimetry (DSC) and Fourier Transform Infra-Red (FTIR) analyses were conducted to characterize the thermal behavior and surface structure, respectively. The zeta potential and particle size of the prepared Glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed that there was no interaction between GMP and any of the excipients used. Based on the resulting values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The resulting values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, with particle sizes ranging from 111.5 nm to 492.3 nm, and the resulting zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in-vitro dissolution studies of the prepared solid nanodispersed tablets of GMP concluded that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA for the percent of drug released after 20 and 60 minutes showed significant differences between the different GMP-nanodispersed tablet formulae (F1-F8), (P<0.05). Conclusion: Preparation of Glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of Glimepiride.
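
For readers unfamiliar with the design, a 2³ full factorial simply crosses three two-level factors into eight runs, matching the eight formulae F1-F8 above. The sketch below enumerates such a design; the carrier and technique levels follow the abstract, while the drug:carrier ratio values are placeholders.

```python
from itertools import product

# Three two-level factors -> 2**3 = 8 runs (F1..F8).
# Carrier and technique levels are taken from the abstract; the drug:carrier
# ratios are hypothetical placeholders.
factors = {
    "carrier": ["PEG 6000", "Pluronic F127"],
    "drug_carrier_ratio": ["1:1", "1:2"],
    "technique": ["solvent evaporation", "fusion"],
}

design = list(product(*factors.values()))
for i, levels in enumerate(design, start=1):
    run = dict(zip(factors.keys(), levels))
    print(f"F{i}: {run}")
```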

Keywords: glimepiride, solid nanodispersion, nanodispersed tablets, poorly water soluble drugs

Procedia PDF Downloads 488
2076 Monolithic Integrated GaN Resonant Tunneling Diode Pair with Picosecond Switching Time for High-Speed Multiple-Valued Logic System

Authors: Fang Liu, JiaJia Yao, GuanLin Wu, ZuMao Li, XueYan Yang, HePeng Zhang, ZhiPeng Sun, JunShuai Xue

Abstract:

The explosively increasing needs of data processing and information storage strongly drive the advancement of the binary logic system to the multiple-valued logic system. Its inherent negative differential resistance characteristic, ultra-high-speed switching time, and robust anti-irradiation capability make the III-nitride resonant tunneling diode one of the most promising candidates for multi-valued logic devices. Here we report the monolithic integration of GaN resonant tunneling diodes in series to realize multiple negative differential resistance regions, obtaining at least three stable operating states. A multiply-by-three circuit is achieved by this combination, increasing the frequency of the input triangular wave from f0 to 3f0. The resonant tunneling diodes are grown by plasma-assisted molecular beam epitaxy on free-standing c-plane GaN substrates, comprising double barriers and a single quantum well, both controlled at the atomic level. A device with a peak current density of 183 kA/cm² in conjunction with a peak-to-valley current ratio (PVCR) of 2.07 is observed, which is the best result reported for nitride-based resonant tunneling diodes. A microwave oscillation event at room temperature was observed with a fundamental frequency of 0.31 GHz and an output power of 5.37 μW, verifying the high repeatability and robustness of our devices. The switching behavior measurement was successfully carried out, featuring rise and fall times on the order of picoseconds, which can be used in high-speed digital circuits. Limited by the measuring equipment and the layer structure, the switching time can be further improved. In general, this article presents a novel nitride device with multiple negative differential resistance regions driven by the resonant tunneling mechanism, which can be used in the high-speed multiple-valued logic field with reduced circuit complexity, demonstrating a new solution for nitride devices to break through the limitations of binary logic.
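
The peak-to-valley current ratio quoted above is the ratio of the local current maximum to the following local minimum of the I-V curve. The sketch below extracts it from a sampled sweep; the synthetic curve is only shaped to mimic the reported magnitudes (peak near 183 kA/cm², PVCR near 2.07) and is not measured data.

```python
import numpy as np

def peak_to_valley_ratio(voltage, current):
    """PVCR: local current maximum divided by the following local minimum (NDR region)."""
    i_peak = int(np.argmax(current))                       # resonant peak
    i_valley = i_peak + int(np.argmin(current[i_peak:]))   # valley after the peak
    return current[i_peak] / current[i_valley], voltage[i_peak], voltage[i_valley]

# Synthetic I-V sweep (current density in kA/cm^2), illustrative only.
v = np.linspace(0.0, 6.0, 601)
current = (88.0                                            # valley-level background
           + 95.0 * np.exp(-((v - 2.0) / 0.5) ** 2)        # resonant peak near 2 V
           + 60.0 / (1.0 + np.exp(-6.0 * (v - 5.0))))      # thermionic rise at high bias

pvcr, v_peak, v_valley = peak_to_valley_ratio(v, current)
print(f"peak at {v_peak:.2f} V, valley at {v_valley:.2f} V, PVCR = {pvcr:.2f}")
```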

Keywords: GaN resonant tunneling diode, negative differential resistance, multiple-valued logic system, switching time, peak-to-valley current ratio

Procedia PDF Downloads 100
2075 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual’s bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized and matched with a bank’s loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate either acceptable or excellent prediction results, and the different types of data tend to complement each other to achieve better performance. Typically, the traditional types of data that banks normally use, such as income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, like the financial status changes caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage, and because they can by and large resolve the "thin-file" issue, since digital footprints come in much larger volume and at higher frequency.
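
A minimal sketch of the kind of model comparison described above is shown below, using scikit-learn on synthetic features as stand-ins for digital-footprint variables; the two classifiers are common baselines and are not necessarily the methods used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# 20 synthetic "footprint" features with an imbalanced default label (~10% defaults).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC AUC = {auc:.3f}")
```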

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 161
2074 Description of a Structural Health Monitoring and Control System Using Open Building Information Modeling

Authors: Wahhaj Ahmed Farooqi, Bilal Ahmad, Sandra Maritza Zambrano Bernal

Abstract:

From the structural engineering point of view, monitoring structural responses over time is of great importance with respect to recent developments in construction technologies. Recently, developments in advanced computing tools have enabled researchers to better execute structural health monitoring (SHM) and control systems. In the last decade, building information modeling (BIM) has substantially enhanced the workflow of planning and operating engineering structures. Typically, building information can be stored and exchanged via model files that are based on the Industry Foundation Classes (IFC) standard. In this study, a modeling approach for semantic modeling of SHM and control systems is integrated into the BIM methodology using the IFC standard. For validation of the modeling approach, a laboratory test structure, a four-story shear frame structure, is modeled using a conventional BIM software tool. An IFC schema extension is applied to describe information related to monitoring and control of a prototype SHM and control system installed on the laboratory test structure. The SHM and control system is described by a semantic model expressed in the Unified Modeling Language (UML). Subsequently, the semantic model is mapped into the IFC schema. The test structure is composed of four aluminum slabs, and the plate-to-column connections are fully fixed. In the center of the top story, a semi-active tuned liquid column damper (TLCD) is installed. The TLCD is used to reduce the effects of structural responses in the context of dynamic vibration and displacement. The wireless prototype SHM and control system is composed of wireless sensor nodes. For testing the SHM and control system, the acceleration response is automatically recorded by the sensor nodes equipped with accelerometers and analyzed using embedded computing. As a result, SHM and control systems can be described within open BIM, and dynamic responses and damage information can be stored, documented, and exchanged on the formal basis of the IFC standard.
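
To make the semantic-model-to-IFC mapping more tangible, the sketch below shows one way a wireless accelerometer node could be described as a small data model and exported as an IFC-style property set. The class, attribute, and property-set names are illustrative assumptions; the paper's UML model and IFC schema extension are richer and differ in detail.

```python
from dataclasses import dataclass

@dataclass
class AccelerometerNode:
    """Illustrative semantic model of one wireless SHM sensor node (names are assumptions)."""
    node_id: str
    story: int                      # story of the shear frame the node is mounted on
    sampling_rate_hz: float
    measured_quantity: str = "Acceleration"

    def to_ifc_property_set(self) -> dict:
        """Map the node to a flat, IFC-style property set (hypothetical Pset name)."""
        return {
            "Name": "Pset_SHM_SensorNode",   # assumed property-set name, not from the IFC standard
            "Properties": {
                "NodeId": self.node_id,
                "Story": self.story,
                "SamplingRate": self.sampling_rate_hz,
                "MeasuredQuantity": self.measured_quantity,
            },
        }

node = AccelerometerNode(node_id="N-04", story=4, sampling_rate_hz=100.0)
print(node.to_ifc_property_set())
```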

Keywords: structural health monitoring, open building information modeling, industry foundation classes, unified modeling language, semi-active tuned liquid column damper, nondestructive testing

Procedia PDF Downloads 151
2073 Synthetic Method of Contextual Knowledge Extraction

Authors: Olga Kononova, Sergey Lyapin

Abstract:

Global information society requirements are transparency and reliability of data, as well as the ability to manage information resources independently, particularly to search, analyze, and evaluate information, thereby obtaining new expertise. Moreover, it is the satisfaction of society's information needs that increases the efficiency of enterprise management and public administration. The study of structurally organized thematic and semantic contexts of different types, automatically extracted from unstructured data, is one of the important tasks for the application of information technologies in education, science, culture, governance, and business. The objectives of this study are the typologization of contextual knowledge and the selection or creation of effective tools for extracting and analyzing contextual knowledge. Explication of various kinds and forms of contextual knowledge involves the development and use of full-text search information systems. For implementation purposes, the authors use the services of the e-library 'Humanitariana', such as contextual search, different types of queries (paragraph-oriented query, frequency-ranked query), and automatic extraction of knowledge from scientific texts. The multifunctional e-library 'Humanitariana' is realized in an Internet architecture in a WWS configuration (Web browser / Web server / SQL server). An advantage of using 'Humanitariana' is the possibility of combining the resources of several organizations. Scholars and research groups may work in a local network mode and in distributed IT environments with the ability to access the resources of the servers of any participating organization. The paper discusses some specific cases of contextual knowledge explication with the use of the e-library services and focuses on the possibilities of new types of contextual knowledge. The experimental research base consists of scientific texts about 'e-government' and 'computer games'. An analysis of trends in the subject-themed texts allowed the authors to propose a content analysis methodology that combines full-text search with the automatic construction of a 'terminogramma' and expert analysis of the selected contexts. A 'terminogramma' is laid out as a table that contains a column with a frequency-ranked list of words (nouns), as well as columns indicating the absolute frequency (count) and the relative frequency of occurrence of each word (in % or ppm). The analysis of the 'e-government' materials showed that the state takes a dominant position in the processes of electronic interaction between the authorities and society in modern Russia. The media credited the main role in these processes to the government, which provided public services through specialized portals. Factor analysis revealed two factors statistically describing the terms used: human interaction (the user) and the state (the government, as organizer of the processes); interaction management (the public officer, as performer of the processes) and technology (infrastructure). Isolation of these factors will lead to changes in the model of electronic interaction between government and society. In this study, the dominant social problems and the prevalence of different categories of subjects of computer gaming in scientific papers from 2005 to 2015 were also identified. Several types of contextual knowledge are thereby identified: micro context; macro context; dynamic context; thematic collections of queries (interactive contextual knowledge expanding the composition of the e-library information resources); and multimodal context (functional integration of iconographic and full-text resources through a hybrid quasi-semantic search algorithm). Further studies can be pursued both in terms of expanding the resource base on which they are held and in terms of the development of appropriate tools.
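
A minimal sketch of the 'terminogramma' construction described above: tokenize a text, rank words by frequency, and report absolute and relative frequencies. Real use would require Russian tokenization and part-of-speech filtering to keep only nouns; the naive tokenizer and the sample sentence here are illustrative assumptions.

```python
from collections import Counter
import re

def terminogramma(text, top_n=10):
    """Frequency-ranked word table: word, absolute frequency, relative frequency (%)."""
    tokens = re.findall(r"[a-zа-яё]+", text.lower())   # naive tokenizer; no POS filtering
    counts = Counter(tokens)
    total = sum(counts.values())
    return [(word, n, 100.0 * n / total) for word, n in counts.most_common(top_n)]

sample = ("The government provides public services through portals. "
          "The government and society interact through electronic services.")
for word, n, rel in terminogramma(sample, top_n=5):
    print(f"{word:12s} {n:3d} {rel:6.2f}%")
```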

Keywords: contextual knowledge, contextual search, e-library services, frequency-ranked query, paragraph-oriented query, technologies of the contextual knowledge extraction

Procedia PDF Downloads 359
2072 The Effect of Artificial Intelligence on Physical Education Analysis and Sports Science

Authors: Peter Adly Hamdy Fahmy

Abstract:

The aim of the study was to examine the effects of a physical education program on student learning by comparing the teaching of personal and social responsibility (TPSR) combined with a physical education model against TPSR combined with a traditional teaching model, with learning outcomes including sports self-efficacy, athletic performance, enthusiasm for sport, group cohesion, sense of responsibility, and game performance. The participants were 3 secondary school physical education teachers and 6 physical education classes with a total of 133 students: 75 students in the experimental group and 58 students in the control group. Each teacher taught both an experimental class and a control class for 16 weeks. The research methods included surveys, interviews, and focus group meetings. Research instruments included the Personal and Social Responsibility Questionnaire, the Sports Enthusiasm Scale, the Group Cohesion Scale, the Sports Self-Efficacy Scale, and the Game Performance Assessment Tool. Multivariate analyses of covariance and repeated measures ANOVA were used to examine differences in student learning outcomes between the TPSR-physical education model and the TPSR-traditional teaching model. The research findings are as follows: 1) The TPSR sports education model can improve students' learning outcomes, including sports self-efficacy, game performance, sports enthusiasm, team cohesion, group awareness, and responsibility. 2) The traditional teaching model with TPSR could improve student learning outcomes, including sports self-efficacy, responsibility, and game performance. 3) The sports education model with TPSR could improve learning outcomes more than the traditional teaching model with TPSR, including sports self-efficacy, sports enthusiasm, responsibility, and game performance. 4) Based on qualitative data on teachers' and students' learning experience, the physical education model with TPSR significantly improves learning motivation, group interaction, and sense of play. The results suggest that physical education with TPSR could further improve learning outcomes in the physical education program. On the other hand, the hybrid curriculum projects TPSR-physical education and TPSR-traditional education are good curriculum projects for moral character education that can be used in school physical education.

Keywords: approach competencies, physical education, teachers employment, graduate, physical education and sport sciences, SWOT analysis, character education, sport season, game performance, sport competence

Procedia PDF Downloads 59
2071 An Empirical Study for the Data-Driven Digital Transformation of the Indian Telecommunication Service Providers

Authors: S. Jigna, K. Nanda Kumar, T. Anna

Abstract:

Being a major contributor to the Indian economy and a critical facilitator of the country's Digital India vision, the Indian telecommunications industry is also a major source of employment for the country. Over the last few years, however, the Indian telecommunication service providers (TSPs) have been facing business challenges related to increasing competition, losses, debts, and decreasing revenue. The strategic use of digital technologies for a successful digital transformation has the potential to equip organizations to meet these business challenges. Despite an increased focus on digital transformation, telecom service providers globally, including Indian TSPs, have seen limited success so far. The purpose of this research was thus to identify the factors that are critical for digital transformation and the extent to which they influence the successful digital transformation of the Indian TSPs. A literature review of more than 300 digital transformation-related articles, mostly from 2013-2019, demonstrated the lack of an empirical model consisting of factors for the successful digital transformation of TSPs. This study theorizes a research framework grounded in multiple theories and proposes a research model consisting of 7 constructs that may influence business success during the digital transformation of the organization. A questionnaire survey of senior managers in the Indian telecommunications industry was conducted to validate the research model. Based on 294 survey responses, the validation of the structural equation model using the statistical tool ADANCO 2.1.1 was found to be robust. Results indicate that Digital Capabilities, Digital Strategy, and Corporate Level Data Strategy, in that order, have a strong influence on successful Business Performance, followed by IT Function Transformation, Digital Innovation, and Transformation Management, respectively. Even though Digital Organization did not have a direct significant effect on Business Performance outcomes, it had a strong influence on IT Function Transformation, thus affecting Business Performance outcomes indirectly. Among the numerous practical and theoretical contributions of the study, the main contribution for the Indian TSPs is a validated reference for prioritizing the transformation initiatives in their strategic roadmap. The main contribution to theory is the possibility of using the research framework artifact of the present research for quantitative validation in different industries and geographies.

Keywords: corporate level data strategy, digital capabilities, digital innovation, digital strategy

Procedia PDF Downloads 129
2070 Cultural Collisions, Ethics and HIV: On Local Values in a Globalized Medical World

Authors: Norbert W. Paul

Abstract:

In 1988, parts of the scientific community still heralded findings to support that AIDS was likely to remain largely a ‘gay disease’. The value-laden terminology of some of the articles suggested that the rectum and fragile urethra are not sufficiently robust to provide a barrier against infectious fluids, especially body fluids contaminated with HIV, while the female vagina would provide natural protection against the injuries and trauma facilitating HIV infection. Anal sexual intercourse was constituted not only as a dangerous but also as an unnatural practice, while penile-vaginal intercourse would follow natural design and thus be a relatively safe practice minimizing the risk of HIV. Statements like the latter were not uncommon in the early times of HIV/AIDS and contributed to captious certainties and an underestimation of heterosexual risks. Pseudo-scientific discourses on the origin of HIV were linked to local and global health politics in the 1980s. The pathways of infection were related to normative concepts like deviant, subcultural behavior, cultural otherness, and guilt, used to target, tag, and separate specific groups at risk from the ‘normal’ population. Controlling populations at risk became the top item on the agenda rather than controlling modes of transmission and the virus. Hence, the Thai strategy to cope with HIV/AIDS by acknowledging social and sexual practices as they were – not as they were imagined – has become a role model for successful prevention in the highly scandalized realm of sexually transmitted disease. By accepting the globalized character of local HIV risk and projecting the risk onto populations which are neither particularly vocal groups nor vested with the means to strive for health and justice, Thailand managed to culturally implement knowledge-based tools of prevention. This paper argues that pertinent cultural collisions regarding our strategies to cope with HIV/AIDS are deeply rooted in misconceptions, misreadings, and scandalizations brought about in the early history of HIV in the 1980s. The Thai strategy is used to demonstrate how local values can be balanced against globalized health risks and used to effectuate prevention by which knowledge and norms are translated into local practices. Issues of global health and injustice will be addressed in the final part of the paper, dealing with the achievability of health as a human right.

Keywords: bioethics, HIV, global health, justice

Procedia PDF Downloads 261
2069 Towards Modern Approaches of Intelligence Measurement for Clinical and Educational Practices

Authors: Alena Kulikova, Tatjana Kanonire

Abstract:

Intelligence research is one of the oldest fields of psychology. Many factors have made research on intelligence, defined as reasoning and problem solving [1, 2], a very acute and urgent problem. Thus, it has been repeatedly shown that intelligence is a predictor of academic, professional, and social achievement in adulthood (for example, [3]); moreover, intelligence predicts these achievements better than any other trait or ability [4]. At the individual level, a comprehensive assessment of intelligence is a necessary criterion for the diagnosis of various mental conditions. For example, it is a necessary condition for psychological, medical, and pedagogical commissions when deciding on educational needs and the most appropriate educational programs for school children. Assessment of intelligence is crucial in clinical psychodiagnostics and needs high-quality intelligence measurement tools. Therefore, it is not surprising that the development of intelligence tests is an essential part of psychological science and practice. Many modern intelligence tests have a long history and have been used for decades, for example, the Stanford-Binet test or the Wechsler test. However, the vast majority of these tests are based on the classic linear test structure, in which all respondents receive all tasks (see, for example, a critical review by [5]). This understanding of the testing procedure is a legacy of the pre-computer era, in which paper-based (blank) testing was the only diagnostic procedure available [6], and it has some significant limitations that affect the reliability of the data obtained [7] and increase time costs. Another problem with measuring IQ is that classical linearly structured tests do not fully allow measurement of a respondent's intellectual progress [8], which is undoubtedly a critical limitation. Advances in modern psychometrics make it possible to avoid the limitations of existing tools. However, as in any rapidly developing field, psychometrics does not at the moment offer ready-made and straightforward solutions and requires additional research. In our presentation, we would like to discuss the strengths and weaknesses of the current approaches to intelligence measurement and highlight the 'points of growth' for creating a test in accordance with modern psychometrics: whether it is possible to create an instrument that uses all the achievements of modern psychometrics and remains valid and practically oriented, and what the possible limitations of such an instrument would be. The theoretical framework and study design for creating and validating an original Russian comprehensive computer test for measuring intellectual development in school-age children will be presented.
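
One modern psychometric alternative to the fixed linear structure criticized above is computerized adaptive testing, in which each next item is the one that is most informative at the examinee's current ability estimate. The sketch below illustrates that selection rule for a two-parameter logistic item bank; the item parameters and the simple grid-search ability estimate are illustrative assumptions, not part of the instrument under development.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

def estimate_theta(responses, grid=np.linspace(-4, 4, 161)):
    """Maximum-likelihood ability estimate over a coarse grid."""
    loglik = np.zeros_like(grid)
    for resp, ai, bi in responses:
        p = p_correct(grid, ai, bi)
        loglik += resp * np.log(p) + (1 - resp) * np.log(1.0 - p)
    return grid[np.argmax(loglik)]

rng = np.random.default_rng(0)
a_bank = rng.uniform(0.8, 2.0, 50)              # discrimination parameters (assumed)
b_bank = rng.uniform(-2.5, 2.5, 50)             # difficulty parameters (assumed)
true_theta, theta_hat = 0.7, 0.0
administered = np.zeros(a_bank.size, dtype=bool)
responses = []

for _ in range(15):                             # administer 15 adaptive items
    info = item_information(theta_hat, a_bank, b_bank)
    info[administered] = -np.inf                # never repeat an item
    j = int(np.argmax(info))                    # most informative item at the current estimate
    administered[j] = True
    resp = int(rng.random() < p_correct(true_theta, a_bank[j], b_bank[j]))
    responses.append((resp, a_bank[j], b_bank[j]))
    theta_hat = estimate_theta(responses)

print(f"true theta = {true_theta}, estimated theta = {theta_hat:.2f}")
```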

Keywords: intelligence, psychometrics, psychological measurement, computerized adaptive testing, multistage testing

Procedia PDF Downloads 80
2068 Arc Interruption Design for DC High Current/Low SC Fuses via Simulation

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This report summarizes a simulation-based approach to estimate the current interruption behavior of a fuse element utilized in a DC network protecting battery banks under different stresses. Due to the internal resistance of the batteries, the short-circuit current is very close to the nominal current, which makes the fuse design tricky. The base configuration considered in this report consists of five fuse units in parallel. The simulations are performed using a multi-physics software package, COMSOL® 5.6, and the necessary material parameters have been calculated using two other software packages. The first phase of the simulation starts with the heating of the fuse elements resulting from the current flow through the fusing element. In this phase, the heat transfer between the metallic strip and the adjacent materials results in melting and evaporation of the filler and housing before the aluminum strip is evaporated and the current flow in the evaporated strip is cut off, or an arc is eventually initiated. The initiated arc starts to expand, so the entire metallic strip is ablated, and a long arc of around 20 mm is created within the first 3 milliseconds after arc initiation (v_elongation = 6.6 m/s). The final stage of the simulation is related to the arc simulation and its interaction with the external circuitry. Because of the strong ablation of the filler material and the venting of the arc caused by the melting and evaporation of the filler and housing before an arc initiates, the arc is assumed to burn in almost pure ablated material. To be able to precisely model this arc, one more step related to the derivation of the transport coefficients of the plasma in ablated urethane was necessary. The results indicate that current interruption, in this case, will not be achieved within the first tens of milliseconds. In a further study considering two series elements, the arc was interrupted within a few milliseconds. A very important aspect in this context is the potential impact of the many broken strips parallel to the one where the arc occurs. The generated arcing voltage is also applied to the other broken strips connected in parallel with the arcing path. As the gap between the other strips is very small, a large voltage of a few hundred volts generated during the current interruption may eventually lead to a breakdown of another gap. As two arcs in parallel are not stable, one of the arcs will extinguish, and the total current will be carried by one single arc again. This process may be repeated several times if the generated voltage is very large. The ultimate result would be that the current interruption may be delayed.
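
As a zero-dimensional illustration of the pre-arcing heating phase described above, the sketch below integrates adiabatic Joule heating of an aluminum strip, dT/dt = I²R(T)/(m·c_p), until the melting point is reached. It neglects heat transfer to the filler and housing and uses assumed geometry and current, so it is only a back-of-the-envelope check, not the multi-physics COMSOL model.

```python
# Lumped, adiabatic pre-arcing estimate for an aluminum fuse strip.
# All dimensions and the current are assumed values for illustration only.
rho_e0, alpha = 2.7e-8, 4.0e-3        # Al resistivity at 20 C [ohm m], temperature coefficient [1/K]
density, c_p = 2700.0, 900.0          # Al density [kg/m^3], specific heat [J/(kg K)]
T_melt = 660.0                        # aluminum melting point [C]

length, width, thickness = 0.05, 2e-3, 0.2e-3   # strip geometry [m] (assumed)
area = width * thickness
mass = density * length * area
current = 400.0                       # assumed constant fault current per strip [A]

T, t, dt = 20.0, 0.0, 1e-4            # start at ambient, 0.1 ms time step
while T < T_melt:
    resistance = rho_e0 * (1.0 + alpha * (T - 20.0)) * length / area
    T += current**2 * resistance * dt / (mass * c_p)   # adiabatic Joule heating
    t += dt

print(f"time to reach the melting point: {t*1000:.1f} ms (adiabatic estimate)")
```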

Keywords: DC network, high current / low SC fuses, FEM simulation, parallel fuses

Procedia PDF Downloads 66
2067 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the world's population experiences epileptic seizure attacks. Due to the abrupt flow of charge, the EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the displayed EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, so many methods have evolved to detect them automatically. The initial part of this paper reviews the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review paper is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. This research paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. It is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century, involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist, and EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
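
One of the discriminating features listed above, sample entropy, can be illustrated with the following naive sketch; the synthetic signal and parameter choices are placeholders and do not reproduce the authors' processing pipeline.

```python
# Naive O(N^2) reference implementation of sample entropy, one of the features
# listed for discriminating ictal from interictal EEG. Illustrative only.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """SampEn of a 1-D signal: -ln(A/B) with tolerance r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def match_count(length):
        # One template starting at each of the first n - m samples.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += int(np.sum(dist <= r)) - 1  # exclude the self-match
        return count

    b = match_count(m)      # template matches of length m
    a = match_count(m + 1)  # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

# Synthetic 1-second segment at 256 Hz: a regular rhythm plus noise.
t = np.arange(256) / 256.0
segment = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(1).normal(size=256)
print(f"SampEn = {sample_entropy(segment):.3f}")
```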

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 406
2066 Gilgel Gibe III: Dam-Induced Displacement in Ethiopia and Kenya

Authors: Jonny Beirne

Abstract:

Hydropower developments have come to assume an important role within the Ethiopian government's overall development strategy for the country during the last ten years. The Gilgel Gibe III on the Omo River, due to become operational in September 2014, represents the most ambitious, and controversial, of these projects to date. Further aspects of the government's national development strategy include leasing vast areas of designated 'unused' land for large-scale commercial agricultural projects and 'voluntarily' villagizing scattered, semi-nomadic agro-pastoralist groups into centralized settlements so as to use land and water more efficiently and to better provide essential social services such as education and healthcare. The Lower Omo Valley, along the Omo River, is one of the sites of this villagization programme as well as of these large-scale commercial agricultural projects, which are made possible owing to the regulation of the river's flow by Gibe III. Though the Ethiopian government cites many positive aspects of these agricultural and hydropower developments, serious regional and transnational effects are still expected, including effects on migration flows, in an area already characterized by increasing climatic vulnerability with attendant population movements and conflicts over scarce resources. The following paper is an attempt to track actual and anticipated migration flows resulting from the construction of Gibe III in the immediate vicinity of the dam, downstream in the Lower Omo Valley, and across the border in Kenya around Lake Turkana. In the case of those displaced in the Lower Omo Valley, this will be considered in view of the distinction between voluntary villagization and forced resettlement. The research presented is not primary-source material; instead, it is drawn from the reports and assessments of the Ethiopian government, rights-based groups, and academic researchers, as well as media articles. It is hoped that this will serve to draw greater attention to the issue and encourage further methodological research on the impact of dam constructions (and associated large-scale irrigation schemes) on migration flows and on the ultimate experience of displacement and resettlement for environmental migrants in the region.

Keywords: forced displacement, voluntary resettlement, migration, human rights, human security, land grabs, dams, commercial agriculture, pastoralism, ecosystem modification, natural resource conflict, livelihoods, development

Procedia PDF Downloads 381
2065 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence

Authors: Meghan E. Murray, Carri R. Tolmie

Abstract:

This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility means that companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole, and execute their business practices accordingly. CSR has continuously grown in prevalence over the past few years and is now common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), or the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. It will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm's CSR and corporate performance. The project methodology includes creating and administering a survey composed of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that can be utilized by organizations to help enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
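
As a hedged illustration of the planned moderation analysis, the sketch below fits an ordinary least squares model with a leadership-by-SQ interaction term on simulated data; the variable names and simulated effect sizes are assumptions, not the study's measures or results.

```python
# Illustrative sketch (hypothetical data and variable names): testing SQ as a
# moderator of the transformational leadership -> perceived CSR relationship
# via an interaction term in ordinary least squares.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "leadership": rng.normal(size=n),   # transformational leadership score
    "sq": rng.normal(size=n),           # spiritual intelligence score
})
# Simulated outcome with a small interaction effect, for demonstration only.
df["csr"] = (0.4 * df["leadership"] + 0.3 * df["sq"]
             + 0.2 * df["leadership"] * df["sq"]
             + rng.normal(scale=0.5, size=n))

# "leadership * sq" expands to main effects plus their interaction term.
model = smf.ols("csr ~ leadership * sq", data=df).fit()
print(model.summary())
```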

Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership

Procedia PDF Downloads 127
2064 Effects of Concomitant Use of Metformin and Powdered Moringa Oleifera Leaves on Glucose Tolerance in Sprague-Dawley Rats

Authors: Emielex M. Aguilar, Kristen Angela G. Cruz, Czarina Joie L. Rivera, Francis Dave C. Tan, Gavino Ivan N. Tanodra, Dianne Katrina G. Usana, Mary Grace T. Valentin, Nico Albert S. Vasquez, Edwin Monico C. Wee

Abstract:

The risk of diabetes mellitus is increasing in the Philippines, with Metformin and insulin being the drugs commonly used for its management. The use of herbal medicines has grown markedly, especially among the elderly population. Moringa oleifera, or malunggay, is one of the most common plants in the country, and several studies have shown the plant to exhibit hypoglycemic properties attributed to its flavonoid content. This study aims to investigate the possible effects of concomitant use of Metformin and powdered M. oleifera leaves (PMOL) on blood glucose levels. Twenty male Sprague-Dawley rats were equally distributed into four groups. Fasting blood glucose levels of the rats were measured prior to experimentation. The following treatments were administered to the four groups, respectively: glucose only 2 g/kg; glucose 2 g/kg + Metformin 100 mg/kg; glucose 2 g/kg + PMOL 200 mg/kg; and glucose 2 g/kg + PMOL 200 mg/kg and Metformin 100 mg/kg. Blood glucose levels were determined on the 1st, 2nd, 3rd, and 4th hour post-treatment and compared between groups. Statistical analysis showed that the type of intervention did not have a significant effect on the reduction of blood glucose levels (p=0.378), while the effect of time was significant (p=0.000). The interaction between the type of intervention and the time of blood glucose measurement was significant (p=0.024). Within groups, the control and PMOL-treated groups showed significant reductions in blood glucose levels over time (p=0.000 for both), while the Metformin-treated and combination groups did not (p=0.062 and 0.093, respectively). The descriptive data also showed that the mean total reduction in blood glucose levels of the Metformin and PMOL combination group was lower than that of the PMOL-treated group alone, but higher than that of the Metformin-treated group alone. Based on the results obtained, the combination of Metformin and PMOL did not significantly lower the blood glucose levels of the rats compared to the other groups. However, the concomitant use of Metformin and PMOL may affect each other's blood-glucose-lowering activity. Additionally, a prolonged exposure time and a delay in the first blood glucose measurement after treatment could have a significant effect on blood glucose levels. Further studies are recommended regarding the effects of the concomitant use of the two agents on blood glucose levels.
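
One possible way to test the reported group-by-time interaction is sketched below with a linear mixed model that adds a random intercept per rat; the file name, column names, and choice of model are assumptions, since the abstract does not specify the analysis procedure (a repeated-measures ANOVA would be an equally plausible choice).

```python
# Illustrative sketch of testing a treatment-by-time interaction on repeated
# blood glucose measurements with a linear mixed model (rat as random effect).
# The CSV layout and column names are assumptions, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

# Expected long-format columns: rat_id, group (control / metformin / pmol /
# combination), hour (1-4), glucose (mg/dL).
df = pd.read_csv("glucose_tolerance.csv")  # hypothetical file name

model = smf.mixedlm(
    "glucose ~ C(group) * hour",  # fixed effects: group, time, and interaction
    data=df,
    groups=df["rat_id"],          # random intercept per rat
).fit()
print(model.summary())
```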

Keywords: blood glucose levels, concomitant use, metformin, Moringa oleifera

Procedia PDF Downloads 413
2063 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties

Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana

Abstract:

The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes, and its aberrant activity is crucial in the development of many cancers, such as lung cancer. Selaginellaceae are fern allies that have long been used in Chinese traditional medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites in Selaginellaceae, have numerous pharmacological activities, including anti-cancer and anti-inflammatory effects. For instance, amentoflavone induces a cytotoxic effect in the human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. An in vitro study was conducted to evaluate their cytotoxicity against the A549, HEPG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based assay was performed to investigate their EGFR inhibitory activity using a kinase inhibition assay. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines, with IC50 (µM) values of 18.9 ± 2.1 and 22.7 ± 3.3 on A549, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity, with 84% relative inhibition at 0.02 nM, using erlotinib as a positive control. Robustaflavone-4-methylether showed 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay: robustaflavone-4-methylether and delicaflavone had docking scores of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone appeared to be linked to its C2”=C3” and 3-O-4”’ linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.
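
IC50 values such as those quoted above are typically obtained by fitting a sigmoidal dose-response model to viability data from assays like MTT. The generic four-parameter logistic fit below, run on synthetic placeholder data, illustrates that calculation without reproducing the study's measurements.

```python
# Generic four-parameter logistic (Hill) fit used to derive IC50 values from
# dose-response data such as MTT viability readings. The data points below are
# synthetic placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Viability as a function of concentration (4-parameter logistic)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100], dtype=float)        # uM (hypothetical)
viability = np.array([95, 85, 62, 30, 12], dtype=float)  # % of untreated control

params, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 20, 1])
print(f"fitted IC50 ~ {params[2]:.1f} uM (illustrative)")
```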

Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae

Procedia PDF Downloads 198
2062 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming, and increasing demands on data rates, the number of newly built data centers is rising proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, where the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters such as core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, determining a properly excited optical field inside the MM fiber core is one of the key steps in designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL), and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for various optical sources and the consequently different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, involving different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shifting, or dust, were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
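
Encircled flux, the launch-condition metric central to the abstract above, can be computed from a near-field intensity image as the fraction of total power within a given radius of the core centroid. The sketch below uses a synthetic Gaussian profile rather than measured data, and the quoted radii are given only as commonly cited examples from EF launch-condition standards.

```python
# Minimal sketch of computing encircled flux (EF) from a near-field intensity
# image: the fraction of total optical power within a given radius of the core
# centroid. The Gaussian test profile is synthetic, not measured data.
import numpy as np

def encircled_flux(intensity, pixel_um, radius_um):
    """Fraction of total power inside radius_um around the intensity centroid."""
    y, x = np.indices(intensity.shape)
    total = intensity.sum()
    cx = (x * intensity).sum() / total     # centroid column, in pixels
    cy = (y * intensity).sum() / total     # centroid row, in pixels
    r = np.hypot(x - cx, y - cy) * pixel_um
    return intensity[r <= radius_um].sum() / total

# Synthetic near-field pattern for a 50 um core imaged at 1 um per pixel.
yy, xx = np.indices((101, 101))
profile = np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2 * 15.0 ** 2))

# EF launch-condition standards (e.g., IEC 61280-4-1) tabulate limits at
# specific radii, commonly cited as 4.5 um and 19 um for 50 um fiber at 850 nm;
# here we simply evaluate the metric itself.
for radius in (4.5, 19.0):
    print(f"EF within {radius:4.1f} um: {encircled_flux(profile, 1.0, radius):.3f}")
```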

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 375
2061 Evaluation of the Physico-Chemical and Microbial Properties of the Compost Leachate (CL) to Assess Its Role in the Bioremediation of Polyaromatic Hydrocarbons (PAHs)

Authors: Omaima A. Sharaf, Tarek A. Moussa, Said M. Badr El-Din, H. Moawad

Abstract:

Background: Polycyclic aromatic hydrocarbons (PAHs) pose great environmental and human health concerns because of their widespread occurrence, persistence, and carcinogenic properties. Releases of PAHs to the wider environment due to anthropogenic activities have led to higher concentrations of these contaminants than would be expected from natural processes alone. This may result in a wide range of environmental problems that can accumulate in agricultural ecosystems and threaten sustainable agricultural development. Thus, this study aimed to evaluate the physico-chemical and microbial properties of compost leachate (CL) to assess its role as a nutrient and microbial source (biostimulation/bioaugmentation) for developing a cost-effective bioremediation technology for PAH-contaminated sites. Material and Methods: PAH-degrading bacteria were isolated from CL collected from a composting site located in central Scotland, UK. Isolation was carried out by enrichment using phenanthrene (PHR), pyrene (PYR), and benzo(a)pyrene (BaP) as the sole sources of carbon and energy. The isolates were characterized using a variety of phenotypic and molecular properties. Six different isolates were identified based on differences in morphological and biochemical tests. The efficiency of these isolates in PAH utilization was assessed. Further analysis was performed to define the taxonomic status of, and the phylogenetic relationships between, the most potent PAH-utilizing bacterial strains and other standard strains, using a molecular approach based on partial 16S rDNA gene sequence analysis. Results indicated that the 16S rDNA sequence analysis confirmed the biochemical identification, as both biochemical and molecular identification assigned the isolates to Bacillus licheniformis, Pseudomonas aeruginosa, Alcaligenes faecalis, Serratia marcescens, Enterobacter cloacae, and Providencia, which were identified as the prominent PAH-utilizers isolated from CL. Conclusion: This study indicates that the CL samples contain a diverse population of PAH-degrading bacteria and that CL may have potential for the bioremediation of PAH-contaminated sites.

Keywords: polycyclic aromatic hydrocarbons, physico-chemical analyses, compost leachate, microbial and biochemical analyses, phylogenic relations, 16S rDNA sequence analysis

Procedia PDF Downloads 263
2060 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources

Authors: Alana E. Jansen

Abstract:

With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, findings on its effects have been conflicting, particularly when composition is examined alongside other team factors. To broaden the theoretical perspectives on this topic and potentially explain some of the inconsistencies left open by various models of team effectiveness and high-performing teams, the present study aims to use the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. The study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed that combined Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 completed interviews. Analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources that vary between high- and low-performing teams, factors that may play an important role in team effectiveness research going forward. Findings may assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding the intricacies of why only some teams achieve high-performance status.

Keywords: diversity, high-performing teams, job demands and resources, team effectiveness

Procedia PDF Downloads 187
2059 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin

Authors: Mohamed A. Saleem

Abstract:

Using mostly seismic data, this study intends to show some examples of igneous intrusions found in parts of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35° E and 19.35° E and latitudes 27.8° N and 28.0° N. Based on a variety of criteria that are usually used as marks of igneous intrusions, twelve igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high-amplitude reflectors paired with abrupt reflector terminations, vertical offsets or what is described as a dike-like connection, the violation, the saucer shape, and roughness. Because they lie between the host layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry between some of these sills. Each sill has been given a name, such as S-1, S-2, ..., S-12, simply to distinguish the sills from one another. To avoid repetition, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characters that are not common and have been noticed for each sill are described individually. The sills S-1, S-2, and S-3 are approximately parallel to one another, and their shape is governed by the syncline structure of their host layers. The faults that dominate the strata (pre-Upper Cretaceous strata) have a significant impact on the sills and cause their discontinuity, while the upper layers have the shape of anticlines. S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence. The dramatic climb of sill S-4 can be seen in N-S profiles. The majority of the interpreted sills are influenced by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences were deposited before the sills intruded and that the faults occurred more recently. The pre-Upper Cretaceous unit is the current host for sills S-1 to S-9, while sills S-10, S-11, and S-12 are hosted by the Upper Cretaceous unit. Over the deepest sills, S-1, S-2, and S-3, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively, while the absence of these marks in the overlying sequences supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.

Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data

Procedia PDF Downloads 113