Search results for: monitoring and modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6742

382 Socio-Economic and Psychological Factors of Moscow Population Deviant Behavior: Sociological and Statistical Research

Authors: V. Bezverbny

Abstract:

The relevance of the project stems from the steadily rising statistics on deviant behavior among Moscow citizens. In recent years the socio-economic well-being, wealth and life expectancy of Moscow residents have been growing steadily, yet crime and drug addiction have also increased markedly. Another serious problem for Moscow is the economic stratification of the population: the cost of otherwise comparable residential areas differs by a factor of 2.5. The project is aimed at comprehensive research and at developing a methodology for evaluating the main factors and causes of the growth of deviant behavior in Moscow. The main objective is to identify the links between the quality of the urban environment and the dynamics of citizens' deviant behavior at the regional and municipal levels, using statistical research methods and GIS modeling. The research conducted made it possible: 1) to evaluate the dynamics of deviant behavior across Moscow's administrative districts; 2) to describe the reasons for the increase in crime, drug addiction, alcoholism and suicide among the city population; 3) to develop a classification of city districts based on crime rate; 4) to create a statistical database containing the main indicators of deviant behavior among the Moscow population in 2010-2015, including crime level, alcoholism, drug addiction and suicides; 5) to present statistical indicators characterizing the dynamics of deviant behavior as the city territory expands; 6) to analyze the main sociological theories and factors of deviant behavior in order to specify the types of deviation; 7) to review the main theoretical positions of urban sociology devoted to the causes of deviant behavior in megalopolis conditions. To explore how the factors of deviant behavior differ across the city, a questionnaire was developed and a sociological survey involving more than 1,000 people from different districts was conducted. The survey made it possible to study the socio-economic and psychological factors of deviant behavior. It also included open-ended answers from Moscow residents about the most pressing problems in their districts and the reasons they might wish to leave. The survey results lead to the conclusion that the main factors of deviant behavior in Moscow are a high level of social inequality, a large number of illegal migrants and homeless people, the proximity of large transport hubs and stations, ineffective police work, the availability of alcohol and drugs, a low level of psychological comfort for Moscow citizens, and a large number of construction projects.

Keywords: deviant behavior, megapolis, Moscow, urban environment, social stratification

Procedia PDF Downloads 181
381 Enhanced Functional Production of a Crucial Biomolecule Human Serum Albumin in Escherichia coli

Authors: Ashima Sharma

Abstract:

Human Serum Albumin (HSA), one of the most demanded therapeutic proteins with immense biotechnological applications, is a large multidomain protein containing 17 disulfide bonds. The current source of HSA is human blood plasma, which is a limited and unsafe source. Thus, there is an indispensable need to promote non-animal-derived recombinant HSA (rHSA) production. Escherichia coli is one of the most convenient hosts and has contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. It grows rapidly and reaches high cell density using inexpensive and simple substrates. E. coli-derived recombinant products have greater economic potential because fermentation processes are cheaper compared to other expression hosts. The major bottleneck in exploiting E. coli as a host for a disulfide-rich multidomain protein is the formation of aggregates of the overexpressed protein. The majority of the expressed HSA forms inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is difficult to obtain a large, disulfide-bond-rich multidomain protein like rHSA in its functional native form; purification is tedious, time-consuming, laborious and expensive. Because of such limitations, the E. coli host system was neglected for rHSA production over the past few decades despite its numerous advantages. In the present work, we have exploited the capabilities of E. coli as a host for the enhanced functional production of rHSA (~60% of the total expressed rHSA in the soluble fraction). Parameters such as the intracellular environment, temperature, induction type, duration of induction and cell lysis conditions, which play an important role in enhancing the level of production of the desired protein in its native form in vivo, have been optimized. We have studied the effect of different exogenously employed chaperone systems on the functional expression of rHSA in the E. coli host system. Different aspects of cell growth during the production of rHSA in the presence and absence of molecular chaperones in E. coli have also been studied. Having overcome the difficulties of producing functional rHSA in E. coli, it has been possible to produce significant levels of functional protein by engineering the cell's protein-folding machinery, and the E. coli-derived rHSA has been purified to homogeneity. Its detailed physicochemical characterization has been performed by monitoring its conformational properties, secondary and tertiary structure elements, surface properties, ligand-binding properties and stability. These parameters of the recombinant protein have been compared with those of the naturally occurring protein from the human source, and the comparison reveals that the recombinant protein is essentially identical to the natural one. Hence, we propose that E. coli-derived rHSA is an ideal biosimilar for human blood plasma-derived serum albumin. Therefore, in the present study, we have introduced and promoted E. coli-derived rHSA as an alternative to the preparation from a human source, pHSA.

Keywords: recombinant human serum albumin, Escherichia coli, biosimilar, chaperone assisted protein folding

Procedia PDF Downloads 196
380 Implementation of a Distant Learning Physician Assistant Program in Northern Michigan to Address Health Care Provider Shortage: Importance of Evaluation

Authors: Theresa Bacon-Baguley, Martina Reinhold

Abstract:

Introduction: The purpose of this paper is to discuss the importance of both formative and summative evaluation of a Physician Assistant (PA) program with a distant campus delivered through Interactive Television (ITV), in order to assure equity of educational experiences. Methodology: A needs assessment utilizing a case-control design determined the need for, and interest in, expanding the existing PA program to northern Michigan. A federal grant was written and funded, which supported the hiring of two full-time faculty members and support staff at the distant site. The strengths and weaknesses of delivering a program through ITV were evaluated using weekly formative evaluation and bi-semester summative evaluation. Formative evaluation involved discussion of lecture content to be delivered, special ITV needs, orientation of new lecturers to the system, student concerns, support staff updates, and scheduling of student/faculty travel between the two campuses. The summative evaluation, designed from a literature review of barriers to ITV, included 19 statements evaluating the following items: quality of technology (audio, video, etc.), confidence in the ITV system, quality of instruction and instructor interaction between the two locations, and availability of resources at each location. In addition, students were given the opportunity to write qualitative remarks for each course delivered between the two locations. This summative evaluation was given to all students at mid-semester and at the end of the semester. The goal of the summative evaluation was to have 80% or more of the students respond favorably ('Very Good' or 'Good') to each of the 19 statements. Results: Prior to the start of the first cohort at the distant campus, the technology was tested. During this period, the formative evaluations identified key components needing modification, which were rapidly addressed: the ability to record lectures, lighting, sound, and content delivery. When the mid-semester summative survey was given to the first cohort of students, 18 of the 19 statements met the goal of 80% or greater in the favorable category. When the summative evaluation statements were stratified by the two cohorts, the evaluation identified that students at the home location felt they did not have adequate access to printers, and students at the expansion location felt they did not have adequate access to library resources. These results allowed the program to address the deficiencies by contacting information technology for additional printers and by showing students how to access library resources. Conclusion: Successful expansion of programs to a distant site utilizing ITV technology requires extensive monitoring using both formative and summative evaluation. The formative evaluation allowed quick identification of issues that could be addressed immediately, both at the planning and development stage and during implementation. Through use of the summative evaluation, the program is able to monitor the success and effectiveness of the expansion and identify the specific needs of students at each location.

Keywords: assessment, distance learning, formative feedback, interactive television (ITV), student experience, summative feedback, support

Procedia PDF Downloads 231
379 The Effect of Post Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full Term Parturient Undergoing Lower Segment Caesarean Section: A Prospective Observational Study

Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.

Abstract:

Introduction: Spinal anesthesia is considered the standard anesthesia technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, although cerebral autoregulatory mechanisms normally prevent cerebral hypoxia: cerebral blood flow remains constant over the 50-150 mmHg range of Cerebral Perfusion Pressure (CPP). Near-infrared spectroscopy (NIRS) is a non-invasive technology that detects Cerebral Desaturation Events (CDEs) immediately, in contrast to other conventional intraoperative monitoring techniques. Objective: The primary aim of the study was to correlate the change in cerebral oxygen saturation measured with NIRS with the fall in mean blood pressure after spinal anaesthesia, and to determine the effects of spinal hypotension on the neonatal APGAR score, neonatal acid-base status, and the presence of Postoperative Delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation over the right and left frontal regions and of mean blood pressure were noted. The subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist. IV crystalloid co-loading was given. Blood pressure and cerebral saturation were recorded every minute for 30 minutes. Hypotension was defined as a fall in MAP to less than 80% of the baseline value. Patients developing hypotension were treated with an IV bolus of phenylephrine or ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was assessed by a neonatologist. Study design: A prospective observational study conducted in thirty ASA 2 and 3 parturients scheduled for lower segment caesarean section (LSCS). Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7% against a mean fall in blood pressure of 38.92 ± 8.44 mmHg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after the subarachnoid block was 0.057 (p = 0.7). The fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine of thirty patients required vasopressors during hypotension; the first dose of vasopressor was needed 6.02 ± 2 min after the block. The mean APGAR scores were 7.86 and 9.74 at 1 and 5 min after birth, respectively, and the mean umbilical arterial pH was 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: Regional cerebral oxygen saturation began to fall before the significant fall in mean blood pressure, but the correlation was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurred within 10 min of the subarachnoid block. Neonatal APGAR scores and acid-base values were in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension.
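
The correlation reported above can be reproduced in outline with a short script. The sketch below uses invented per-patient values (not the study's data) simply to show how the Pearson correlation coefficient and p-value between the fall in cerebral saturation and the fall in MAP would be computed.

```python
# Illustrative sketch only: correlating the fall in regional cerebral saturation
# with the fall in mean arterial pressure (MAP) after the subarachnoid block.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-patient values for 30 parturients (means/SDs echo the abstract)
map_fall = rng.normal(38.9, 8.4, 30)       # fall in MAP (mmHg)
rsco2_fall = rng.normal(28.5, 14.7, 30)    # fall in regional cerebral saturation (%)

r, p = stats.pearsonr(rsco2_fall, map_fall)
print(f"correlation coefficient r = {r:.3f}, p-value = {p:.2f}")
```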

Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension

Procedia PDF Downloads 59
378 Performance Analysis of Double Gate FinFET at Sub-10 nm Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. When a device is scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and the FinFET is one of the most promising. The double-gate 2D Fin field-effect transistor in particular suppresses short-channel effects (SCE) and performs well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node, and the performance parameters are extracted in terms of threshold voltage, transconductance, leakage current and current on-off ratio. In this paper, device performance is analyzed for different structural parameters. The Id-Vg curve is a robust tool of central importance for understanding field-effect transistors, underpinning transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel as feature sizes decrease. For every set of simulations, the device characteristics are simulated at two drain voltages, 50 mV and 0.7 V. In low-power and precision applications the off-state current is a significant factor, so it is crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm for a gate length of 10 nm, while source and drain length have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and 2380 S/m with a source and drain extension length of 5 nm.
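
To make the extraction step concrete, the sketch below computes the current on-off ratio and the transconductance gm = dId/dVg from a transfer sweep. The synthetic Id-Vg curve only mimics the general shape of a FinFET transfer characteristic; it is not MuGFET output, and the threshold voltage and subthreshold swing used to build it are assumptions.

```python
# Hedged sketch: extracting Ion/Ioff and peak transconductance from an Id-Vg sweep.
import numpy as np

vg = np.linspace(0.0, 0.7, 141)                   # gate voltage sweep (V)
vt, ss = 0.25, 0.065                              # assumed threshold (V) and subthreshold swing (V/dec)
id_sub = 1e-9 * 10 ** ((vg - vt) / ss)            # subthreshold exponential branch (A/um)
id_on = 4e-4 * np.clip(vg - vt, 0, None) ** 2     # above-threshold quadratic branch (A/um)
id_ = np.minimum(id_sub, 1e-6) + id_on            # crude composite transfer curve

on_off_ratio = id_.max() / id_.min()
gm = np.gradient(id_, vg)                         # transconductance (S/um)
print(f"Ion/Ioff ~ {on_off_ratio:.2e}, peak gm ~ {gm.max():.3e} S/um")
```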

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 52
377 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

Biogeochemical studies of macrophytes shed light on element bioavailability, transfer through food webs and possible effects on the biota, and provide a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, the submerged Ceratophyllum demersum (hornwort) and the free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the stock of elements accumulated in each plant, and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas of Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from 1x1 meter squares where the species coexisted. European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on one square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the content of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn was determined by atomic absorption spectrometry (AAS). Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation Factors (BCF = content in plant/concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit. Based on BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. Some of the Ca and K was actively transported to the stems, but only Mg was transported to the leaves. Although the biomass of C. demersum was twice the biomass of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, despite a relatively small biomass compared to other macrophytes, both species may influence the removal of trace elements from aquatic ecosystems and, since they serve as food for some animals, the incorporation of toxic elements into food chains. There was a significant positive correlation between the content of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). High bioaccumulation rates and the correlation between element concentrations in plants and water point to their possible use as passive biomonitors of aquatic pollution.
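
The two quantities at the heart of the analysis, the Bioaccumulation Factor and the plant-water rank correlation, are simple to compute. The sketch below uses invented Mn concentrations (not the study's measurements) to show the calculation.

```python
# Minimal sketch with hypothetical data: BCF = content in plant / concentration in water,
# plus the Spearman rank correlation between water and plant concentrations.
import numpy as np
from scipy import stats

water_mn = np.array([0.05, 0.12, 0.08, 0.20, 0.15, 0.09])   # mg/L in water, per site
root_mn = np.array([35.0, 80.0, 52.0, 140.0, 96.0, 60.0])   # mg/kg dry weight in roots

bcf = root_mn / water_mn                  # L/kg
rho, p = stats.spearmanr(water_mn, root_mn)
print("BCF per site:", np.round(bcf, 1))
print(f"Spearman R = {rho:.2f}, p = {p:.3f}")
```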

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 166
376 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to losses of human life and drilling facilities. Current well-control practice incorporates predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid and formation gas that is miscible in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure-loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, enabling realistic kick treatment; a corresponding numerical solution method is built with an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure would be underestimated by 4.2% while the bottom-hole temperature would be overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure would be overestimated by 11.4% under steady flow conditions. In addition, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has a minor effect on the steady wellbore temperature. Also, as choke pressure increases, less gas exists in the annulus in the form of free gas.
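
The sensitivity of bottom-hole pressure to free gas in the annulus can be illustrated with a much simpler calculation than the CFD model above. The sketch below is not the STAR-CCM+ model; it is a crude hydrostatic mixture-density estimate with entirely hypothetical fluid properties, included only to show the direction of the effect.

```python
# Illustrative sketch: how a uniform free-gas fraction lowers the hydrostatic
# bottom-hole pressure through the mixture density. All values are assumptions.
g = 9.81            # m/s^2
rho_mud = 1300.0    # kg/m^3, drilling fluid density (assumed)
rho_gas = 150.0     # kg/m^3, crude downhole gas density (assumed)

def bottom_hole_pressure(gas_fraction, depth=3000.0):
    """Hydrostatic pressure (bar) at total depth for a uniform gas volume fraction."""
    rho_mix = gas_fraction * rho_gas + (1.0 - gas_fraction) * rho_mud
    return rho_mix * g * depth / 1e5

p_no_gas = bottom_hole_pressure(0.00)
p_with_gas = bottom_hole_pressure(0.05)   # 5% free gas in the annulus
print(f"BHP without gas: {p_no_gas:.1f} bar, with 5% gas: {p_with_gas:.1f} bar "
      f"({100 * (p_no_gas - p_with_gas) / p_no_gas:.1f}% lower)")
```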

Keywords: multiphase flow, well- control, STARCCM+, petroleum engineering and gas technology, computational fluid dynamic

Procedia PDF Downloads 104
375 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers during Construction of a Tunnel Shaft

Authors: Saurabh Sharma

Abstract:

Introduction: This case study deals with the ground stabilisation at Nabi Karim Metro Station in Delhi, India, where extremely complex geology was encountered while excavating the tunnelling shaft for launching the Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of only 3-4 m, and accordingly the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. It was therefore planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a 1.5 m berm to prevent the secant piles from being exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, with increasing depth of excavation the rock quality deteriorated at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. Such worsening of the geology, from high-grade rock to a slushy conglomerate formation, could not have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut vertically to keep to the shaft size, execution continued with enhanced caution to stabilise the side slopes. However, when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected and surprising, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter and depth of the rock bolts had already been readjusted to accommodate the rock fractures. The above scenario baffled even the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: the excavation would be carried out while maintaining a slope based on the type of soil/rock; the rock bolt type was changed from SN rock bolts to self-drilling anchors; the grid size of the bolts was changed based on real-time assessment; the excavation was carried out using a 'bench release approach'; and an aggressive real-time instrumentation scheme was implemented. Discussion: The case study again underlines the vital importance of correct interpretation of the geological strata and the need for real-time revision of construction schemes based on actual site data. The excavation is being completed successfully with the revised scheme, and further details of the revised slope stabilisation scheme, instrumentation schemes and monitoring results, along with site photographs, will form part of the final paper.

Keywords: unconfined compressive strength (ucs), rock mass rating (rmr), rock bolts, self drilling anchors, face mapping of rock, secant pile, shotcrete

Procedia PDF Downloads 58
374 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents

Authors: A. Kesraoui, M. Seffen

Abstract:

Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but it has a major drawback: the high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and the retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. Evaluating the potential use of these materials is important in order to propose them as an alternative to the generally expensive adsorption processes used to remove organic compounds; indeed, these materials are very abundant in nature and are low cost. The biosorption process is certainly effective in removing pollutants, but its kinetics are slow. Improving biosorption rates is a challenge in making this process competitive with oxidation and with adsorption onto lignocellulosic fibers. In this context, alternating current appears as a new, original and very interesting alternative for accelerating chemical reactions. Our main goal is to accelerate the retention of dyes (indigo carmine, methylene blue) and phenol by using this new alternative: alternating current. The adsorption experiments were performed in a batch reactor by adding some of the adsorbent to 150 mL of pollutant solution at the desired concentration and pH. The electrical part of the set-up comprises a current source delivering an alternating voltage of 2 to 15 V, connected to a voltmeter that allows the voltage to be read. Two zinc electrodes, 4 cm apart, were immersed in a 150 mL cell. Thanks to alternating current, we succeeded in improving the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. We also studied the influence of alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and onto the hybrid material Luffa cylindrica-ZnO. The results showed that alternating current accelerated the biosorption of methylene blue onto both Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the amount of methylene blue adsorbed on both adsorbents. In order to improve the removal of phenol, we coupled alternating current with biosorption onto the two adsorbents, Luffa cylindrica and the hybrid material Luffa cylindrica-ZnO. In fact, alternating current improved the performance of the adsorbents by increasing the speed of the adsorption process and the adsorption capacity, and by reducing the processing time.
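
One common way to quantify such an acceleration is to fit a kinetic model to the uptake curves measured with and without the alternating current and compare the rate constants. The sketch below assumes a pseudo-first-order model, q(t) = qe(1 − exp(−k1·t)); the model choice and every data point are assumptions made for illustration, not values from this work.

```python
# Hedged sketch: fitting a pseudo-first-order kinetic model to batch adsorption data
# to compare the rate constant k1 with and without alternating current (AC).
import numpy as np
from scipy.optimize import curve_fit

def pfo(t, qe, k1):
    """Pseudo-first-order uptake (mg/g) at time t (min)."""
    return qe * (1.0 - np.exp(-k1 * t))

t = np.array([0, 5, 10, 20, 40, 60, 90, 120], dtype=float)           # min
q_without_ac = np.array([0, 3.1, 5.6, 9.2, 13.0, 14.9, 16.1, 16.5])  # mg/g (invented)
q_with_ac = np.array([0, 6.0, 9.8, 13.8, 16.0, 16.6, 16.8, 16.9])    # mg/g (invented)

for label, q in [("without AC", q_without_ac), ("with AC", q_with_ac)]:
    (qe, k1), _ = curve_fit(pfo, t, q, p0=(16.0, 0.05))
    print(f"{label}: qe = {qe:.1f} mg/g, k1 = {k1:.3f} 1/min")
```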

Keywords: adsorption, alternating current, dyes, modeling

Procedia PDF Downloads 142
373 Using the Structural Equation Model to Explain the Effect of Supervisory Practices on Regulatory Density

Authors: Jill Round

Abstract:

In the economic system, the financial sector plays a crucial role as an intermediary between market participants, other financial institutions, and customers. Financial institutions such as banks have to make decisions that satisfy the demands of all participants while keeping abreast of regulatory change. In recent years, progress has been made on frameworks and on the development of rules, standards and processes to manage risks in the banking sector. The increasing focus of regulators and policymakers on risk management, corporate governance and organizational culture is of special interest, as it requires well-resourced risk controlling, compliance and internal audit functions. In recent years, the relevance of these functions, which make up the so-called Three Lines of Defense, has moved from the back room to the boardroom. The approach of the model can vary with organizational characteristics: owing to intense regulatory requirements, organizations operating in the financial sector have more mature models, whereas in less regulated industries there is more uncertainty about where tasks are allocated. All parties strive to achieve their objectives through the effective management of risks and serve the same stakeholders. Today, the Three Lines of Defense model is used throughout the world. This research looks at trends and emerging issues in the professions of the Three Lines of Defense within the banking sector, and the answers are expected to help explain the increasing regulatory requirements for the banking sector. As the number of supervisory practices increases, risk management requirements intensify and demand more regulatory compliance at the same time. Structural Equation Modeling (SEM) is applied to surveys conducted in the research field. It aims to describe (i) the theoretical model with its assumed linear relationships, (ii) the causal relationships between multiple predictors (exogenous variables) and multiple dependent (endogenous) variables, (iii) the unobservable latent variables, and (iv) the measurement errors. The surveys conducted in the research field suggest that the observable variables are caused by various latent variables. The SEM consists of (1) the measurement model and (2) the structural model. There is a detectable correlation, in the sense of a cause-effect relationship, between the supervisory practices performed and the increasing scope of regulation: supervisory practices reinforce regulatory density. In the past, controls were put in place after supervisory practices were conducted or after incidents occurred. In further research, it is of interest to examine whether risk management is proactive, reactive to incidents and supervisory practices, or both at the same time.
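
As a rough illustration of how such a measurement-plus-structural model can be specified and estimated, the sketch below uses the Python package semopy with a lavaan-style syntax. The latent constructs, the item names sp1..rd3, and the file survey_responses.csv are hypothetical placeholders, not the study's questionnaire or data.

```python
# Sketch under stated assumptions: SEM with a measurement model and a structural
# path from supervisory practices to regulatory density, fitted with semopy.
import pandas as pd
from semopy import Model

desc = """
SupervisoryPractice =~ sp1 + sp2 + sp3
RegulatoryDensity   =~ rd1 + rd2 + rd3
RegulatoryDensity ~ SupervisoryPractice
"""

survey = pd.read_csv("survey_responses.csv")   # hypothetical survey data file
model = Model(desc)
model.fit(survey)
print(model.inspect())    # factor loadings, the structural path coefficient, error variances
```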

Keywords: risk management, structural equation model, supervisory practice, three lines of defense

Procedia PDF Downloads 205
372 E-Governance: A Key for Improved Public Service Delivery

Authors: Ayesha Akbar

Abstract:

Public service delivery has witnessed significant improvement with the integration of information and communication technology (ICT). ICT not only improves management structures with advanced technology for the surveillance of service delivery, but also provides evidence for informed decisions and policy. Pakistan's public sector organizations, on the whole, have not been able to produce good results in ensuring service delivery. Nevertheless, some public sector organizations in Pakistan have adopted modern technology and proved their credibility by providing better service delivery standards. These good indicators provide a sound basis for integrating technology in public sector organizations and for shifting policy towards evidence-based policy making. Rescue-1122 is a public sector organization that provides emergency services and has proved to be a successful model for service delivery that saves human lives and supports human development in Pakistan. Information about the organization was gathered using a qualitative research methodology, based broadly on primary and secondary sources, including the Rescue-1122 website; official reports of organizations such as the UNDP (United Nations Development Programme) and the WHO (World Health Organization); and 10 in-depth interviews with senior administrative staff working in the Lahore offices. The information obtained has been incorporated into the study for a better understanding of the organization and its management procedures. Rescue-1122 represents a successful model of delivering services efficiently for disaster management. The management of Rescue has strategized its policies and procedures so as to develop a comprehensive model with the integration of technology. This model provides efficient service delivery while maintaining the standards of the organization. The service delivery model of Rescue-1122 works on two fronts: the front-office interface and the back-office interface. The back office defines operating procedures and assures staff compliance, whereas the front office, equipped with the latest technology and good infrastructure, handles the emergency calls. Both ends are integrated with satellite-based vehicle tracking, a wireless system, a fleet monitoring system and IP cameras, which monitor every move of the staff to provide better services and to pinpoint distortions in the services. The standard time for reaching the emergency spot is 7 minutes, and while a case is being handled, the driver's behavior, the traffic volume and the technical assistance being provided are monitored by the front office. The whole information is then uploaded from the provincial offices to the main dashboard at the Lahore headquarters. The latest technology is being used by Rescue-1122 to deliver efficient services, investigate flaws where found, and generate data for informed decision making. Other public sector organizations in Pakistan can also develop such models to integrate technology for improving service delivery and to develop evidence for informed decisions and policy making.

Keywords: data, e-governance, evidence, policy

Procedia PDF Downloads 228
371 Co-management Organizations: A Way to Facilitate Sustainable Management of the Sundarbans Mangrove Forests of Bangladesh

Authors: Md. Wasiul Islam, Md. Jamius Shams Sowrov

Abstract:

The Sundarbans is the largest single tract of mangrove forest in the world, located in the southwest corner of Bangladesh. It is a unique ecosystem and a great breeding and nursing ground for rich biodiversity. It supports the livelihoods of about 3.5 million coastal dwellers and also protects the coastal belt and inland areas from various natural calamities. Historically, the management of the Sundarbans was controlled by the Bangladesh Forest Department following a top-down approach without the involvement of local communities. Such a fencing-and-fining, blueprint-based approach was not effective in protecting the forest and caused the Sundarbans to degrade severely in the recent past: fifty percent of the total tree cover has been lost in the last 30 years. Therefore, a local multi-stakeholder, bottom-up co-management approach was introduced in parts of the Sundarbans in 2006 to improve biodiversity by raising the level of protection of the forest. Various co-management organizations were introduced under this approach, through which local community members could become actively involved in activities related to the management and welfare of the Sundarbans, including the decision-making process. Against this backdrop, the objective of the study was to assess the performance of co-management organizations in facilitating sustainable management of the Sundarbans mangrove forests. The qualitative study used face-to-face interviews to collect data with two sets of semi-structured questionnaires. A total of 40 respondents from eight villages under two forest ranges participated in the research: 32 representatives of the local communities and 8 official representatives involved in the co-management approach, recruited using a snowball sampling technique. The study shows that the co-management approach improved the governance of the Sundarbans through the active participation of local community members and their interactions with officials via the platform of the co-management organizations. It facilitated accountability and transparency to some extent through formal and informal rules and regulations. It also improved the power structure of the management process by fostering local empowerment, particularly of women. Moreover, people were able to learn from their interactions with and within the co-management organizations, and the interventions improved environmental awareness and promoted social learning. The respondents considered good governance the most important factor for achieving sustainable management and biodiversity conservation of the Sundarbans. The success of the co-management planning process also depends on the active and functional participation of different stakeholders, including the local communities, for which the co-management organizations were considered the most functional platform. However, the governance system also faced various challenges that act as barriers to the sustainable management of the Sundarbans mangrove forest, and some members were still involved in illegal forest operations, creating obstacles to sustainable management. Respondents recommended greater patronage from the government and financial and logistic incentives for alternative income-generating opportunities, together with an effective participatory monitoring and evaluation system, to improve the sustainable management of the Sundarbans.

Keywords: Bangladesh, co-management approach, co-management organizations, governance, Sundarbans, sustainable management

Procedia PDF Downloads 159
370 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team

Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon

Abstract:

Australian Rules Football (ARF) is an invasion-based, contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive trauma such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviours. Risk propensity and the incidence of injury are an unexplored area of research: little research has considered whether the level of a player's (especially a younger player's) risk propensity in everyday life places them at an increased risk of injury. Hence the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper body and lower body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code was used to ensure player anonymity, which also enabled linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and the total number of injuries for each injury type. Results: Seventy-one players (N=71), with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years), participated in the study. Players' mean risk propensity score was 32.73, SD ±8.38. Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was found. Assessing players' risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and of symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early twenties and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.
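
In outline, the analysis regresses an injury count on the risk propensity score. The sketch below uses statsmodels OLS in the spirit of the general linear model described above; the 71 player records are simulated, not the WAFL data, and the effect size built into the simulation is an arbitrary assumption.

```python
# Minimal sketch with invented data: number of reported head knocks regressed
# on pre-season risk propensity score (n = 71 players, as in the abstract).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
risk_propensity = rng.normal(32.7, 8.4, 71)                        # scale scores
head_knocks = rng.poisson(1.5 + 0.04 * (risk_propensity - 32.7))   # simulated counts

X = sm.add_constant(risk_propensity)
fit = sm.OLS(head_knocks, X).fit()
print(fit.summary())    # slope, F statistic and p-value for the risk propensity term
```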

Keywords: football, head injuries, injury identification, risk

Procedia PDF Downloads 321
369 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost to product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, which is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD goes through intense reliability-margin investigation with a focus on assembly process attributes, process equipment control and in-process metrology, while also taking the forward-looking product roadmap into account. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for the design validation process, the reliability prediction, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and by containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis, and the results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, it is subjected to the monitor phase, in which the Design for Assembly (DFA) rules are updated. At this stage, the design changes and the process and equipment parameters are under control. Predictable product reliability early in product development enables on-time delivery of qualification samples to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows the focus to be placed on increasing the product margin, which increases customer confidence in product reliability.
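
A common way to relate a Temperature Cycle Test to field conditions for solder-joint reliability is a Norris-Landzberg-style acceleration factor. The sketch below is not taken from this paper: the exponents, activation energy and cycle profiles are typical textbook values and should be treated as assumptions to be replaced by the product's own characterization data.

```python
# Hedged sketch: Norris-Landzberg-style acceleration factor for solder-joint
# thermal cycling (test vs. field). All constants below are generic assumptions.
import math

def norris_landzberg_af(dT_test, dT_field, f_test, f_field, Tmax_test, Tmax_field,
                        n=1.9, m=1.0 / 3.0, Ea=0.122, k=8.617e-5):
    """Acceleration factor of the test relative to field use (temperatures in K)."""
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * math.exp(Ea / k * (1.0 / Tmax_field - 1.0 / Tmax_test)))

# Example: -40/125 C cycling at 48 cycles/day vs. a 0/60 C field profile at 6 cycles/day
af = norris_landzberg_af(dT_test=165, dT_field=60, f_test=48, f_field=6,
                         Tmax_test=398.15, Tmax_field=333.15)
print(f"estimated acceleration factor ~ {af:.0f}x")
```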

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 160
368 Exploring Tweeters’ Concerns and Opinions about FIFA Arab Cup 2021: An Investigation Study

Authors: Md. Rafiul Biswas, Uzair Shah, Mohammad Alkayal, Zubair Shah, Othman Althawadi, Kamila Swart

Abstract:

Background: Social media platforms play a significant role in the mediated consumption of sport, especially for sport mega-events. The characteristics of Twitter data (e.g., user mentions, retweets, likes, #hashtags) bring users together in one forum and spread information widely and quickly. Analysis of Twitter data can reflect public attitudes, behavior, and sentiment toward a specific event on a larger scale than traditional surveys. Qatar will be the first Arab country to host the mega sports event FIFA World Cup 2022 (Q22), and it hosted the FIFA Arab Cup 2021 (FAC21) as a preparation for the mega-event. Objectives: This study investigates public sentiments and experiences related to FAC21 and provides insights for enhancing the public experience at the upcoming Q22. Method: FAC21-related tweets were downloaded using the Twitter Academic research API between 01 October 2021 and 18 February 2022. Tweets were divided into three periods: before FAC21, T1 (01 Oct 2021 to 29 Nov 2021); during FAC21, T2 (30 Nov 2021 to 18 Dec 2021); and after FAC21, T3 (19 Dec 2021 to 18 Feb 2022). The collected tweets were preprocessed in several steps to prepare them for analysis: (1) duplicates and retweets were removed, (2) emojis, punctuation, and stop words were removed, and (3) tweets were normalized using word lemmatization. Then, rule-based classification was applied to remove irrelevant tweets. Next, the twitter-XLM-roBERTa-base model from Hugging Face was applied to identify the sentiment of the tweets. Further, state-of-the-art BERTopic modeling will be applied to identify trending topics over the different periods. Results: We downloaded 8,669,875 tweets posted by 2,728,220 unique users in different languages. Of those, 819,813 unique English tweets were selected for this study. After splitting into the three periods, 541,630, 138,876, and 139,307 tweets were from T1, T2, and T3, respectively. Most of the sentiments were neutral, around 60% in each period; however, the rate of negative sentiment (23%) was high compared to positive sentiment (18%). This analysis indicates negative concerns about FAC21; therefore, we will apply BERTopic to identify public concerns. This study will permit the investigation of people's expectations before FAC21 (e.g., stadiums, transportation, accommodation, visas, tickets, travel, and other facilities) and ascertain whether these were met. Moreover, it will highlight public expectations and concerns. The findings of this study can assist event organizers in enhancing implementation plans for Q22. Furthermore, this study can support policymakers in aligning strategies and plans to leverage outstanding outcomes.
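
The sentiment step can be sketched with the Hugging Face transformers pipeline. The checkpoint id below is the publicly available Cardiff NLP multilingual sentiment model built on twitter-XLM-roBERTa-base; whether this is the exact checkpoint the authors used is an assumption, and the example tweets are invented.

```python
# Sketch of the sentiment-labeling step (positive / neutral / negative per tweet).
from transformers import pipeline

sentiment = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",  # assumed checkpoint
)

tweets = [
    "Stadiums and transport for the Arab Cup were impressive!",
    "Queues for tickets were far too long today.",
]
for tweet, result in zip(tweets, sentiment(tweets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {tweet}")
```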

Keywords: FIFA Arab Cup, FIFA, Twitter, machine learning

Procedia PDF Downloads 82
367 New Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood and Polya were the first significant systematic treatment of the subject; that work presents fundamental ideas, results and techniques and has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated using operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from a weighted Sobolev space to a weighted Lebesgue space. Several inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators; such inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is an arbitrary nonempty closed subset of the real numbers. Dynamic inequalities on time scales have received a lot of attention in the literature and have become a major field in pure and applied mathematics, with many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of time-scale inequalities of Hardy and Copson type in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases and use concepts from time-scale calculus, which allows many problems from the theories of differential and difference equations to be unified and extended, together with the chain rule, some properties of multiple integrals on time scales, Fubini-type theorems, and Hölder's inequality.
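
For reference, the classical one-dimensional Hardy inequality that this study takes as its starting point can be stated as follows (a standard textbook form, not a formula quoted from the abstract):

```latex
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\;\left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f^{p}(x)\,dx,
\qquad p>1,\quad f\ge 0,
```

with the constant (p/(p-1))^p known to be best possible; the time-scale versions discussed above replace the Lebesgue integrals by delta integrals over an arbitrary nonempty closed subset of the reals.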

Keywords: time scales, Hardy inequality, Copson inequality, Steklov operator

Procedia PDF Downloads 78
366 The Evaluation of the Cognitive Training Program for Older Adults with Mild Cognitive Impairment: Protocol of a Randomized Controlled Study

Authors: Hui-Ling Yang, Kuei-Ru Chou

Abstract:

Background: Studies show that cognitive training can effectively delay cognitive decline. However, there are several gaps in previous studies of cognitive training in mild cognitive impairment: 1) previous studies enrolled mostly healthy older adults, with few recruiting older adults with cognitive impairment; 2) they also had limited generalizability and lacked long-term follow-up data and measurements of the impact on activities of daily living, and only 37% were randomized controlled trials (RCTs); 3) little cognitive training has been specifically developed for mild cognitive impairment. Objective: This study sought to investigate the changes in cognitive function, activities of daily living and degree of depressive symptoms in older adults with mild cognitive impairment after cognitive training. Methods: This double-blind randomized controlled study has a 2-arm parallel-group design. Study subjects are older adults diagnosed with mild cognitive impairment in residential care facilities. 124 subjects will be randomized, using permuted block randomization, into an intervention group (cognitive training, CT) or an active control group (passive information activities, PIA). Therapeutic adherence, sample attrition rate, medication compliance and adverse events will be monitored during the study period, and missing data will be handled using intent-to-treat (ITT) analysis. Results: Training sessions in the CT group are 45 minutes/day, 3 days/week, for 12 weeks (36 sessions in total). The training of the active control group follows the same schedule (45 min/day, 3 days/week, for 12 weeks, for a total of 36 sessions). The primary outcome is cognitive function, measured with the Mini-Mental State Examination (MMSE); the secondary outcomes are 1) activities of daily living, measured with Lawton's Instrumental Activities of Daily Living (IADL) scale, and 2) degree of depressive symptoms, measured with the Geriatric Depression Scale-Short Form (GDS-SF). Latent growth curve modeling will be used in the repeated-measures statistical analysis to estimate the trajectory of improvement, examining the rate and pattern of change in cognitive function, activities of daily living and degree of depressive symptoms over time, and effects will be evaluated immediately post-test and at 3 months, 6 months and one year after the last session. Conclusions: We constructed a rigorous CT program adhering to the Consolidated Standards of Reporting Trials (CONSORT) reporting guidelines. We expect to determine the improvement in cognitive function, activities of daily living and degree of depressive symptoms of older adults with mild cognitive impairment after the CT.
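
The allocation step described above is mechanical and easy to sketch. The example below implements permuted block randomization for 124 participants into the CT and PIA arms; the block size of 4 and the seed are illustrative choices, not values specified in the protocol.

```python
# Minimal sketch: 1:1 permuted block randomization into CT and PIA arms.
import random

def permuted_block_randomization(n_participants, block_size=4, seed=42):
    """Return an allocation list built from randomly permuted balanced blocks."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    random.seed(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["CT"] * (block_size // 2) + ["PIA"] * (block_size // 2)
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

arms = permuted_block_randomization(124)
print(arms[:8], "... CT:", arms.count("CT"), "PIA:", arms.count("PIA"))
```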

Keywords: mild cognitive impairment, cognitive training, randomized controlled study

Procedia PDF Downloads 431
365 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in industry, especially in the processing of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce, or at least slow down, the growth of pathogens, especially spoilage, infectious or toxigenic bacteria. These methods usually rely on low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that achieves bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are bacterial growth; the release, production and artificial incorporation of bacteriocins; and changes in the pH of the medium, with all three dimensions constantly influenced by the temperature of the medium. The model is then adapted to an idealized situation of cross-contamination in animal source food processing, the agents under study being the animal product and the contact surface. Finally, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main results obtained from the analysis and simulation of the mathematical model are that, although bacterial growth can be stopped at lower temperatures, even lower temperatures are needed to eradicate it; this can be not only expensive but also counterproductive for the quality of the raw materials, while higher temperatures accelerate bacterial growth. On the other hand, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Finally, processing times are a secondary agent of concern once the other agents mentioned above are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical modeling provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
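
To show how a three-state, temperature-driven model of this kind can be simulated numerically, the sketch below integrates a system coupling bacterial density, bacteriocin concentration and pH with scipy's solve_ivp. Every functional form and parameter value is an assumption made for illustration; these are not the authors' equations.

```python
# Purely illustrative sketch: bacteria N, bacteriocin B and pH under a
# temperature-dependent logistic growth law, integrated over 48 hours.
from scipy.integrate import solve_ivp

def model(t, y, temperature):
    N, B, pH = y
    mu = 0.035 * max(temperature - 4.0, 0.0)   # crude temperature-dependent growth rate (1/h)
    growth = mu * N * (1 - N / 1e9)            # logistic bacterial growth
    kill = 0.002 * B * N                       # inactivation by bacteriocin
    dN = growth - kill
    dB = 1e-9 * N - 0.05 * B                   # release by bacteria minus decay
    dpH = -1e-10 * N                           # slow acidification by lactic acid bacteria
    return [dN, dB, dpH]

sol = solve_ivp(model, (0, 48), y0=[1e3, 0.0, 6.8], args=(10.0,), max_step=0.5)
print(f"after 48 h at 10 C: N = {sol.y[0, -1]:.2e} CFU, pH = {sol.y[2, -1]:.2f}")
```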

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 126
364 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the “semi-infinite beam” model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the “semi-infinite beam” model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
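
As a structural illustration only, the sketch below assembles a minimal Euler-Bernoulli beam finite element model of a deck resting on temporary supports, the kind of static calculation repeated at each launching stage; the geometry, stiffness, and load values are hypothetical and the model is far simpler than the one described in the paper.

```python
import numpy as np

def beam_stiffness(EI, L):
    """4x4 Euler-Bernoulli beam element stiffness matrix (DOFs: v1, th1, v2, th2)."""
    return (EI / L**3) * np.array([
        [ 12,    6*L,  -12,    6*L],
        [6*L, 4*L**2, -6*L, 2*L**2],
        [-12,   -6*L,   12,   -6*L],
        [6*L, 2*L**2, -6*L, 4*L**2],
    ])

def solve_launching_step(span_lengths, EI, q, supported_nodes):
    """Static deflections of a continuous deck resting on the given supports."""
    n_nodes = len(span_lengths) + 1
    ndof = 2 * n_nodes
    K = np.zeros((ndof, ndof))
    F = np.zeros(ndof)
    for e, L in enumerate(span_lengths):
        dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
        K[np.ix_(dofs, dofs)] += beam_stiffness(EI, L)
        # consistent nodal loads for a uniform load q on the element
        F[dofs] += q * np.array([L/2, L**2/12, L/2, -L**2/12])
    fixed = [2*n for n in supported_nodes]           # vertical DOFs blocked at supports
    free = [d for d in range(ndof) if d not in fixed]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u

# Hypothetical launching stage: three equal deck segments, the front one cantilevering.
u = solve_launching_step([20.0, 20.0, 20.0], EI=8.0e9, q=-150e3,
                         supported_nodes=[0, 1, 2])   # node 3 is the cantilever tip
print("tip deflection [m]:", u[6])
```

In an actual stage analysis, this solve would be repeated for each deck position, with the support set and the launching nose properties updated at every step.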

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 79
363 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from the branches give priority to the circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the average entry delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on the limitation of the flow entering from a single entrance due to the amount of flow circulating in front of that entrance. Modern roundabout capacity models also generally lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single entry capacity and ultimately derive the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: firstly, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The following section summarizes the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage, a collection of existing rotary intersections operating with the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general setting (urban or rural), and its main geometric patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
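
A minimal sketch of the per-entry calculation is shown below, using an HCM-style exponential entry-capacity form and the usual control-delay expression; the coefficients and the demand and circulating-flow values are illustrative only, and the iteration toward total capacity described in the abstract is not reproduced.

```python
import math

def entry_capacity(v_c):
    """Single-lane entry capacity (veh/h) as an exponential function of the
    circulating flow v_c, in the HCM style. Coefficients are the commonly
    cited single-lane values and are used here only for illustration."""
    return 1380.0 * math.exp(-1.02e-3 * v_c)

def control_delay(v, c, T=0.25):
    """Average control delay (s/veh) for demand v and capacity c over period T (h),
    following the HCM-style delay form."""
    x = v / c
    return (3600.0 / c
            + 900.0 * T * (x - 1.0 + math.sqrt((x - 1.0) ** 2 + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

# Hypothetical four-leg rotary: (entry demand, circulating flow) in veh/h per leg.
entries = {"N": (450, 600), "E": (380, 520), "S": (500, 640), "W": (300, 410)}
for leg, (v, v_c) in entries.items():
    c = entry_capacity(v_c)
    print(f"{leg}: capacity={c:5.0f} veh/h, v/c={v / c:4.2f}, delay={control_delay(v, c):5.1f} s")
```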

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 65
362 Leveraging Advanced Technologies and Data to Eliminate Abandoned, Lost, or Otherwise Discarded Fishing Gear and Derelict Fishing Gear

Authors: Grant Bifolchi

Abstract:

As global environmental problems continue to have highly adverse effects, finding long-term, sustainable solutions to combat ecological distress is of paramount and growing concern. Ghost Gear, also known as abandoned, lost or otherwise discarded fishing gear (ALDFG) and derelict fishing gear (DFG), represents one of the greatest threats to the world’s oceans, posing a significant hazard to human health, livelihoods, and global food security. In fact, according to the UN Food and Agriculture Organization (FAO), abandoned, lost and discarded fishing gear represents approximately 10% of marine debris by volume. Around the world, many governments, governmental and non-profit organizations are doing their best to manage the reporting and retrieval of nets, lines, ropes, traps, floats and more from their respective bodies of water. However, these organizations’ limited ability to effectively manage the files and documents associated with this environmental problem further complicates matters. In Ghost Gear monitoring and management, organizations face additional complexities. Whether it is data ingest, industry regulations and standards, garnering actionable insights into the location, security, and management of data, or the application of enforcement hampered by disparate data, all of these factors place massive strains on organizations struggling to save the planet from the dangers of Ghost Gear. In this 90-minute educational session, globally recognized Ghost Gear technology expert Grant Bifolchi CET, BBA, Bcom, will provide real-world insight into how governments currently manage Ghost Gear and the technology that can accelerate success in combatting ALDFG and DFG. In this session, attendees will learn how to: • Identify specific technologies to solve the ingest and management of Ghost Gear data categories, including type, geo-location, size, ownership, regional assignment, collection and disposal. • Provide enhanced access to authorities, fisheries, independent fishing vessels, individuals, etc., while securely controlling confidential and privileged data to globally recognized standards. • Create and maintain processing accuracy to effectively track ALDFG/DFG reporting progress, including acknowledging receipt of the report and sharing it with all pertinent stakeholders to ensure approvals are secured. • Enable and utilize Business Intelligence (BI) and Analytics to store and analyze data to optimize organizational performance, maintain anytime visibility of report status, user accountability, scheduling, and management, and foster governmental transparency. • Maintain Compliance Reporting through highly defined, detailed and automated reports, enabling all stakeholders to share critical insights with internal colleagues, regulatory agencies, and national and international partners.

Keywords: ghost gear, ALDFG, DFG, abandoned, lost or otherwise discarded fishing gear, data, technology

Procedia PDF Downloads 81
361 Building an Opinion Dynamics Model from Experimental Data

Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is a sub-field of agent-based modeling that focuses on people’s opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinion while interacting. Furthermore, it is not clear whether different topics will show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies started bridging this gap in the literature by directly measuring people’s opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (and then, eventually, encoding it as a number). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all topics together, without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language (“agree” or “disagree”). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (and not the certainty) of another participant and, after a distraction task, we repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called “continuous opinion”) ranging from -10 to +10 (using agree = 1 and disagree = -1). We first checked the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions. This suggests that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This is a strong violation of what is suggested by common models, in which a person starting at, for example, +8 would first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to “agree” were more likely to move to higher levels of continuous opinion, while people exposed to “disagree” were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration, too, differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
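
The toy simulation below illustrates an update rule with the qualitative features reported here: agents exchange only their verbal stance, the pull of social influence is smaller than random fluctuation, and certainty is sticky; the parameters and the exact update form are assumptions for illustration, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, chosen only to mirror the qualitative findings:
# influence is real but smaller than random fluctuation.
N_AGENTS, INFLUENCE, NOISE_SD = 200, 0.4, 1.5

opinion = rng.choice([-1, 1], N_AGENTS)          # "disagree" / "agree"
certainty = rng.integers(1, 11, N_AGENTS)        # 1..10
continuous = opinion * certainty                 # the combined "continuous opinion"

def interact(continuous, rng):
    """One round of pairwise exposure: each agent sees a random partner's verbal stance."""
    partner = rng.permutation(len(continuous))
    shown = np.sign(continuous[partner])         # only "agree"/"disagree" is shown
    drift = INFLUENCE * shown                    # small pull toward the shown stance
    noise = rng.normal(0.0, NOISE_SD, len(continuous))
    return np.clip(continuous + drift + noise, -10, 10)

for _ in range(50):
    continuous = interact(continuous, rng)
print("mean:", continuous.mean().round(2), "share agreeing:", (continuous > 0).mean())
```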

Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule

Procedia PDF Downloads 101
360 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons strongly couple to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionalities of many optoelectronic devices such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospects of leveraging polariton formation for novel devices and device operation depend on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, and establishing these connections remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which occurs due to the enhanced spatial delocalization of the polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the photon absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence. By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how the intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
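
For readers unfamiliar with the dispersive energies mentioned above, the sketch below evaluates the standard two-coupled-oscillator model for the upper and lower polariton branches versus incidence angle; the energies, Rabi splitting, and effective refractive index are illustrative values, not those of the studied system.

```python
import numpy as np

# Standard two-level coupled-oscillator model for cavity polariton dispersion.
# All numbers below are illustrative, not values from the study.
E_EXCITON = 2.10          # bare molecular exciton energy (eV)
E_CAVITY0 = 2.05          # cavity photon energy at normal incidence (eV)
RABI = 0.15               # light-matter coupling strength (Rabi splitting, eV)
N_EFF = 1.6               # effective intracavity refractive index

def polariton_branches(theta_deg):
    theta = np.radians(theta_deg)
    e_cav = E_CAVITY0 / np.sqrt(1.0 - (np.sin(theta) / N_EFF) ** 2)  # photon dispersion
    mean = 0.5 * (e_cav + E_EXCITON)
    split = 0.5 * np.sqrt((e_cav - E_EXCITON) ** 2 + RABI ** 2)
    return mean - split, mean + split        # lower and upper polariton energies

angles = np.arange(0, 61, 10)
lower, upper = polariton_branches(angles)
for a, lo, up in zip(angles, lower, upper):
    print(f"{a:2d} deg  LP={lo:5.3f} eV  UP={up:5.3f} eV")
```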

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling

Procedia PDF Downloads 56
359 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The accretion of salt water ice on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to yield a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases for this type of ice: one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are obtained for different salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain the most stable and fast solution. The variation of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of temperature variation is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; refining the intervals improves the stability. The analytical model, combined with the numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
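
The following sketch shows the structure of a Method of Lines discretization for 1-D transient conduction with an insulated face and a fixed-temperature face, as in the boundary conditions described above; it uses constant properties and omits the latent-heat and salinity terms, so it is only a schematic of the numerical approach, not the brine-spongy ice model itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D transient conduction discretized in space (Method of Lines); the resulting
# ODE system in time is handled by a stiff integrator. All values are illustrative.
L, NX = 0.05, 51                  # slab thickness (m), number of grid points
ALPHA = 1.1e-6                    # thermal diffusivity (m^2/s), illustrative
T_COLD, T_INIT = -20.0, -2.0      # boundary and initial temperatures (deg C)
dx = L / (NX - 1)

def rhs(t, T):
    dT = np.zeros_like(T)
    dT[1:-1] = ALPHA * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    dT[0] = ALPHA * 2.0 * (T[1] - T[0]) / dx**2    # insulated (zero-flux) side
    dT[-1] = 0.0                                   # side held at T_COLD
    return dT

T0 = np.full(NX, T_INIT)
T0[-1] = T_COLD
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[600, 1800, 3600])
print(sol.y[0, :])   # temperature at the insulated face after 10, 30, and 60 minutes
```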

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 208
358 Integrated Planning, Designing, Development and Management of Eco-Friendly Human Settlements for Sustainable Development of Environment, Economy, Peace and Society of All Economies

Authors: Indra Bahadur Chand

Abstract:

This paper will focus on the need for the development and application of global protocols and policies in the planning, design, development, and management of systems of eco-towns and eco-villages, so that sustainable development is assured from the perspectives of the environment, the economy, peace, and harmonized social dynamics. This perspective is essential for the development of civilized and eco-friendly human settlements in the urban and rural areas of a nation and will be a milestone for developing a happy and sustainable lifestyle for rural and urban communities. The urban population of most towns in developing economies has been increasing tremendously, as rural people have been migrating to towns in large numbers over the past three decades. Consequently, urban life in most towns is under stress in terms of environmental pollution, water crises, congested traffic, energy crises, food crises, and unemployment. Eco-towns and eco-villages should be developed in which the lifestyle of all residents is sustainable and happy. The built-up environment of a settlement should reduce and minimize the problems of non-ecological CO2 emissions, unbalanced utilization of natural resources, environmental degradation, natural calamities, ecological imbalance, energy crises, water scarcity, waste management, food crises, unemployment, and deterioration of cultural and social heritage. Indicators such as the ratio of public to private land ownership, the ratio of vegetated land to settlement area, the ratio of people travelling by vehicle to those on foot, the proportion of people employed outside the town or village, the rate of recycling of waste materials, the water consumption level, the ratio of people to vehicles, the ratio of road network length to town/village area, the share of renewable energy in total energy consumption, the share of religious/recreational area in the total built-up area, the annual suicide rate, the annual rate of injuries and deaths from traffic accidents, and the share of total food consumption produced as agro-foods within the town will be used to assist in designing and monitoring each eco-town and eco-village. Eco-towns and eco-villages should be planned and developed to offer sustainable infrastructure and utilities that manage CO2 levels in individual homes and settlements, home energy use, transport, food and consumer goods, water supply, and waste management, while conserving historical heritage, supporting healthy neighborhoods, conserving the natural landscape and biodiversity, and developing green infrastructure. Eco-towns and eco-villages should be developed on the basis of master planning and architecture that affect and define the settlement and its form. Master planning and engineering should focus on delivering the sustainability criteria of eco-towns and eco-villages. This will involve working with the specific landscape and natural resources of each locality.

Keywords: eco-town, ecological habitation, master plan, sustainable development

Procedia PDF Downloads 166
357 W-WING: Aeroelastic Demonstrator for Experimental Investigation into Whirl Flutter

Authors: Jiri Cecrdle

Abstract:

This paper describes the concept of the W-WING whirl flutter aeroelastic demonstrator. Whirl flutter is the specific case of flutter that accounts for the additional dynamic and aerodynamic influences of the engine's rotating parts. The instability is driven by motion-induced unsteady aerodynamic propeller forces and moments acting in the propeller plane. Whirl flutter instability is a serious problem that may cause unstable vibration of a propeller mounting, leading to the failure of an engine installation or an entire wing. The complicated physical principle of whirl flutter requires experimental validation of the analytically obtained results. The W-WING aeroelastic demonstrator has been designed and developed at the Czech Aerospace Research Centre (VZLU) in Prague, Czechia. The demonstrator represents the wing and engine of a twin turboprop commuter aircraft. Contrary to most past demonstrators, it includes a powered motor and a thrusting propeller. It allows changes of the main structural parameters influencing the whirl flutter stability characteristics. Propeller blades are adjustable at standstill. The demonstrator is instrumented with strain gauges, accelerometers, a revolution-counting impulse sensor, an airflow velocity sensor, and a thrust measurement unit. Measurement is supported by an in-house program providing data storage and real-time depiction in the time domain, as well as pre-processing into power spectral densities. The engine is linked with a servo-drive unit, which enables maintaining the propeller revolutions (constant or at a controlled rate ramp) and monitoring the instantaneous revolutions and power. Furthermore, the program manages the aerodynamic excitation of the demonstrator by aileron flapping (constant, sweep, impulse). Finally, it provides a safety guard to prevent any structural failure of the demonstrator hardware. In addition, the LMS TestLab system is used for the measurement of the structural response and for data assessment by means of FFT- and OMA-based methods. The demonstrator is intended for experimental investigations in the VZLU 3 m diameter low-speed wind tunnel. The measurement variant of the model is defined by the structural parameters: pitch and yaw attachment stiffness, pitch and yaw hinge stations, balance weight station, propeller type (duralumin or steel blades), and, finally, the angle of attack of the propeller blade at the 75% section. The excitation is provided either by airflow turbulence or by aerodynamic excitation via aileron flapping using a harmonic frequency sweep. The experimental results are planned to be used for the validation of analytical methods and software tools in the frame of the development of a new complex multi-blade twin-rotor propulsion system for a new generation of regional aircraft. Experimental campaigns will include measurements of aerodynamic derivatives and of stability boundaries for various configurations of the demonstrator.

Keywords: aeroelasticity, flutter, whirl flutter, W-WING demonstrator

Procedia PDF Downloads 80
356 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as capable artificial lift equipment in heavy oil fields. The most commonly used PCP is the single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed with an experimental and Computational Fluid Dynamics (CFD) approach, using the DCAB031 model installed in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate, with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency driver. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. Three different rotational speeds were evaluated (200, 300, 400 rpm), and a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate was observed. The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water, and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oil tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% in pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, a decrease is observed between fluids due to viscosity.
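
For reference, the grid-convergence check mentioned above can be reproduced in outline with the Roache-style GCI procedure sketched below; the three mesh results and the refinement ratio are made-up numbers, not the study's data.

```python
import math

def grid_convergence_index(f_fine, f_medium, f_coarse, r=2.0, fs=1.25):
    """Roache-style GCI for three systematically refined grids with a constant
    refinement ratio r. Returns the apparent order and the fine-grid GCI (%)."""
    e21 = f_medium - f_fine
    e32 = f_coarse - f_medium
    p = math.log(abs(e32 / e21)) / math.log(r)      # apparent order of convergence
    rel_err = abs(e21 / f_fine)                     # relative error on the fine grid
    gci_fine = 100.0 * fs * rel_err / (r**p - 1.0)  # safety factor fs applied
    return p, gci_fine

# Illustrative pressure-rise values (kPa) on fine, medium, and coarse overset meshes.
p_order, gci = grid_convergence_index(412.0, 405.5, 392.0, r=1.5)
print(f"apparent order = {p_order:.2f}, GCI_fine = {gci:.2f}%")
```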

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 115
355 Corrosion Protection and Failure Mechanism of ZrO₂ Coating on Zirconium Alloy Zry-4 under Varied LiOH Concentrations in Lithiated Water at 360°C and 18.5 MPa

Authors: Guanyu Jiang, Donghai Xu, Huanteng Liu

Abstract:

After the Fukushima-Daiichi accident, the development of accident tolerant fuel cladding materials to improve reactor safety has become a hot topic in the nuclear industry. ZrO₂ has a satisfactory neutron economy and can guarantee the fission chain reaction process, which makes it a promising coating for zirconium alloy cladding. Maintaining good corrosion resistance in the primary coolant loop during normal operation of Pressurized Water Reactors is a prerequisite for ZrO₂ as a protective coating on zirconium alloy cladding. Research on the corrosion performance of ZrO₂ coatings in nuclear water chemistry is relatively scarce, and existing reports fail to provide an in-depth explanation of the causes of ZrO₂ coating failure. Herein, a detailed corrosion process of a ZrO₂ coating in lithiated water at 360 °C and 18.5 MPa is proposed based on experimental research and molecular dynamics simulation. The lithiated water used in the present work, with different LiOH concentrations, was deaerated and had a dissolved oxygen concentration of < 10 ppb. The concentration of Li (as LiOH) was set to 2.3 ppm, 70 ppm, and 500 ppm. Corrosion tests were conducted in a static autoclave. Modeling and the corresponding calculations were performed in Materials Studio software. The calculations of adsorption energy and dynamics parameters were undertaken with the Energy task and Dynamics task of the Forcite module, respectively. The protective effect and failure mechanism of the ZrO₂ coating on Zry-4 under varied LiOH concentrations were further revealed by comparison with the coating's corrosion performance in pure water (namely 0 ppm Li). The ZrO₂ coating provided favorable corrosion protection, with the occurrence of localized corrosion, at low LiOH concentrations. Factors influencing corrosion resistance mainly include the extension of pitting corrosion, enhanced Li+ permeation, short-circuit diffusion of O²⁻, and ZrO₂ phase transformation. In highly concentrated LiOH solutions, intergranular corrosion, internal oxidation, and perforation resulted in coating failure. Zr ions were released to the coating surface to form flocculent ZrO₂ and ZrO₂ clusters due to the strong diffusion and dissolution tendency of α-Zr in the Zry-4 substrate. Considering that the primary water of Pressurized Water Reactors usually contains 2.3 ppm Li, the stability of ZrO₂ makes it a candidate fuel cladding coating material. Under unfavorable conditions with high Li concentrations, more boric acid should be added to alleviate caustic corrosion of the ZrO₂ coating once it is used. This work provides references for understanding the service behavior of nuclear coatings under variable water chemistry conditions and promotes the in-pile application of ZrO₂ coatings.

Keywords: ZrO₂ coating, Zry-4, corrosion behavior, failure mechanism, LiOH concentration

Procedia PDF Downloads 62
354 Novel EGFR Ectodomain Mutations and Resistance to Anti-EGFR and Radiation Therapy in H&N Cancer

Authors: Markus Bredel, Sindhu Nair, Hoa Q. Trummell, Rajani Rajbhandari, Christopher D. Willey, Lewis Z. Shi, Zhuo Zhang, William J. Placzek, James A. Bonner

Abstract:

Purpose: EGFR-targeted monoclonal antibodies (mAbs) provide clinical benefit in some patients with H&N squamous cell carcinoma (HNSCC), but others progress with minimal response. Missense mutations in the EGFR ectodomain (ECD) can be acquired under mAb therapy by mimicking the effect of large deletions on receptor untethering and activation. Little is known about the contribution of EGFR ECD mutations to EGFR activation and anti-EGFR response in HNSCC. Methods: We selected patient-derived HNSCC cells (UM-SCC-1) for resistance to mAb Cetuximab (CTX) by repeated, stepwise exposure to mimic what may occur clinically and identified two concurrent EGFR ECD mutations (UM-SCC-1R). We examined the competence of the mutants to bind EGF ligand or CTX. We assessed the potential impact of the mutations through visual analysis of space-filling models of the native sidechains in the original structures vs. their respective side-chain mutations. We performed CRISPR in combination with site-directed mutagenesis to test for the effect of the mutants on ligand-independent EGFR activation and sorting. We determined the effects on receptor internalization, endocytosis, downstream signaling, and radiation sensitivity. Results: UM-SCC-1R cells carried two non-synonymous missense mutations (G33S and N56K) mapping to domain I in or near the EGF binding pocket of the EGFR ECD. Structural modeling predicted that these mutants restrict the adoption of a tethered, inactive EGFR conformation while not permitting association of EGFR with the EGF ligand or CTX. Binding studies confirmed that the mutant, untethered receptor displayed a reduced affinity for both EGF and CTX but demonstrated sustained activation and presence at the cell surface with diminished internalization and sorting for endosomal degradation. Single and double-mutant models demonstrated that the G33S mutant is dominant over the N56K mutant in its effect on EGFR activation and EGF binding. CTX-resistant UM-SCC-1R cells demonstrated cross-resistance to mAb Panitumumab but, paradoxically, remained sensitive to the reversible receptor tyrosine kinase inhibitor Erlotinib. Conclusions: HNSCC cells can select for EGFR ECD mutations under EGFR mAb exposure that converge to trap the receptor in an open, constitutively activated state. These mutants impede the receptor’s competence to bind mAbs and EGF ligand and alter its endosomal trafficking, possibly explaining certain cases of clinical mAb and radiation resistance.

Keywords: head and neck cancer, EGFR mutation, resistance, cetuximab

Procedia PDF Downloads 76
353 The French Ekang Ethnographic Dictionary. The Quantum Approach

Authors: Henda Gnakate Biba, Ndassa Mouafon Issa

Abstract:

Dictionaries modeled on the Western model [tonic accent languages] are not suitable for tonal languages and do not account for them phonologically, which is why the [prosodic and phonological] ethnographic dictionary was designed. It is a glossary that expresses the tones and the rhythm of words. It recreates exactly the speech or song of a tonal language and allows a non-speaker of the language to pronounce its words like a native. It is a dictionary adapted to tonal languages. It was built from ethnomusicological theorems and phonological processes, following Jean-J. Rousseau's 1776 hypothesis that "to say and to sing were once the same thing". Each word in the French dictionary is matched with its corresponding word in the ekaη language, and each ekaη word is written on a musical staff. This ethnographic dictionary is also an inventive, original, and innovative research thesis. It contributes to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When this theory is applied to the text of any folksong in a tonal language, one pieces together not only the exact melody, rhythm, and harmonies of that song, as if it were known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as well as one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. The experimentation confirming the theory led to a semi-digital, semi-analog application which translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, the author uses music reading and writing software to collect data extracted from his mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book). Translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a text, a structured song (chorus-verse), on the computer and requests from the machine a melody in blues, jazz, world music, variety, etc. The software runs, offers the option to choose harmonies, and the user then selects a melody.

Keywords: music, language, entanglement, science, research

Procedia PDF Downloads 52