Search results for: tourist decision-making process
2131 Transformation of Antitrust Policy against Collusion in Russia and Transition Economies
Authors: Andrey Makarov
Abstract:
This article focuses on the development of antitrust policy in transition economies in the context of preventing explicit and tacit collusion. The experience of BRICS, CIS (Ukraine, Kazakhstan) and CEE countries (Bulgaria, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, Czech Republic, Estonia) in the creation of antitrust institutions was analyzed, including both legislation and enforcement practice. Most of these countries were forced in the early 1990s to develop completely new legislation in the field of protection of competition, and it is important to compare different ways of building antitrust institutions and policy results. The article proposes a special approach to the evaluation of collusion-prevention mechanisms. This approach takes into account such enforcement problems as: classification problems (tacit vs. explicit collusion, vertical vs. horizontal agreements), flexibility of prohibitions (the balance between “per se” and “rule of reason” approaches de jure and in practice), design of sanctions, the private enforcement challenge, leniency program mechanisms, the role of antitrust authorities, etc. The analysis is conducted using both official data published by competition authorities and expert assessments. The paper shows how the integration process within the EU predetermined some aspects of the development of antitrust policy in CEE countries, including the trend toward the "rule of reason" approach. The experience of CEE countries with special mechanisms of government intervention was also analyzed. CIS countries followed more or less original ways in the development of antitrust policy, without such a great impact from the European Union; particular attention is given to the Russian experience in this field, including the analysis of judicial decisions in antitrust cases. 
Main problems and challenges for transition economies in this field will be shown, including: the legal uncertainty problem; the problem of rigidity of prohibitions; the enforcement priorities of the regulator; the interaction of administrative and criminal law and the limited effectiveness of criminal sanctions in the antitrust field; the effectiveness of leniency program design; and the private enforcement challenge.
Keywords: collusion, antitrust policy, leniency program, transition economies, Russia, CEE
Procedia PDF Downloads 446
2130 WILCKO-PERIO, Periodontally Accelerated Orthodontics
Authors: Kruttika Bhuse
Abstract:
Aim: Synergism between periodontists and orthodontists (periodontally accelerated osteogenic orthodontics, PAOO) creates crucial opportunities to enhance the clinical outcomes of combined therapies in both disciplines and has made adult orthodontics a reality. Thus, understanding the biomechanics of bone remodelling may increase the clinical applications of corticotomy-facilitated orthodontics with or without alveolar augmentation. Wilckodontics can be an attractive treatment option and a “win-win” situation for both the dental surgeon and the patient by reducing orthodontic treatment time in adults. Materials and methods: In this review, data related to the clinical aspects, steps of the procedure, biomechanics of bone, indications and contraindications, and final outcome of Wilckodontics are discussed. 50 supporting articles from various international journals and 70 clinical cases were reviewed to gain a better understanding for designing this Wilckodontics meta-analysis. Various sources, such as the Journal of Clinical and Diagnostic Research, Journal of Indian Society of Periodontology, Journal of Periodontology, PubMed, Boston Orthodontic University Journal, and Good Practice Orthodontics Volume 2, were consulted to attain valuable information on Wilckodontics, which was then compiled in this single review study. Result: As a promising adjuvant technique based on the transient nature of the demineralization-remineralization process in healthy tissues, Wilckodontics consists of the regional acceleratory phenomenon induced by alveolar corticotomy and bone grafting of labial and palatal/lingual surfaces, followed by orthodontic force. The surgical wounding of alveolar bone potentiates tissue reorganization and healing by way of a transient burst of localized hard and soft tissue remodelling. This phenomenon causes bone healing to occur 10-50 times faster than normal bone turnover. 
Conclusion: This meta-analysis helps in understanding that the biomechanics of bone remodelling may increase the clinical applications of corticotomy-facilitated orthodontics with or without alveolar augmentation. The main benefits are reduced orthodontic treatment time, increased bone volume, and post-orthodontic stability.
Keywords: periodontal osteogenic accelerated orthodontics, alveolar corticotomy, bone augmentation, win-win situation
Procedia PDF Downloads 392
2129 Influence of Morphology and Coatings in the Tribological Behavior of a Texturised Deterministic Surface by Photochemical Machining
Authors: Juan C. Sanchez, Jose L. Endrino, Alejandro Toro, Hugo A. Estupinan, Glenn Leighton
Abstract:
For years, the reduction of friction and wear has been a matter of interest in the engineering field. Several solutions have been proposed to address this issue, including the use of lubricants and coatings to reduce frictional forces and to increase surface wear resistance. Alternatively, texturing processes have been used in a wide variety of materials, in many cases inspired by natural surfaces. Nature has shown how species adapt to the environment, and engineers try to understand natural surfaces for particular applications by analyzing outstanding species such as the gecko for high adhesion, lotus leaves for hydrophobicity, sharks for reduced flow resistance, and snakes for an optimized frictional response. Texturized surfaces have shown superior performance in terms of frictional response in many situations, and the control of their behavior greatly depends on the manufacturing process. The focus of this work is to evaluate the tribological behavior of AISI 52100 steel samples texturized by Photochemical Machining (PCM). The surface texture was inspired by several features of snakeskin, such as the aspect ratio of fibrils and the mean fibril spacing. Two coatings were applied to the texturized surface, namely Diamond-Like Carbon (DLC) and Molybdenum Disulphide (MoS₂), and their tribological behavior in pin-on-disk tests was compared with that of the non-texturized and uncoated surfaces. The samples were characterised through Stereoscopic Microscope (SM), Scanning Electron Microscope (SEM), Optical Microscope (OM), Profilometer, Raman Spectrometer (RS) and X-Ray Diffractometer (XRD). The Coefficient of Friction (COF) measured in pin-on-disk tests showed correlations with the sliding direction (relative to the texture features) and the aspect ratio of the texture features. Regarding the coated surfaces, the DLC and MoS₂ coatings performed well in terms of wear rate and coefficient of friction compared with the uncoated and non-texturized surfaces. 
On the other hand, for the uncoated surfaces, the texture influenced the tribological performance with respect to the non-texturized surface.
Keywords: coating, coefficient of friction, deterministic surface, photochemical machining
Procedia PDF Downloads 149
2128 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile is validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis consists in varying the Mach number upward from M = 0.5, comparing the axial force coefficient, the normal force slope coefficient, and the pitch moment slope coefficient of the finned projectile obtained by XFlow with the experimental data. The slope coefficients are obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find out the limiting Mach number value starting from which the effects of high fluid compressibility (related to the transonic flow regime) lead the XFlow simulations to differ from the experimental results. 
This allows identifying the critical Mach number which limits the validity of the isothermal formulation of XFlow and beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-boltzmann method, LBM, lift, mach, pitch
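As a rough illustration of the slope-coefficient extraction described in this abstract, the sketch below fits a straight line to the linear range of a polar curve. The data, function name, and linear-range cutoff are purely illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch: estimating a slope coefficient (e.g. the normal force
# slope) by a least-squares fit restricted to the linear range of the polar
# curve. Data values are illustrative, not from the paper.
import numpy as np

def slope_coefficient(alpha_deg, coeff, linear_max_deg=4.0):
    """Slope of coeff vs. angle of attack, within the linear range only."""
    alpha = np.asarray(alpha_deg, dtype=float)
    c = np.asarray(coeff, dtype=float)
    mask = np.abs(alpha) <= linear_max_deg   # keep only the linear range
    slope, _ = np.polyfit(alpha[mask], c[mask], 1)
    return slope  # per degree

# Illustrative polar: linear near alpha = 0, nonlinear at larger angles
alpha = [-8, -4, -2, 0, 2, 4, 8]
cn = [-1.9, -0.84, -0.42, 0.0, 0.42, 0.84, 1.9]
print(round(slope_coefficient(alpha, cn), 3))  # ≈ 0.21 per degree
```

Restricting the fit to the linear range matters because including the nonlinear points would bias the slope, which is why the abstract emphasizes the linear portion of the polar curve.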
Procedia PDF Downloads 421
2127 Beyond Chol Soo Lee’s Death Row Release: Transinstitutionalization, Mortification, and the Limits of Legal Activism in 20th Century America
Authors: Minhae Shim Roth
Abstract:
The “Deinstitutionalization movement” refers to the spatial transition in the United States during the mid-20th century when the treatment of mental illness purportedly moved from long-term psychiatric institutions to community-integrated care. Contrary to the accepted narrative of mental health care in the U.S., asylums did not close or empty. Some remained psychiatric hospitals, which came to be called forensic hospitals or state hospitals; others were converted into prisons or carceral institutions. During Deinstitutionalization, the asylum system became an appendage of the carceral system, with state hospitals becoming little more than holding centers for prisoners who were civilly committed, those incompetent to stand trial, offenders with mental health issues, and those found not guilty by reason of insanity. Psychiatric patients who became prisoners, and prisoners who became patients, were entangled in the phenomenon called transinstitutionalization. This paper investigates the relationship between psychiatric and criminal incarceration in 20th-century California and focuses particularly on the case of Korean-American Chol Soo Lee, who fought detention in the psychiatric-prison system through the writ of habeas corpus. This study uses methodologies such as critical theory, close reading, and archival research. This paper argues that during his psychiatric hospitalization at Napa State Hospital and incarceration in the California Department of Corrections, Lee underwent what sociologist Erving Goffman, in his 1961 text Asylums, termed the process of “mortification.” After a burst of Asian American solidarity and legal aid that resulted in Lee’s triumphant release from Death Row in 1983 through a writ of habeas corpus, Lee struggled in the free world due to the long-lasting consequences of institutionalization, which led to alienation, recidivism, and an early death at the age of 62. 
This paper examines the trajectory of Lee’s trial and the legal activism behind it within the context of Goffman’s theory of total institutions and offers a nuanced reading of Lee’s case both during and after his incarceration.
Keywords: criminal justice, criminal law, law and mental capacity, habeas corpus, deinstitutionalization, mental health
Procedia PDF Downloads 33
2126 Effects of Rations with High Amount of Crude Fiber on Rumen Fermentation in Suckler Cows
Authors: H. Scholz, P. Kuehne, G. Heckenberger
Abstract:
Problems during the calving period (December until May) often result from a high body condition score (BCS) at this time. At the end of the grazing period (frequently after early weaning), however, an increase in BCS can often be observed under German conditions. In the last eight weeks before calving, the body condition should be reduced or at least not increased. Rations with a higher amount of crude fiber can be used (rations with straw or late-mowed grass silage). Fermentative digestion of fiber is slow and incomplete; that is why the fermentative process in the rumen can be reduced over a long feeding time. Viewed in this context, the feed intake of suckler cows (8 weeks before calving) on different rations and the fermentation in the rumen were checked by taking rumen fluid. Eight suckler cows (Charolais) were fed a Total Mixed Ration (TMR) in the last eight weeks before calving and grass silage after calving. The amount of crude fiber in the TMR (grass silage, straw, mineral) before calving was varied by the addition of straw (30 % [TMR1] vs. 60 % [TMR2] of dry matter). After calving, grass silage [GS] was fed ad libitum, and the last measurement of rumen fluid took place on the pasture [PS]. Rumen fluid, plasma, body weight, and backfat thickness were collected. Rumen fluid pH was assessed using an electronic pH meter. Volatile fatty acids (VFA), sedimentation, methylene blue, and the amount of infusoria were measured. From these 4 parameters, an “index of rumen fermentation” [IRF] was formed. Fixed effects of treatment (TMR1, TMR2, GS, and PS) and number of lactations (3-7 lactations) were analyzed by ANOVA using SPSS Version 25.0 (significance at p ≤ 5 %). Rumen fluid pH was significantly influenced by the variants (TMR 1: 6.6; TMR 2: 6.9; GS: 6.6; PS: 6.9) but was not affected by the other effects. 
The IRF showed disturbed fermentation in the rumen when feeding TMR 1+2 with a high amount of crude fiber (score: > 10.0 points) and a very good environment for fermentation during grazing on the pasture (score: 6.9 points). Furthermore, significant differences were found for VFA, methylene blue, and the number of infusoria. The use of rations with a high amount of crude fiber from weaning to calving may cause deviations from undisturbed fermentation in the rumen and adversely affect the utilization of the feed in the rumen.
Keywords: rumen fermentation, suckler cow, digestibility organic matter, crude fiber
Procedia PDF Downloads 144
2125 Development of a Wound Dressing Material Based on Microbial Polyhydroxybutyrate Electrospun Microfibers Containing Curcumin
Authors: Ariel Vilchez, Francisca Acevedo, Rodrigo Navia
Abstract:
The wound healing process can be accelerated and improved by the action of antioxidants such as curcumin (Cur) on the tissues; however, the efficacy of curcumin delivered through the digestive system is not enough to exploit its benefits. Electrospinning presents an alternative to carry curcumin directly to the wounds, and polyhydroxybutyrate (PHB) is proposed as the matrix to load curcumin owing to its biodegradable and biocompatible properties. PHB, one of some 150 identified types of polyhydroxyalkanoates (PHAs), is a natural thermoplastic polyester produced by microbial fermentation. The proposed objective is to develop electrospun bacterial PHB-based microfibers containing curcumin for possible biomedical applications. Commercial PHB was dissolved in chloroform:dimethylformamide (4:1) to a final concentration of 7% m/V. Curcumin was added to the polymeric solution at 1% and 7% m/m with respect to PHB. The electrospinning equipment (NEU-BM, China) with a rotary collector was used to obtain Cur-PHB fibers at different voltages and flow rates of the polymeric solution, with a distance of 20 cm from the needle to the collector. Scanning electron microscopy (SEM) was used to determine the diameter and morphology of the obtained fibers. Thermal stability was obtained from thermogravimetric (TGA) analysis, and Fourier Transform Infrared Spectroscopy (FT-IR) was carried out in order to study the chemical bonds and interactions. A preliminary curcumin release into Phosphate Buffered Saline (PBS), pH = 7.4, was obtained in vitro and measured by spectrophotometry. PHB fibers presented an intact chemical composition with respect to the original condition (powder) according to the FTIR spectra; the diameter fluctuates between 0.761 ± 0.123 and 2.157 ± 0.882 μm, with different qualities according to their morphology. 
The best fibers in terms of quality and diameter were sample 2 and sample 6, obtained at 0-10 kV and 0.5 mL/h, and 0-10 kV and 1.5 mL/h, respectively. The melting temperature was near 178 °C, in agreement with the literature. The crystallinity of the fibers decreases as the curcumin concentration increases over the studied interval. The curcumin release reached about 14% at 37 °C after 54 h in PBS and fitted a quasi-Fickian diffusion model. We conclude that it is possible to load curcumin in PHB to obtain continuous, homogeneous, and solvent-free microfibers by electrospinning. Between 0% and 7% curcumin, the crystallinity of the fibers decreases as the concentration of curcumin increases. Thus, curcumin enhances the flexibility of the obtained material. HPLC should be used in further analysis of curcumin release.
Keywords: antioxidant, curcumin, polyhydroxybutyrate, wound healing
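Quasi-Fickian release of the kind reported above is commonly quantified with the Korsmeyer-Peppas power law, Mt/M∞ = k·tⁿ, where an exponent n below roughly 0.5 indicates quasi-Fickian diffusion from thin films or fibers. The sketch below shows one way such a fit could be done; the release data are illustrative numbers consistent with ~14% release at 54 h, not the authors' measurements.

```python
# Hedged sketch: fitting the Korsmeyer-Peppas model Mt/Minf = k * t**n
# on log-log axes. Data are illustrative, not the paper's.
import numpy as np

def fit_korsmeyer_peppas(t_hours, released_frac):
    """Fit log(Mt/Minf) = log k + n*log t by least squares; return (k, n)."""
    t = np.asarray(t_hours, dtype=float)
    f = np.asarray(released_frac, dtype=float)
    n, logk = np.polyfit(np.log(t), np.log(f), 1)
    return np.exp(logk), n

# Illustrative release profile reaching ~14% at 54 h
t = [2, 6, 12, 24, 54]
frac = [0.041, 0.062, 0.080, 0.104, 0.140]
k, n = fit_korsmeyer_peppas(t, frac)
print(round(n, 2))  # n < 0.5, consistent with quasi-Fickian release
```

A fitted exponent well below 0.5, as here, is what justifies describing the release as quasi-Fickian rather than case-II or anomalous transport.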
Procedia PDF Downloads 131
2124 Identification of Nutrient Sensitive Signaling Pathways via Analysis of O-GlcNAcylation
Authors: Michael P. Mannino, Gerald W. Hart
Abstract:
The majority of glucose metabolism proceeds through glycolytic pathways such as glycolysis or the pentose phosphate pathway; however, about 5% is shunted through the hexosamine biosynthetic pathway, producing uridine diphosphate N-acetylglucosamine (UDP-GlcNAc). This precursor can then be incorporated into complex oligosaccharides decorating the cell surface or remain as an intracellular post-translational modification (PTM) of serine/threonine residues (O-GlcNAcylation, OGN), which has been identified on over 4,000 cytosolic or nuclear proteins. Intracellular OGN has major implications for cellular processes, typically by modulating protein localization, protein-protein interactions, protein degradation, and gene expression. Additionally, OGN is known to have extensive cross-talk with phosphorylation, be it in a competitive or cooperative manner. Unlike other PTMs, there are only two cycling enzymes capable of adding or removing the GlcNAc moiety: O-linked N-acetylglucosamine transferase (OGT) and O-linked N-acetylglucosaminidase (OGA), respectively. The activity of OGT has been shown to be sensitive to cellular UDP-GlcNAc levels, even changing substrate affinity. Owing to this, and because the concentration of UDP-GlcNAc is related to the metabolism of glucose, amino acids, fatty acids, and nucleotides, O-GlcNAc is often referred to as a nutrient-sensing rheostat. Indeed, OGN is known to regulate several signaling pathways as a result of nutrient levels, such as insulin signaling. Dysregulation of OGN is associated with several disease states such as cancer, diabetes, and neurodegeneration. Improvements in glycomics over the past 10-15 years have significantly increased the known OGT substrate pool, suggesting O-GlcNAc’s involvement in a wide variety of signaling pathways. However, O-GlcNAc’s role at the receptor level has only been identified on a case-by-case basis in known pathways. 
Examining the OGN of the plasma membrane (PM) may better focus our understanding of O-GlcNAc-affected signaling pathways. In this study, PM fractions were isolated from several cell types via ultracentrifugation, followed by purification and MS/MS analysis in several cell lines. This process was repeated with or without OGT/OGA inhibitors, or with increased/decreased glucose levels in the media, to ascertain the importance of OGN. Various pathways are followed up in more detailed studies employing methods to localize OGN at the PM specifically.
Keywords: GlcNAc, nutrient sensitive, post-translational-modification, receptor
Procedia PDF Downloads 112
2123 Apollo Quality Program: The Essential Framework for Implementing Patient Safety
Authors: Anupam Sibal
Abstract:
The Apollo Quality Program (AQP) was launched across the Apollo Group of Hospitals to address four patient safety areas: safety during clinical handovers, medication safety, surgical safety, and the six International Patient Safety Goals (IPSGs) of JCI. A measurable, online quality dashboard covering 20 process and outcome parameters was devised for monthly monitoring. The expected outcomes were also defined and categorized into green, yellow, and red ranges. An audit methodology was also devised to check the processes for the measurable dashboard. Documented clinical handovers were introduced for the first time at many locations for in-house patient transfer, nursing handover, and physician handover. Prototype forms using the SBAR format were made. Patient identifiers, read-back for verbal orders, safety of high-alert medications, site marking and time-outs, and falls risk assessment were introduced for all hospitals irrespective of accreditation status. Surgical Site Infection (SSI) was measured for 30 days postoperatively. All hospitals now tracked the time of administration of antimicrobial prophylaxis before surgery. Situations with a high risk of retention of a foreign body were delineated and precautionary measures instituted. The audit of medications prescribed in discharge summaries was made uniform. Formularies, prescription audits, and other means for the reduction of medication errors were implemented. There was a marked increase in compliance with processes and patient safety outcomes. Compliance with read-back for verbal orders rose from 86.83% in April ’11 to 96.95% in June ’15; with the policy for high-alert medications from 87.83% to 98.82%; with the use of measures to prevent wrong-site, wrong-patient, wrong-procedure surgery from 85.75% to 97.66%; with hand-washing from 69.18% to 92.54%; and with antimicrobial prophylaxis within one hour before incision from 79.43% to 93.46%. 
The percentage of patients excluded from SSI calculation due to lack of follow-up for the requisite time frame decreased from 21.25% to 10.25%. The average AQP scores for all Apollo Hospitals improved from 62 in April ’11 to 87.7 in June ’15.
Keywords: clinical handovers, international patient safety goals, medication safety, surgical safety
Procedia PDF Downloads 257
2122 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initial detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and the island of Mallorca. These inventories contain general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In Situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, Geoserver, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with Geoserver. A preliminary phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. 
At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Eventually, the third level shows all the elements: deposit, source, and blocks, at their real size where possible and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
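For readers unfamiliar with WMS, the kind of request a Geoserver service like the one described here answers can be sketched as follows. The parameters follow the OGC WMS 1.3.0 specification, but the geoserver path, workspace/layer name, and bounding box are hypothetical illustrations, not the project's actual endpoint.

```python
# Sketch of an OGC WMS 1.3.0 GetMap request of the kind a Geoserver-backed
# rockfall inventory could serve. Layer name and path are hypothetical.
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(800, 600), crs="EPSG:25831"):
    """Build a WMS 1.3.0 GetMap URL for one layer and bounding box."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
        "TRANSPARENT": "TRUE",
    }
    return base + "?" + urlencode(params)

# Hypothetical rockfall-inventory layer
url = getmap_url("https://www.rockdb.upc.edu/geoserver/wms",
                 "rockfalls:events", (420000, 4580000, 460000, 4620000))
```

An OpenLayers client such as the one the abstract mentions issues essentially this request for each map tile, which is why the PostGIS views per zoom level map naturally onto scale-dependent WMS layers.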
Procedia PDF Downloads 160
2121 The Effect of Annual Weather and Sowing Date on Different Genotype of Maize (Zea mays L.) in Germination and Yield
Authors: Ákos Tótin
Abstract:
In crop production the most modern hybrids are available to us; therefore, yield and yield stability are determined by the agro-technology. The purpose of the experiment is to adapt modern agrotechnology to the new types of hybrids. The long-term experiment was set up in 2015-2016 on chernozem soil in the Hajdúság (eastern Hungary). The plots were set up at a plant density of 75 thousand ha⁻¹. We examined some widely used hybrids of Hungary. The conducted studies are: germination dynamics, growth dynamics, and the effect of annual weather on the yield. We used three different sowing dates, early, average, and late, and measured how many plants germinated during the germination process. In the experiment, we observed the germination dynamics of 6 hybrids in 4 replications. In each replication, we counted the germinated plants in an area 2 m long and 2 rows wide. Data will be shown as the average of the 6 hybrids and 4 replications. Growth dynamics were measured from a plant height of 10 cm (4-6 leaves). We measured the height of 10 plants at two-week intervals. The yield was measured by a special plot harvester, the Sampo Rosenlew 2010, which weighed the harvested plot and also took a sample from it. We determined the water content of the samples for the water-release dynamics. After that, we calculated the yield (t/ha) of each plot at 14% moisture content to compare them. We evaluated the data using Microsoft Excel 2015. The annual weather in each crop year defines the maize germination dynamics because the amount of heat is determinative for the plants. In a cooler crop year, the weather prolongs germination. In the 2015 crop year, the weather was cold in the beginning, which prolonged germination at the first sowing date, but the second and third sowings germinated faster. In the 2016 crop year, the weather was much more favorable for the plants, so the first sowing germinated faster than in the previous year. 
After that, the weather cooled down; therefore, the second and third sowings germinated more slowly than in the previous year. The statistical data analysis determined that there is a significant difference between the growth dynamics of the early and late sowing dates. In 2015, the first sowing date had the highest yield, the second-highest yield was at the average sowing time, and the late sowing date had the lowest yield.
Keywords: germination, maize, sowing date, yield
Procedia PDF Downloads 231
2120 The Volume–Volatility Relationship Conditional to Market Efficiency
Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese
Abstract:
The relation between stock price volatility and trading volume represents a controversial issue which has received remarkable attention over the past decades. In fact, an extensive literature shows a positive relation between price volatility and trading volume in the financial markets, but the causal relationship which originates such an association is an open question, from both a theoretical and an empirical point of view. In this regard, various models, which can be considered complementary rather than competitive, have been introduced to explain this relationship. They include the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst–Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays the property of a time-changing regularity. Such a model locally behaves as a fractional Brownian motion, in the sense that its local regularity at time t0 (measured by the local Hurst–Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst–Hölder dynamical exponent as a measure of market efficiency. This allows measuring, at any time t, the market's departures from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. 
This approach is applied to financial markets. Using data for the S&P 500 index from 1978 to 2017, we find, on the one hand, that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time. Conversely, it disappears as soon as efficiency is taken into account. In particular, this association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent
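The local-regularity notion this abstract relies on can be stated compactly using a standard definition of mBm (a common textbook formulation, not necessarily the authors' exact one):

```latex
% Harmonizable representation of multifractional Brownian motion with a
% time-varying Hurst--H\"older exponent H(t):
W_{H(t)}(t) = \int_{\mathbb{R}} \frac{e^{\,i t \xi} - 1}{|\xi|^{H(t)+1/2}}\,\widehat{W}(\mathrm{d}\xi),
\qquad H : [0,\infty) \to (0,1).
% Locally around t_0 the process behaves like a fractional Brownian motion
% of parameter H(t_0); the martingale (efficient-market) case corresponds
% to H(t) = 1/2, and departures from efficiency to H(t) \neq 1/2.
```

In this reading, the distance of the estimated H(t) from 1/2 is the pointwise measure of inefficiency that the abstract conditions the volume-volatility relationship on.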
Procedia PDF Downloads 78
2119 Teachers’ Education in Brazil: A Case Study on Students’ Performance
Authors: Priscila A. M. Rodrigues
Abstract:
In Brazil, higher education is usually offered in three parts of the day: in the morning, afternoon, and evening. Students have to decide what part of the day they are going to study in during the application process. Most of the admitted students who choose to study in the evening also work during the day because of their financial conditions. Brazilian evening higher education courses were initially created to meet the demand for teacher training. These teacher-training courses are socially discredited and considered easily accessible in the country, mostly due to the fact that students who enroll in those courses come from very poor basic education. The research analyzed the differences between the social profiles and studying conditions of students of teacher education, especially the training intended for would-be elementary education teachers. An investigation was conducted with these undergraduate students, who were divided into a group of those who study both in the morning and in the afternoon (group 1) and a group of those who study in the evening (group 2). The hypothesis predicted that students in group 1 would perform better than students in group 2. The analysis of training and studying conditions proceeded from the point of view of students and their teachers. Data was collected from surveys, qualitative interviews, field observation, and reports from students. Sociological concepts of habitus, cultural capital, trajectories, and strategies are essential for this study, as is the literature on the quality of higher education. 
The research revealed that there are differences in studying conditions between group 1 and group 2, precisely when it comes to the university atmosphere, that is to say, academic support resources and enrichment activities that promote educational, cultural and social opportunities, such as conferences, events and scholarships of different types. In order to counteract the effects of their poor educational performance, students who generally come from the popular strata require conditions of greater dedication and investment in higher education, which most of them do not have. Despite the considerable difficulties that students in group 2 encounter in their academic experience, the university experience per se brings a gain to the lives of these students, which translates into the expansion of their capital structure – i.e. symbolic, cultural and educational capital – with repercussions on their social trajectory, especially in professional conditions.
Keywords: higher education, higher education students’ performance, quality of higher education, teacher’s education
Procedia PDF Downloads 277
2118 Encoding the Design of the Memorial Park and the Family Network as the Icon of 9/11 in Amy Waldman's The Submission
Authors: Masami Usui
Abstract:
After 9/11, the American literary scene was confronted with new perspectives that enabled both writers and readers to recognize hidden aspects of their political, economic, legal, social, and cultural phenomena. New and challenging multicultural arguments appeared after 9/11, presented through a tension of space related to the event. In Amy Waldman’s The Submission (2011), designing both the memorial park and the family network has a significant meaning in establishing the progress of understanding from multiple perspectives. The intriguing and controversial topic of racism is reflected in The Submission, where one young architect’s blind entry to the competition for the memorial at Ground Zero is nominated, yet he is confronted with strong objections and hostility as soon as he turns out to be a Muslim named Mohammad Khan. This ‘Khan’ issue, immediately enlarged into a controversial social issue on American soil, causes repeated acts of hostility toward Muslim women by ignorant citizens all over America. His idea for the park is to design a new concept tracing the cultural background of the open space. Against his will, his name is identified as the ‘ingredient’ of the networking of the resistant community with his supporters; on the other hand, post-9/11 hysteria and victimization are presented in such family associations as the Angry Family Members and Grieving Family Members. These rapidly expanding networks, whether political or not, constructed through the internet, embody contemporary societal connection and representation. The contemporary quest for the significance of human relationships is recognized as a quest for global peace. Designing both the memorial park and the communication networks strengthens the process of facing shared conflicts and healing the survivors’ trauma.
The tension between the idea and networking of the Garden for the memorial site and the collapse of Ground Zero signifies the double mission of the site: to establish a space to ease the wounded and to remember the catastrophe. Reading the design of these icons of 9/11 in The Submission means decoding the myth of globalization and its representations in this century.
Keywords: American literature, cultural studies, globalization, literature of catastrophe
Procedia PDF Downloads 533
2117 Development of an Automatic Calibration Framework for Hydrologic Modelling Using Approximate Bayesian Computation
Authors: A. Chowdhury, P. Egodawatta, J. M. McGree, A. Goonetilleke
Abstract:
Hydrologic models are increasingly used as tools to predict stormwater quantity and quality from urban catchments. However, due to a range of practical issues, most models produce gross errors in simulating complex hydraulic and hydrologic systems, and difficulty in finding a robust approach for model calibration is one of the main issues. Though automatic calibration techniques are available, they are rarely used in common commercial hydraulic and hydrologic modelling software, e.g. MIKE URBAN. This is partly due to the need for a large number of parameters and large datasets in the calibration process. To overcome this practical issue, a framework for automatic calibration of a hydrologic model was developed on the R platform and is presented in this paper. The model was developed based on the time-area conceptualization. Four calibration parameters, namely initial loss, reduction factor, time of concentration and time-lag, were considered as the primary set of parameters. Using these parameters, automatic calibration was performed using Approximate Bayesian Computation (ABC). ABC is a simulation-based technique for performing Bayesian inference when the likelihood is intractable or computationally expensive to compute. To test its performance and usefulness, the technique was used to simulate three small catchments in the Gold Coast. For comparison, simulation outcomes for the same three catchments obtained using the commercial modelling software MIKE URBAN were used. The graphical comparison shows strong agreement of the MIKE URBAN results with the upper and lower 95% credible intervals of the posterior predictions obtained via ABC. Statistical validation of the posterior runoff predictions using the coefficient of determination (CD), root mean square error (RMSE) and maximum error (ME) was found reasonable for the three study catchments.
The main benefit of using ABC over MIKE URBAN is that ABC provides a posterior distribution for the runoff flow prediction, so the associated uncertainty in predictions can be obtained; in contrast, MIKE URBAN provides only a point estimate. Based on the results of the analysis, the developed ABC framework performs well for automatic calibration.
Keywords: automatic calibration framework, approximate Bayesian computation, hydrologic and hydraulic modelling, MIKE URBAN software, R platform
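The ABC step described above can be illustrated with a minimal rejection-sampling sketch. The toy "runoff model", the uniform prior, and the RMSE tolerance below are all hypothetical stand-ins (the paper calibrates four parameters of a time-area model on the R platform); the sketch only shows how ABC turns simulations into a posterior with credible intervals:

```python
import numpy as np

rng = np.random.default_rng(42)
rain = np.linspace(0.0, 10.0, 200)

def simulate(reduction_factor):
    """Toy stand-in for a runoff model: scaled rainfall plus noise."""
    return reduction_factor * rain + rng.normal(0.0, 0.5, rain.size)

observed = simulate(0.7)  # pretend field data, true factor = 0.7

def abc_rejection(n_draws=5000, eps=0.9):
    """ABC rejection: sample the prior, keep draws whose simulated
    output lies within a tolerance (eps) of the observations."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.0, 1.0)  # prior on the parameter
        rmse = np.sqrt(np.mean((simulate(theta) - observed) ** 2))
        if rmse < eps:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
lo, hi = np.percentile(posterior, [2.5, 97.5])  # 95% credible interval
```

The accepted draws approximate the posterior, which is exactly what gives the upper and lower 95% credible intervals that the abstract compares against the MIKE URBAN point estimate.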
Procedia PDF Downloads 309
2116 Evaluation of Arsenic Removal in Soils Contaminated by the Phytoremediation Technique
Authors: V. Ibujes, A. Guevara, P. Barreto
Abstract:
The concentration of arsenic represents a serious threat to human health: it is a bioaccumulative toxic element and is transferred through the food chain. In Ecuador, values of 0.0423 mg/kg As are registered in potatoes grown on the slopes of the Tungurahua volcano. The increase of arsenic contamination in Ecuador is mainly due to mining activity, since the process of gold extraction generates toxic tailings with mercury. In the province of Azuay, due to mining activity, the soil reaches concentrations of 2,500 to 6,420 mg/kg As, whereas in the province of Tungurahua arsenic concentrations of 6.9 to 198.7 mg/kg are found due to volcanic eruptions. Given this contamination, the present investigation is directed at the remediation of soils in the provinces of Azuay and Tungurahua by the phytoremediation technique and at the definition of an extraction methodology based on analysis of arsenic in the soil-plant system. The methodology consists of selecting two types of plants that show the best arsenic removal capacity in synthetic 60 μM As solutions, the lowest mortality percentage and resistance in hydroponics. The arsenic concentration in each plant was obtained by taking 10 ml aliquots and analyzing them with ICP-OES (inductively coupled plasma optical emission spectrometry) equipment. Soils were contaminated with synthetic arsenic solutions by the capillarity method to achieve arsenic concentrations of 13 and 15 mg/kg. Subsequently, the two selected plants were evaluated for their ability to reduce the concentration of arsenic in the soils over 7 weeks. The global variance for soil types was obtained with the InfoStat program. To measure changes in arsenic concentration in the soil-plant system, the Rhizo and Wenzel arsenic extraction methodologies were used, with subsequent analysis by ICP-OES (Optima 8000, Perkin Elmer).
As a result, the selected plants were bluegrass and llanten, due to their high arsenic removal percentages of 55% and 67% and low mortality rates of 9% and 8%, respectively. In conclusion, the Azuay soil with an initial concentration of 13 mg/kg As reached concentrations of 11.49 and 11.04 mg/kg As for bluegrass and llanten, respectively, and from the initial concentration of 15 mg/kg As it reached 11.79 and 11.10 mg/kg As after 7 weeks. The Tungurahua soil with an initial concentration of 13 mg/kg As reached concentrations of 11.56 and 12.16 mg/kg As for bluegrass and llanten, respectively, and from the initial concentration of 15 mg/kg As it reached 11.97 and 12.27 mg/kg As after 7 weeks. The best arsenic extraction methodology for the soil-plant system is Wenzel's.
Keywords: bluegrass, llanten, phytoremediation, soil of Azuay, soil of Tungurahua, synthetic arsenic solution
Procedia PDF Downloads 103
2115 Pediatric Hearing Aid Use: A Study Based on Data Logging Information
Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham
Abstract:
Introduction: Hearing loss (HL) is one of the most common disorders present at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted based on the assumption that, with early identification of HL, children will have access to optimal amplification and intervention at younger ages, thereby taking advantage of the brain’s maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child’s development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children diagnosed with bilateral HL before 72 months of age between 2003 and 2015 in a pediatric audiology clinic, who had at least two hearing aid follow-up sessions with available data logging information, were included in the study. Data from each audiology session (age of the child at the session and average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age at HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the results for the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of these first 20 cases, the median hours of use in the first follow-up session after the hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h; for the left ear, the median is 4.4 hours and the IQR is 9.7 h.
In the first session, 47% of the children used their hearing aids ≤5 hours a day, 12% used them between 5 and 10 hours, and 22% used them ≥10 hours a day. However, these children showed increased use by the third follow-up session, with a median (IQR) of 9.1 (2.5) hours for the right ear and 8.2 (5.6) hours for the left ear. By the third follow-up session, 14% of children used hearing aids ≤5 hours, while 38% used them ≥10 hours. Based on these primary results, factors like age and level of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess actual hours of HA use provides an opportunity to examine a) the challenges of families of young children with HAs and b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child’s hearing potential.
Keywords: hearing loss, hearing aid, data logging, hours of use
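The median (IQR) summaries above come directly from the logged daily-use hours. As a hedged illustration of that summary step (the hours below are invented, not the study's records), the computation is:

```python
import numpy as np

# Hypothetical daily-use hours from data logging for one ear at one
# follow-up session (not the study's actual records).
hours = np.array([0.5, 1.2, 2.0, 3.9, 4.1, 6.8, 9.5, 11.0, 12.3, 13.7])

median = np.median(hours)                 # typical hours of use per day
q1, q3 = np.percentile(hours, [25, 75])
iqr = q3 - q1  # interquartile range, the spread measure the study reports
```

Reporting the IQR alongside the median is what lets the study show both typical use and how widely families vary, which a mean alone would hide.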
Procedia PDF Downloads 230
2114 Nanobiosensor System for Aptamer Based Pathogen Detection in Environmental Waters
Authors: Nimet Yildirim Tirgil, Ahmed Busnaina, April Z. Gu
Abstract:
Environmental waters are monitored worldwide to protect people from infectious diseases primarily caused by enteric pathogens. Escherichia coli (E. coli) has long been a good indicator of potential enteric pathogens in waters; thus, a rapid and simple detection method for E. coli is very important for predicting pathogen contamination. In this study, to the best of our knowledge for the first time, we developed a rapid, direct and reusable SWCNT (single-walled carbon nanotube) based biosensor system for sensitive and selective E. coli detection in water samples. We used a novel, newly developed flexible biosensor device fabricated by a high-rate nanoscale offset printing process using directed assembly and transfer of SWCNTs. Through simple directed assembly and non-covalent functionalization, an aptamer-based SWCNT biosensor system was designed, the aptamer being the biorecognition element that specifically distinguishes the E. coli O157:H7 strain from other pathogens, and was further evaluated for environmental applications with simple and cost-effective steps. The two gold electrode terminals and the SWCNT bridge between them allow continuous resistance-response monitoring for E. coli detection. The detection procedure is based on a competitive mode of detection: a known concentration of aptamer and E. coli cells were mixed and, after a certain time, filtered, and the remaining free aptamers were injected into the system. Through hybridization of the free aptamers with the probe DNA immobilized on the SWCNT surface (complementary DNA for the E. coli aptamer), we can monitor the resistance difference, which is proportional to the amount of E. coli. Thus, we can detect E. coli without injecting it directly onto the sensing surface, protecting the electrode surface from aggregation of target bacteria or other pollutants that may come with real wastewater samples. After optimization experiments, the linear detection range was determined to be from 2 cfu/ml to 10⁵ cfu/ml, with an R² value higher than 0.98.
The system was regenerated successfully with 5% SDS solution over 100 times without any significant deterioration of sensor performance. The developed system had high specificity towards E. coli (less than 20% signal with other pathogens), and it could be applied to real water samples with 86 to 101% recovery and CV values of 3 to 18% (n = 3).
Keywords: aptamer, E. coli, environmental detection, nanobiosensor, SWCNTs
Procedia PDF Downloads 197
2113 The Effectiveness of Virtual Reality Training for Improving Interpersonal Communication Skills: An Experimental Study
Authors: Twinkle Sara Joseph
Abstract:
Virtual reality technology has emerged as a revolutionary power that can transform the education sector in many ways. VR environments can break the boundaries of the traditional classroom setting by immersing students in realistic 3D environments where they can interact with virtual characters without fearing being judged. Communication skills are essential for every profession, and studies suggest the importance of implementing basic-level communication courses at both the school and graduate levels; interpersonal communication in particular gains prominence because it is required in every profession. Traditional means of training have limitations for trainees as well as participants: the fear of being judged, audience interaction, and other factors can affect a participant's performance in a traditional classroom setting. Virtual reality offers its users a unique opportunity to participate in training without the boundaries that prevent participants from performing in front of an audience. Specialised applications designed for VR headsets offer a range of training and exercises for participants without any time, space, or audience limitations. The present study aims to measure the effectiveness of VR training in improving interpersonal communication skills among students. The study uses a mixed-method approach, in which a pre- and post-test will be designed to measure effectiveness. A preliminary selection process involving a questionnaire and a screening test will identify suitable candidates based on their current communication proficiency levels. Participants will undergo specialised training through the VR application Virtual Speech, tailored for interpersonal communication and public speaking and designed to operate without the traditional constraints of time, space, or audience.
The training’s impact will subsequently be measured through situational exercises that engage the participants in interpersonal communication tasks, thereby assessing the improvement in their skills. The significance of this study lies in its potential to provide empirical evidence supporting VR technology’s role in enhancing communication skills, thereby offering valuable insights for integrating VR-based methodologies into educational frameworks to prepare students more effectively for their professional futures.
Keywords: virtual reality, VR training, interpersonal communication, communication skills, 3D environments
Procedia PDF Downloads 53
2112 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading
Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein
Abstract:
Articular cartilage is a fluid-swollen tissue of the synovial joints that functions by providing a lubricated surface for articulation and by facilitating load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix; any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. Assessment of articular cartilage in the early stages of the degenerative process is therefore important to prevent or reduce further joint damage, with its associated socio-economic impact, and there has been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. This is because the response of articular cartilage to compressive loading is significantly depth-dependent: the superficial zone and the underlying matrix respond differently to deformation. In addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer is different in normal and proteoglycan-depleted tissue. To establish the viability of this hypothesis, samples of visually intact and artificially proteoglycan-depleted bovine cartilage were subjected to compression at a constant rate to 30 percent strain using a ring-shaped indenter with an integrated ultrasound probe, and then unloaded. The response of the articular surface, which was indirectly loaded, was monitored using ultrasound during loading/unloading (deformation/recovery). It was observed that the rate of the cartilage surface response to loading/unloading differed between normal and PG-depleted cartilage samples.
Principal component analysis was performed to identify the capability of the cartilage surface response to loading/unloading to distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, while there was a clear distinction between them during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound
Procedia PDF Downloads 192
2111 Bio-Remediation of Lead-Contaminated Water Using Adsorbent Derived from Papaya Peel
Authors: Sahar Abbaszadeh, Sharifah Rafidah Wan Alwi, Colin Webb, Nahid Ghasemi, Ida Idayu Muhamad
Abstract:
Toxic heavy metal discharges into the environment due to rapid industrialization are a serious pollution problem that has drawn global attention to their adverse impacts on both the structure of ecological systems and human health. Lead, a toxic element that bioaccumulates through the food chain, regularly enters water bodies from the discharges of industries such as plating, mining, battery manufacture and paint manufacture. The application of conventional methods to reduce and remove Pb(II) ions from wastewater is often restricted by technical and economic constraints. Therefore, the use of various agro-wastes as low-cost bioadsorbents is attractive, since they are abundantly available and cheap. In this study, activated carbon of papaya peel (AC-PP), a locally available agricultural waste, was employed to evaluate its Pb(II) uptake capacity from single-solute solutions in sets of batch-mode experiments. To assess the surface characteristics of the adsorbent, scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis and Fourier transform infrared spectroscopy (FT-IR) were utilized. The amount of Pb(II) removed was determined by atomic absorption spectrometry (AAS). The effects of pH, contact time, initial Pb(II) concentration and adsorbent dosage were investigated. A pH value of 5 was observed to be the optimum. The optimum initial concentration of Pb(II) in solution for AC-PP was found to be 200 mg/l, where the amount of Pb(II) removed was 36.42 mg/g. At an agitation time of 2 h, the adsorption process using a 100 mg dosage of AC-PP reached equilibrium. The experimental results exhibit the high capability and metal affinity of the modified papaya peel waste, with a removal efficiency of 93.22%. The evaluation results show that the equilibrium adsorption of Pb(II) was best expressed by the Freundlich isotherm model (R² > 0.93).
The experimental results confirmed that AC-PP can potentially be employed as an alternative adsorbent for Pb(II) uptake from industrial wastewater in the design of an environmentally friendly yet economical wastewater treatment process.
Keywords: activated carbon, bioadsorption, lead removal, papaya peel, wastewater treatment
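The Freundlich fit reported above (qe = Kf · Ce^(1/n)) is conventionally obtained by log-linearizing the isotherm. The sketch below uses hypothetical equilibrium data (the study's raw Ce/qe pairs are not given) to show the standard fitting procedure and the R² check:

```python
import numpy as np

# Hypothetical equilibrium data: Ce = equilibrium concentration (mg/l),
# qe = amount adsorbed (mg/g). Not the study's actual measurements.
Ce = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
qe = np.array([7.4, 10.3, 14.1, 19.6, 25.0, 33.8])

# Freundlich isotherm: qe = Kf * Ce**(1/n)
# Log form:            log(qe) = log(Kf) + (1/n) * log(Ce)
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
n_inv = slope            # 1/n, adsorption intensity
Kf = np.exp(intercept)   # Freundlich capacity constant

# Goodness of fit (R^2) on the log-linear form
pred = intercept + slope * np.log(Ce)
ss_res = np.sum((np.log(qe) - pred) ** 2)
ss_tot = np.sum((np.log(qe) - np.log(qe).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A 1/n value between 0 and 1 indicates favourable adsorption, which is the qualitative conclusion an R² above 0.93 for this model supports.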
Procedia PDF Downloads 286
2110 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to uncertainties in the initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models; through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Second, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that have a geological trend similar to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. The newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable prediction of future performance with reduced uncertainties.
We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that share a channel trend with the reference in the lowered-dimension space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
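The PCA → MDS → SVM pipeline described above can be sketched with scikit-learn (an assumption; the paper does not name its tooling). The synthetic "permeability fields" below are hypothetical stand-ins for the 100 channel-reservoir models, with two class labels playing the role of the low/high-WOPR-error split:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_models, n_cells = 100, 400

# Hypothetical stand-in permeability fields: two different channel trends
base_a = np.sin(np.linspace(0, 3 * np.pi, n_cells))
base_b = np.cos(np.linspace(0, 3 * np.pi, n_cells))
X = np.vstack([b + 0.3 * rng.standard_normal(n_cells)
               for b in [base_a] * 50 + [base_b] * 50])
y = np.array([0] * 50 + [1] * 50)  # proxy for low/high WOPR error

# 1) PCA: keep the components with eigenvalues of large magnitude
Z = PCA(n_components=10).fit_transform(X)
# 2) MDS: project onto a 2-D plane based on Euclidean distances
P = MDS(n_components=2, random_state=0).fit_transform(Z)
# 3) SVM: train on 20% of the models, classify the remaining 80%
idx = rng.permutation(n_models)
train, test = idx[:20], idx[20:]
clf = SVC(kernel="rbf").fit(P[train], y[train])
acc = clf.score(P[test], y[test])
```

Classifying in the MDS plane rather than on the raw fields is the key design choice: it keeps the SVM tractable while preserving the geological distances that separate channel trends.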
Procedia PDF Downloads 160
2109 Embedding Looping Concept into Corporate CSR Strategy for Sustainable Growth: An Exploratory Study
Authors: Vani Tanggamani, Azlan Amran
Abstract:
The issues of Corporate Social Responsibility (CSR) have been extended from development economics to corporate business in recent years. Research into issues related to CSR is deemed to make a higher impact, as CSR encourages long-term economic and business success without neglecting social and environmental risks, obligations and opportunities. CSR is therefore a key matter for any organisation aiming for long-term sustainability, since such a business incorporates principles of social responsibility into each of its decisions. This paper presents a theoretical proposition based on stakeholder theory, from the organisational perspective, as a foundation for better CSR practices. Its primary subject is to explore how the looping concept can be effectively embedded into corporate CSR strategy to foster sustainable long-term growth. In general, a loop is a structure or process whose end is connected to its beginning, whereas the narrow view of a loop in the business field means plan, do, check, and improve. In this sense, the looping concept is a blend of balance and agility with the awareness to know when to apply which. Organisations can introduce similar pull mechanisms by formulating CSR strategies in order to perform the best plan of actions in real time, with the chance to change those actions, pushing them toward well-organized planning and successful performance. Through the analysis of an exploratory study, this paper demonstrates that approaching the looping concept in the context of corporate CSR strategy is an important source of new ideas to propel CSR practices, by deepening the basic understanding of a concept that is increasingly necessary to attract and retain business stakeholders, including employees, customers, suppliers and other communities, for long-term business survival.
This paper contributes to the literature by providing a fundamental explanation of how organisations will experience less financial and reputational risk if the looping-concept logic is integrated into the core business CSR strategy. The value of the paper rests in its treatment of the looping concept as a corporate CSR strategy, demonstrating a "looping concept implementation framework for CSR" that could further foster business sustainability and help organisations move along the path from laggards to leaders.
Keywords: corporate social responsibility, looping concept, stakeholder theory, sustainable growth
Procedia PDF Downloads 401
2108 Creative Practice and Consciousness in Juju Music: A Nigerian Musical and Cultural Perspective
Authors: Olupemi E. Oludare
Abstract:
This paper investigates the creative practice engaged in Juju music, a Nigerian neo-traditional genre of the Yoruba, and its influence on the consciousness of societal praxis. It takes a musical and cultural perspective, treating the music as a representational index of how the people’s religious, social, educational, and political consciousness is expressed. The study adopts a historical-cum-descriptive design in its methodology, tracing the historical development of Juju music and the appropriation of musical and cultural materials in its creative process, and offering a descriptive analysis of its musical practice, in order to substantiate the role and function of Juju music and its musicians in the political, philosophical, and social consciousness of Nigeria’s pre- and post-independence epochs. Data were collected through oral interviews with selected Juju practitioners, stakeholders, and enthusiasts; the study also employed the discography of Juju musicians. This paper discusses musical factors such as form, melodic and rhythmic patterns, and thematic materials, while highlighting cultural factors such as linguistic elements, with textual analysis, as a conscious avenue of expression. The study revealed that Juju musicians composed their music by engaging both indigenous and foreign musical materials as a means of creative practice for musical entertainment, while expressing the people’s consciousness of their beliefs, values, and socio-political issues, the music hence functioning as a vehicle for social commentary. The popularization and commercialization of Juju music brought its musicians national and international accolades, subsequently attracting contributions from contemporary musicians, which led to innovations of new brands, such as ‘Afro-Juju’, ‘Gospel-Juju’, and ‘Hip-Hop-Juju’, albeit retaining the basic musical elements of their progenitor, as conscious music for socio-cultural functions.
This study concludes that Juju music and its musicians remain germane to the musical scene of the nation’s social, educational, and political terrain, especially in the current Nigerian democratic climate. This paper recommends the promotion and patronage of Juju music in its original form, to prevent its decline in current times, since it serves as an enrichment of national identity both in Nigeria and internationally.
Keywords: appropriation, consciousness, creative practice, national identity, neo-traditional
Procedia PDF Downloads 426
2107 Geometric, Energetic and Topological Analysis of (Ethanol)₉-Water Heterodecamers
Authors: Jennifer Cuellar, Angie L. Parada, Kevin N. S. Chacon, Sol M. Mejia
Abstract:
The purification of bio-ethanol through distillation methods is an unresolved issue in the biofuel industry because of the formation of the ethanol-water azeotrope, which increases the number of steps in the purification process and subsequently the production costs. Therefore, understanding the nature of the mixture at the molecular level could provide new insights for improving current methods and/or designing new and more efficient purification methods. For that reason, the present study focuses on the evaluation and analysis of (ethanol)₉-water heterodecamers, as the systems with the minimum molecular proportion that represents the azeotropic concentration (96 %m/m in ethanol). The computational modelling was carried out with B3LYP-D3/6-311++G(d,p) in Gaussian 09. Initial explorations of the potential energy surface were done through two methods, simulated annealing runs and molecular dynamics trajectories, besides intuitive structures obtained from smaller (ethanol)ₙ-water heteroclusters, n = 7, 8 and 9. The energetic ordering of the seven stable heterodecamers identifies the most stable one (Hdec-1) as a structure forming a bicyclic geometry through O-H---O hydrogen bonds (HBs), in which the water is a double proton-donor molecule. Hdec-1 combines 1 water molecule with the same quantity of every ethanol conformer, that is, 3 trans, 3 gauche-1 and 3 gauche-2; its abundance is 89%, and its decamerization energy is -80.4 kcal/mol, i.e. 13 kcal/mol more stable than the least stable heterodecamer. Besides, as a way to understand why methanol does not form an azeotropic mixture with water, analogous systems ((ethanol)₁₀, (methanol)₁₀, and (methanol)₉-water) were optimized. Topological analysis of the electron density reveals that Hdec-1 forms 33 weak interactions in total: 11 O-H---O, 8 C-H---O and 2 C-H---C hydrogen bonds, and 12 H---H interactions.
The strength and abundance of the most unconventional interactions (H---H, C-H---O and C-H---C) seem to explain the preference of ethanol for forming heteroclusters instead of clusters. Besides, the O-H---O HBs present a significant covalent character according to topological parameters such as the Laplacian of the electron density and the ratio between the potential and kinetic energy densities evaluated at the bond critical points, which take negative values and values between 1 and 2, respectively.
Keywords: ADMP, DFT, ethanol-water azeotrope, Grimme dispersion correction, simulated annealing, weak interactions
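The decamerization energy quoted above is the cluster energy minus the sum of the isolated-monomer energies. A minimal sketch of that arithmetic, with placeholder monomer energies (the hartree values below are illustrative, not the paper's computed values):

```python
# Decamerization (binding) energy: E_dec = E(cluster) - sum of monomer energies.
# All monomer/cluster energies here are illustrative placeholders.
HARTREE_TO_KCAL = 627.509  # hartree -> kcal/mol conversion

def decamerization_energy(e_cluster, e_monomers):
    """Binding energy of a cluster relative to its isolated monomers, in kcal/mol."""
    return (e_cluster - sum(e_monomers)) * HARTREE_TO_KCAL

# Illustrative (ethanol)9-water heterodecamer: 9 ethanols + 1 water
e_ethanol = -155.0   # hartree, placeholder
e_water = -76.4      # hartree, placeholder
e_cluster = 9 * e_ethanol + e_water - 0.128  # placeholder stabilization of ~0.128 hartree

print(round(decamerization_energy(e_cluster, [e_ethanol] * 9 + [e_water]), 1))  # → -80.3
```

A stabilization of about 0.128 hartree corresponds to the roughly -80 kcal/mol scale reported in the abstract; a more negative value means a more strongly bound heterodecamer.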
Procedia PDF Downloads 103
2106 Electrospun Fibre Networks Loaded with Hydroxyapatite and Barium Titanate as Smart Scaffolds for Tissue Regeneration
Authors: C. Busuioc, I. Stancu, A. Nicoara, A. Zamfirescu, A. Evanghelidis
Abstract:
The field of tissue engineering has expanded its potential due to the use of composite biomaterials belonging to increasingly complex systems, leading to bone substitutes with properties that are continuously improving to meet the patient's specific needs. Furthermore, the development of biomaterials based on ceramic and polymeric phases is an unlimited resource for future scientific research, with the final aim of restoring the original tissue functionality. Thus, in the first stage, composite scaffolds based on polycaprolactone (PCL) or polylactic acid (PLA) and inorganic powders were prepared by employing the electrospinning technique. The targeted powders were: commercial and laboratory-synthesized hydroxyapatite (HAp), as well as barium titanate (BT). By controlling the concentration of the powder within the precursor solution, together with the processing parameters, different types of three-dimensional architectures were achieved. In the second stage, both the mineral powders and the hybrid composites were investigated in terms of composition, crystalline structure, and microstructure so as to demonstrate their suitability for tissue engineering applications. Regarding the scaffolds, these were proven to be homogeneous over large areas and loaded with mineral particles in different proportions. The biological assays demonstrated that the addition of inorganic powders leads to modified responses in the presence of simulated body fluid (SBF) or cell cultures. Through SBF immersion, biodegradability coupled with bioactivity was highlighted, with fiber fragmentation and surface degradation, as well as apatite layer formation, within the testing period. Moreover, the final composites represent supports accepted by the cells, favoring implant integration.
In conclusion, the proposed fibrous materials based on bioresorbable polymers and mineral powders, produced by the electrospinning technique, represent candidates with considerable potential in the field of tissue engineering. Future improvements can be attained by optimizing the synthesis process or by the simultaneous incorporation of multiple inorganic phases with well-defined biological action in order to fabricate multifunctional composites.
Keywords: barium titanate, electrospinning, fibre networks, hydroxyapatite, smart scaffolds
Procedia PDF Downloads 111
2105 Synthesis of High-Pressure Performance Adsorbent from Coconut Shells Polyetheretherketone for Methane Adsorption
Authors: Umar Hayatu Sidik
Abstract:
The use of liquid petroleum-based fuels (petrol and diesel) for transportation causes emissions of greenhouse gases (GHGs), while natural gas (NG) reduces these emissions. At present, compression and liquefaction are the most mature technologies used for transportation systems. For transportation use, compression requires high pressure (200–300 bar), while liquefaction is impractical. A relatively low pressure of 30–40 bar is achievable by adsorbed natural gas (ANG), which can store nearly as much gas as compressed natural gas (CNG). In this study, adsorbents for high-pressure adsorption of methane (CH₄) were prepared from coconut shells and polyetheretherketone (PEEK) using potassium hydroxide (KOH) and microwave-assisted activation. Design Expert software version 7.1.6 was used for optimization and prediction of the preparation conditions of the adsorbents for CH₄ adsorption. The effects of microwave power, activation time and quantity of PEEK on the adsorbents' performance toward CH₄ adsorption were investigated. The adsorbents were characterized by Fourier transform infrared spectroscopy (FTIR), thermogravimetric (TG) and derivative thermogravimetric (DTG) analysis, and scanning electron microscopy (SEM). The CH₄ adsorption capacities of the adsorbents were determined using the volumetric method at pressures of 5, 17, and 35 bar, at ambient temperature and at 5 °C. Isotherm and kinetics models were used to validate the experimental results. The optimum preparation conditions were found to be 15 wt% PEEK, 3 minutes of activation time and 300 W of microwave power. The highest CH₄ uptake of 9.7045 mmol CH₄ adsorbed/g adsorbent was recorded by M33P15 (300 W microwave power, 3 min activation time and 15 wt% PEEK) among the sorbents, at ambient temperature and 35 bar. The CH₄ equilibrium data are well correlated with the Sips, Toth, Freundlich and Langmuir models.
The isotherm study revealed that the Sips isotherm has the best fit, while the kinetics studies revealed that the pseudo-second-order kinetic model best describes the adsorption process. In all scenarios studied, a decrease in temperature led to an increase in adsorption. The adsorbent (M33P15) maintained its stability even after seven adsorption/desorption cycles. These findings reveal the potential of coconut shell-PEEK as a CH₄ adsorbent.
Keywords: adsorption, desorption, activated carbon, coconut shells, polyetheretherketone
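The Sips isotherm named above combines the Langmuir and Freundlich forms, q = q_m (b·p)^n / (1 + (b·p)^n). A minimal fitting sketch using SciPy with illustrative pressure/uptake data (the numbers below are stand-ins, not the paper's measurements):

```python
# Hedged sketch: least-squares fit of the Sips isotherm to illustrative
# CH4 uptake data; q_m, b and n are the fitted parameters.
import numpy as np
from scipy.optimize import curve_fit

def sips(p, qm, b, n):
    """Sips isotherm: q = qm * (b*p)**n / (1 + (b*p)**n)."""
    return qm * (b * p) ** n / (1.0 + (b * p) ** n)

p = np.array([1.0, 5.0, 10.0, 17.0, 25.0, 35.0])  # pressure, bar (illustrative)
q = np.array([1.1, 3.9, 6.0, 7.8, 8.9, 9.7])      # uptake, mmol/g (illustrative)

popt, _ = curve_fit(sips, p, q, p0=[10.0, 0.1, 1.0], maxfev=10000)
qm, b, n = popt
print(f"qm={qm:.2f} mmol/g, b={b:.3f} 1/bar, n={n:.2f}")
```

Goodness of fit for competing isotherms (Toth, Freundlich, Langmuir) would be compared via the residuals or R² of each fitted model, which is how "best fit" is typically judged in studies like this one.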
Procedia PDF Downloads 67
2104 Identifying the Hidden Curriculum Components in the Nursing Education
Authors: Alice Khachian, Shoaleh Bigdeli, Azita Shoghie, Leili Borimnejad
Abstract:
Background and aim: The hidden curriculum is crucial in nursing education and can determine professionalism and professional competence. It has a significant effect on students' moral performance in relation to patients. The present study was conducted with the aim of identifying the hidden curriculum components in a nursing and midwifery faculty. Methodology: The ethnographic study was conducted over two years using the Spradley method in one of the nursing schools located in Tehran. In this focused ethnographic research, the approach of Lincoln and Guba, i.e., transferability, confirmability, and dependability, was used. To increase the validity of the data, they were collected from different sources, such as participatory observation, formal and informal interviews, and document review. Two hundred days of participatory observation, fifty informal interviews, and fifteen formal interviews, drawing on the maximum opportunities and conditions available to obtain multiple and multilateral information, added to the validity of the data. Due to the COVID situation, some interviews were conducted virtually, and the activity of professors and students in the virtual space was also monitored. Findings: The components of the hidden curriculum of the faculty are: the atmosphere (physical environment, organizational structure, rules and regulations, hospital environment), the interactions between actors, and teaching-learning activities, which ultimately revealed "a disconnection between goals, speech, behavior, and results." Conclusion: The atmosphere and the various actors and activities mutually affect the process of student development. Students have the most contact first with their peers, which leads to the most learning, and second with their teachers. Clinicians who have close, person-to-person contact with students can have very important effects on them.
Students who meet capable and satisfied professors on their way become interested in their field and hopeful about their future by following the mentorship of these professors. On the other hand, weak and dissatisfied professors lead students to feel abandoned; by forming a colony of peers with different backgrounds, a group of students' personalities become distorted and they move away from family values, which necessitates a change in some cultural practices at the faculty level.
Keywords: hidden curriculum, nursing education, ethnography, nursing
Procedia PDF Downloads 109
2103 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
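The evaluation described above — comparing AUC, sensitivity and specificity with and without data quality filtering — can be sketched as follows. This is an illustrative stand-in, not the study's pipeline: synthetic records replace the citizen-science data, and scikit-learn's LogisticRegression replaces the Maxent algorithm.

```python
# Hedged sketch: effect of data-quality filtering on presence/absence
# model performance (AUC, sensitivity, specificity). All data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)

def evaluate(X, y):
    """Fit a simple classifier and return (AUC, sensitivity, specificity)."""
    model = LogisticRegression().fit(X, y)
    prob = model.predict_proba(X)[:, 1]
    auc = roc_auc_score(y, prob)
    tn, fp, fn, tp = confusion_matrix(y, prob > 0.5).ravel()
    return auc, tp / (tp + fn), tn / (tn + fp)

# Synthetic environmental covariates and presence labels
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
quality = rng.uniform(size=500)  # stand-in for a record-quality attribute

auc_all, sens_all, spec_all = evaluate(X, y)        # unfiltered
keep = quality > 0.3                                 # drop lowest-quality records
auc_f, sens_f, spec_f = evaluate(X[keep], y[keep])   # filtered (smaller sample)
print(f"unfiltered AUC={auc_all:.2f}, filtered AUC={auc_f:.2f}")
```

As in the study, the filtered run trades a smaller sample size for (hopefully) cleaner records; controlling for sample size, as the authors do, separates those two effects.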
Procedia PDF Downloads 203
2102 Application of a Theoretical Framework as a Context for a Travel Behavior Change Policy Intervention
Authors: F. Moghtaderi, M. Burke, J. Troelsen
Abstract:
There has been a significant decline in active travel and a massive increase in the use of car-dependent travel modes in many countries during the past two decades. Evident risks to people's physical and mental health follow from this increased use of motorized travel modes; these problems range from overweight and obesity to increasing air pollution. In response to these rising concerns, local councils and other interested organizations around the world have introduced a variety of initiatives to reduce the dominance of cars for daily journeys. However, the nature of these interventions, which concern human behavior, introduces considerable complexity. People's travel behavior, and changing this behavior, has two different aspects. One is people's attitudes and perceptions toward sustainable and healthy modes of travel and toward motorized travel modes (especially private car use). The other relates to people's behavior change processes. There is no comprehensive model to guide policy interventions and increase their likelihood of success. A comprehensive theoretical framework is required to facilitate and guide the processes of data collection and analysis and to derive the best possible guidelines for policy makers. Regarding this gap in travel behavior change research, this paper attempts to identify and suggest a multidimensional framework to facilitate the planning of interventions. A structured mixed method is suggested to expand the scope and improve the analytic power of the results, given the complexity of human behavior. In order to understand people's attitudes, a theory with a focus on people's attitudes towards a particular travel behavior was needed. The literature around the theory of planned behavior (TPB) was the most useful, as it has been proven to be a good predictor of behavior change.
Another aspect of the research relates to people's decision-making processes and to exploring guidelines for further interventions. Therefore, a theory was needed to facilitate and direct the design of interventions. The concept of the transtheoretical model of behavior change (TTM) was used to derive a set of useful guidelines for further interventions, with the aim of increasing active travel and sustainable modes of travel. Consequently, a combination of these two theories (TTM and TPB) is presented as an appropriate framework for identifying and designing travel behavior change interventions.
Keywords: behavior change theories, theoretical framework, travel behavior change interventions, urban research
Procedia PDF Downloads 373