Search results for: revolution per minute
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 826

136 [Keynote Talk]: The Intoxicated Eyewitness: Effect of Alcohol Consumption on Identification Accuracy in Lineup

Authors: Vikas S. Minchekar

Abstract:

The eyewitness is a crucial source of evidence in the criminal justice system. However, relying on the recollection of an eyewitness, especially an intoxicated one, is not always judicious and might lead to serious consequences. Alcohol-related crimes and criminal incidents in bars, nightclubs, and restaurants are increasing rapidly, and tackling such cases is very complicated for investigating officers. Because the people involved in these incidents are impaired by alcohol consumption, their ability to identify the suspects or recall the events is affected. The effects of alcohol consumption on motor activities such as driving and surgery have received much attention, but the effect of alcohol intoxication on memory has received little attention from psychology, law, forensic, and criminology scholars across the world. In the Indian context, virtually no articles have been published on this issue to date. This field experiment aimed to find out the effect of alcohol consumption on identification accuracy in lineups. Forty adult social drinkers and twenty sober adults were randomly recruited for the study. The sober adults were assigned to a 'placebo' beverage group, while the social drinkers were divided into two groups, a 'low dose' (0.2 g/kg) and a 'high dose' (0.8 g/kg) alcohol group, so that their blood-alcohol concentration (BAC) levels would differ. After the beverages had been administered to the placebo group and liquor to the social drinkers over a period of 40 to 50 minutes, a five-minute video clip of a mock crime was shown to all participants in groups of four to five members. After exposure to the video clip, subjects were given 10 portraits and asked to recognize whether the persons depicted were involved in the mock crime or not. They were also asked to describe the incident. 
The subjects were given two opportunities to recognize the portraits and describe the events: the first immediately after the video clip and the second 24 hours later. The obtained data were analyzed by one-way ANOVA and Scheffé's post-hoc multiple comparison test. The results indicated that the 'high dose' group differed remarkably from the 'placebo' and 'low dose' groups, while the 'placebo' and 'low dose' groups performed equally. Subjects in the 'high dose' group recognized only 20% of the faces correctly, while subjects in the 'placebo' and 'low dose' groups recognized 90%. The study implies that intoxicated witnesses are less accurate in recognizing suspects and less capable of describing the incidents in which a crime has taken place. However, this study does not assert that intoxicated eyewitnesses are generally less trustworthy than their sober counterparts.
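The abstract does not include the analysis code; as an illustration only, the one-way ANOVA and Scheffé post-hoc comparison it describes can be sketched in Python with SciPy. The recognition scores below are hypothetical stand-ins for the reported 90%/90%/20% accuracy pattern, not the study's data.

```python
import numpy as np
from scipy import stats

def scheffe_pairwise(groups, alpha=0.05):
    """One-way ANOVA followed by Scheffé post-hoc pairwise comparisons.

    groups: list of 1-D sequences of observations (one per condition).
    Returns (F, p, sig) where sig maps a pair (i, j) to True when the
    group means differ significantly at level alpha under Scheffé's test.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)                  # number of groups
    n = [len(g) for g in groups]
    N = sum(n)
    means = [g.mean() for g in groups]
    # Within-group (error) mean square from the one-way ANOVA decomposition
    ss_within = sum(((g - m) ** 2).sum() for g, m in zip(groups, means))
    ms_within = ss_within / (N - k)
    F, p = stats.f_oneway(*groups)
    f_crit = stats.f.ppf(1 - alpha, k - 1, N - k)
    sig = {}
    for i in range(k):
        for j in range(i + 1, k):
            diff = abs(means[i] - means[j])
            # Scheffé critical difference for the simple contrast mean_i - mean_j
            crit = np.sqrt((k - 1) * f_crit * ms_within * (1 / n[i] + 1 / n[j]))
            sig[(i, j)] = bool(diff > crit)
    return F, p, sig

# Hypothetical faces-recognized-out-of-10 scores mirroring the reported pattern
placebo   = [9, 9, 8, 9, 10, 9, 8, 9, 9, 9]
low_dose  = [9, 8, 9, 9, 9, 10, 8, 9, 9, 9]
high_dose = [2, 2, 3, 1, 2, 2, 2, 3, 2, 1]
F, p, sig = scheffe_pairwise([placebo, low_dose, high_dose])
```

With such clearly separated synthetic scores, the test flags only the high-dose comparisons, matching the pattern of results the abstract reports.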

Keywords: intoxicated eyewitness, memory, social drinkers, lineups

Procedia PDF Downloads 258
135 High Acid-Stable α-Amylase Production by Milk in Liquid Culture

Authors: Shohei Matsuo, Saki Mikai, Hiroshi Morita

Abstract:

Objectives: Shochu is a popular Japanese distilled spirit. In the production of shochu, the filamentous fungus Aspergillus kawachii has traditionally been used. A. kawachii produces two types of starch-hydrolyzing enzymes, α-amylase (enzymatic liquefaction) and glucoamylase (enzymatic saccharification). A liquid culture system allows the microorganism to be fermented relatively easily and at a lower production cost than solid culture. In liquid culture, however, acid-unstable α-amylase (α-A) is produced abundantly while acid-stable α-amylase (Aα-A) is not. Because of its higher enzyme productivity, the solid culture method has therefore been adopted in most shochu brewing. In this study, we accordingly investigated the production of Aα-A in a liquid culture system. Materials and methods: The microorganism Aspergillus kawachii NBRC 4308 was used. The mold was cultured at 30 °C for 7-14 d to allow the formation of conidiospores on slant agar medium. Liquid culture system: A. kawachii was cultured in 100 ml of the following altered SLS medium: 1.0 g of rice flour, 0.1 g of K2HPO4, 0.1 g of KCl, 0.6 g of tryptone, 0.05 g of MgSO4・7H2O, 0.001 g of FeSO4・7H2O, 0.0003 g of ZnSO4・7H2O, 0.021 g of CaCl2, and 0.33 g of citric acid (pH 3.0). The pH of the medium was adjusted to the designated value with 10% HCl solution. Cultivation was carried out with shaking at 30 °C and 200 rpm for 72 h, and the culture was filtered to obtain a crude enzyme solution. Aα-A assay: The crude enzyme solution was analyzed. Acid-stable α-amylase activity was measured using an α-amylase assay kit (Kikkoman Corporation, Noda, Japan) after adding 9 ml of 100 mM acetate buffer (pH 3.0) to 1 ml of the culture supernatant and acid treatment at 37 °C for 1 h. One unit of α-amylase activity was defined as the amount of enzyme that yielded 1 mmol of 2-chloro-4-nitrophenyl 6-azido-6-deoxy-β-maltopentaoside (CNP) per minute. Results and Conclusion: We experimented with co-culture of A. kawachii and lactobacillus in order to control the pH of the altered SLS medium; however, high production of acid-stable α-amylase was not obtained. We then tested adding yoghurt or milk to the liquid culture. The results indicated that high production of acid-stable α-amylase (964 U/g-substrate) was obtained when milk was added to the liquid culture. The phosphate concentration in the liquid medium was a major factor in the increased acid-stable α-amylase activity. In liquid culture, acid-stable α-amylase activity was thus enhanced by milk, but the fats and oils in the milk were oxidized. In addition, tryptone is not approved as a food additive in Japan. Therefore, the altered SLS medium was supplemented with skim milk, which lacks the fats and oils of whole milk, in place of tryptone. The results indicated that high production of acid-stable α-amylase was obtained, with the same effect as milk.

Keywords: acid-stable α-amylase, liquid culture, milk, shochu

Procedia PDF Downloads 278
134 NHS Tayside Plastic Surgery Induction Cheat Sheet and Video

Authors: Paul Holmes, Mike N. G.

Abstract:

Foundation-year doctors face increased stress, pressure, and uncertainty when starting new rotations throughout their first years of work. This research questionnaire resulted in an induction cheat sheet and induction video that enhanced junior doctors' understanding of how to work effectively within the plastic surgery department at NHS Tayside. The objective was to improve the transition between cohorts of junior doctors in ward 26 at Ninewells Hospital. Before this quality improvement project, the induction pack was 74 pages long and over eight years old. With the support of consultant Mike Ng, a new, up-to-date induction was created, involving the development of a questionnaire and a cheat sheet. The questionnaire covered clerking, venipuncture, ward pharmacy, theatres, admissions, specialties on the ward, the cardiac arrest trolley, clinical emergencies, discharges, and escalation. This audit completed three cycles between August 2022 and August 2023. The cheat sheet developed into a concise two-page A4 document designed for doctors to reference easily and understand the essentials. The document is formatted as a table covering ward layout; specialty; location; physician associate; shift patterns; ward rounds; handover location and time; hours of coverage; senior escalation; nights; daytime duties; meetings/MDTs/board meetings; important bleeps and codes; department guidelines; boarders; referrals and patient stream; pharmacy; absences; rota coordinator; annual leave; and top tips. The induction video is a 10-minute in-depth explanation of all aspects of the ward, exploring the contents of the cheat sheet in more depth; this alternative visual format familiarizes the junior doctor with all aspects of the ward. Both were provided to all foundation year 1 and 2 doctors on ward 26 at Ninewells Hospital, NHS Tayside, Scotland. 
This work has since been adopted by the General Surgery Department, extending it to six further wards, and has improved the effective handover of the junior doctor's role between cohorts. There is potential to expand the cheat sheet further to other departments, as the concise document takes a doctor currently on the ward around 30 minutes to complete. The time spent filling out the form provides vital information to incoming junior doctors, which has significant potential to improve patient care.

Keywords: induction, junior doctor, handover, plastic surgery

Procedia PDF Downloads 73
133 Osseointegration Outcomes Following Amputee Lengthening

Authors: Jason Hoellwarth, Atiya Oomatia, Anuj Chavan, Kevin Tetsworth, Munjed Al Muderis

Abstract:

Introduction: Percutaneous EndoProsthetic Osseointegration for Limbs (PEPOL) facilitates improved quality of life (QOL) and objective mobility for most amputees discontent with their traditional socket prosthesis (TSP) experience. Some amputees desiring PEPOL have residual bone much shorter than the currently marketed press-fit implant lengths of 14-16 cm, potentially a risk for failure to integrate. We report on the techniques used, the complications experienced, the management of those complications, and the overall mobility outcomes of seven patients who had femur distraction osteogenesis (DO) with a Freedom nail followed by PEPOL. Method: Retrospective evaluation of a prospectively maintained database identified nine patients (5 females) who had transfemoral DO in preparation for PEPOL, with two years of follow-up after PEPOL. Six patients had traumatic causes of amputation, one had perinatal complications, one amputation was performed to manage necrotizing fasciitis, and one as a result of osteosarcoma. Results: The average age at which DO commenced was 39.4±15.9 years, and seven patients had had their amputation more than ten years prior (average 25.5±18.8 years). The residual femurs, on average, started at 102.2±39.7 mm and were lengthened 58.1±20.7 mm, 98±45% of the goal (99±161% of the original bone length). Five patients (56%) had a complication requiring additional surgery: four events of inadequate regeneration were managed with continued lengthening to the desired goal followed by placement of autograft harvested from contralateral femur reaming, and one patient had cerclage wires break, which required operative replacement. All patients had osseointegration performed at 355±123 days after the initial lengthening nail surgery. Before DO, one patient had a K-level >2; at a mean of 3.4±0.6 (2.6-4.4) years following osseointegration, six patients had a K-level >2. The 6-Minute Walk Test remained unchanged (267±56 vs. 308±117 meters). 
Patient self-ratings of prosthesis function, problems, and the amputee situation did not significantly change from before DO to after osseointegration. Six patients required additional surgery following osseointegration: six to remove fixation plates placed to maintain distraction osteogenesis length at osseointegration, and two requiring irrigation and debridement for infection. Conclusion: Extremely short residual femurs, which make TSP use troublesome, can be lengthened with externally controlled telescoping nails and successfully achieve osseointegration. However, it is imperative to counsel patients that additional surgery to address inadequate regeneration or to remove painful hardware used to maintain fixation may be necessary. This may help set the amputee's expectations before beginning a potentially arduous process.

Keywords: osseointegration, limb lengthening, quality of life, amputation

Procedia PDF Downloads 59
132 Impact of the Dog-Technic for D1-D4 and Longitudinal Stroke Technique for Diaphragm on Peak Expiratory Flow (PEF) in Asthmatic Patients

Authors: Victoria Eugenia Garnacho-Garnacho, Elena Sonsoles Rodriguez-Lopez, Raquel Delgado-Delgado, Alvaro Otero-Campos, Jesus Guodemar-Perez, Angelo Michelle Vagali, Juan Pablo Hervas-Perez

Abstract:

Asthma is a heterogeneous disease that has always been treated pharmacologically. The osteopathic treatment we propose aims to determine, through a dorsal manipulation (Dog Technic D1-D4) and a diaphragm technique (Longitudinal Stroke), whether forced expiratory flow changes in spirometry; in particular, whether peak flow volumes increase post-intervention and post-effort, and whether applying the two techniques together is more effective than applying the Longitudinal Stroke alone. We also assessed whether this type of treatment has repercussions on breathlessness, a very common symptom in asthma, and finally whether facilitated vertebra pain decreased after manipulation. Methods—Participants were recruited among students and professors of the University, aged 18-65; patients (n = 18) were randomly assigned to one of two groups: group 1 (Longitudinal Stroke and Dog Technic dorsal manipulation) and group 2 (diaphragmatic technique, Longitudinal Stroke, only). The statistical analysis centered on comparing the main indicator of airway obstruction, PEF (peak expiratory flow), in various situations using the Datospir Peak-10 peak flow meter. Measurements were carried out in four phases: at rest, after the stress test, after the treatment, and after treatment plus the stress test. After each stress test, the level of dyspnea of each patient was evaluated using the Borg scale, regardless of group. In group 1, spinous process pain was additionally measured with an algometer before and after the manipulation. All data were recorded at the one-minute mark. Results—12 patients in group 1 (Dog Technic and Longitudinal Stroke) responded positively to treatment; there was an increase of 5.1% and 6.1% in the PEF post-treatment and post-treatment plus effort, respectively. The results of the Borg scale, by which we measured the level of dyspnea, were positive: patients noted a 54.95% improvement in breathing. 
In addition, comparison of the means of both groups confirmed that group 1, in which the two techniques were applied, was 34.05% more effective than group 2, in which only one was applied. After the manipulation, pain decreased in 38% of the cases. Conclusions—The impact of the Dog Technic for D1-D4 and the Longitudinal Stroke technique for the diaphragm on peak expiratory flow (PEF) volumes in asthmatic patients was positive: there was a change in PEF post-intervention and post-treatment plus effort, and the group receiving both techniques responded better than the group in which only one technique was applied. Furthermore, this type of treatment decreased facilitated vertebra pain and was effective in improving dyspnea and the general well-being of the patient.

Keywords: ANS, asthma, manipulation, manual therapy, osteopathic

Procedia PDF Downloads 281
131 Periareolar Zigzag Incision in the Conservative Surgical Treatment of Breast Cancer

Authors: Beom-Seok Ko, Yoo-Seok Kim, Woo-Sung Lim, Ku-Sang Kim, Hyun-Ah Kim, Jin-Sun Lee, An-Bok Lee, Jin-Gu Bong, Tae-Hyun Kim, Sei-Hyun Ahn

Abstract:

Background: Breast conserving surgery (BCS) followed by radiation therapy is today the standard therapy for early breast cancer. It is a safe therapeutic procedure in early breast cancers because it provides the same overall survival as mastectomy. A number of different types of incisions are used in BCS. Avoiding scars on the breast is women's desire, and numerous minimal approaches have evolved from this concern. The periareolar incision is often used when a small tumor is relatively close to the nipple, but it has disadvantages, including limited exposure of the surgical field. In plastic surgery, various methods, such as zigzag incisions, have been recommended to achieve satisfactory esthetic results. The periareolar zigzag incision offers not only a good surgical field but also better surgical scars. The purpose of this study was to evaluate the oncological safety of the procedure by studying the status of the surgical margins of the excised tumor specimens and whether it reduces the need for further surgery. Methods: Between January 2016 and September 2016, 148 women with breast cancer underwent BCS or mastectomy by the same surgeon at ASAN Medical Center. Patients were excluded from this study if they had bilateral breast cancer, underwent resection of other tumors, had axillary dissection, or underwent other incision methods. A periareolar zigzag incision was performed, and the excision margins of the specimens were assessed with frozen sections and paraffin-embedded (permanent) sections in all patients. We retrospectively analyzed tumor characteristics, operative time, specimen size, and the distance from the tumor to the nipple. Results: A total of 148 patients were reviewed; 72 were included in the final analysis and 76 excluded. 
The mean age of the patients was 52.6 years (range 25-19 years), median tumor size was 1.6 cm (range, 0.2-8.8), median tumor distance from the nipple was 4.0 cm (range, 1.0-9.0), median excised specimen size was 5.1 cm (range, 2.8-15.0), and median operation time was 70.0 minutes (range, 39-138). All patients were discharged with no sign of infection or skin necrosis. Free resection margins were confirmed by frozen biopsy and permanent biopsy in all samples. No patients underwent reoperation. Conclusions: We suggest that the periareolar zigzag incision can provide a good surgical field to remove a relatively large tumor and may provide cosmetically good outcomes.

Keywords: periareolar zigzag incision, breast conserving surgery, breast cancer, resection margin

Procedia PDF Downloads 218
130 Exploring the Intersection Between the General Data Protection Regulation and the Artificial Intelligence Act

Authors: Maria Jędrzejczak, Patryk Pieniążek

Abstract:

The European legal reality is on the eve of significant change. In European Union law, there is talk of a “fourth industrial revolution”, driven by massive data resources linked to powerful algorithms and powerful computing capacity. This is closely linked to technological developments in the area of artificial intelligence, which have prompted analysis covering the legal environment as well as the economic and social impact, also from an ethical perspective. The discussion on the regulation of artificial intelligence is one of the most serious and widely held at both European Union and Member State level. The literature expects legal solutions to guarantee security for fundamental rights, including privacy, in artificial intelligence systems. There is no doubt that personal data have been increasingly processed in recent years, and it would be impossible for artificial intelligence to function without processing large amounts of data (both personal and non-personal). The main driving force behind the current development of artificial intelligence is advances in computing, but also the increasing availability of data. High-quality data are crucial to the effectiveness of many artificial intelligence systems, particularly when using techniques involving model training. The use of computers and artificial intelligence technology increases the speed and efficiency of the actions taken, but also creates security risks of unprecedented magnitude for the data processed. The proposed regulation in the field of artificial intelligence requires analysis in terms of its impact on the regulation of personal data protection. It is necessary to determine the mutual relationship between these regulations and which areas of the personal data protection regulation are particularly important for processing personal data in artificial intelligence systems. 
The adopted axis of consideration is a preliminary assessment of two issues: 1) which data protection principles should be applied when processing personal data in artificial intelligence systems, and 2) how liability for personal data breaches is regulated in such systems. The need to change the regulations regarding the rights and obligations of data subjects and of entities processing personal data cannot be excluded. It is possible that changes will be required in the provisions regarding the assignment of liability for a breach of personal data processed in artificial intelligence systems. The research process in this case concerns the identification of areas of personal data protection that are particularly important (and may require re-regulation) due to the introduction of the proposed legal regulation on artificial intelligence. The main question the authors want to answer is how European Union regulation against data protection breaches in artificial intelligence systems is shaping up. The answer will include examples illustrating the practical implications of these legal regulations.

Keywords: data protection law, personal data, AI law, personal data breach

Procedia PDF Downloads 47
129 Postfeminism, Femvertising and Inclusion: An Analysis of Changing Women's Representation in Contemporary Media

Authors: Saveria Capecchi

Abstract:

In this paper, the results of qualitative content research on postfeminist female representation in contemporary Western media (advertising, television series, films, social media) are presented. Female role models spectacularized in media culture are an important part of the development of social identities and could inspire new generations. Postfeminist cultural texts have given rise to heated debate among gender and media studies scholars. Some claim they are commercial products seeking to sell feminism to women, a feminism whose political and subversive role is completely distorted and linked to the commercial interests of the cosmetics, fashion, fitness and cosmetic surgery industries, in which women’s ‘power’ lies mainly in their power to seduce. Others consider them feminist manifestos because they represent independent ‘modern women’ free from male control who aspire to achieve professionally and to overcome gender stereotypes like that of the ‘housewife-mother’. The major findings of the research show that feminist principles have been gradually absorbed by the cultural industry and adapted to its commercial needs, resulting in the dissemination of contradictory values. On the one hand, in line with feminist arguments, patriarchal ideology is condemned and the concepts of equality and equal opportunity between men and women are promoted. On the other hand, feminist principles and demands are ascribed to individualism, which translates into the slogan: women are free to decide for themselves, even to objectify their own bodies. In particular, it is observed that the femvertising trend in the media industry is changing female representation, moving away from classic stereotypes: the feminine beauty ideal of slenderness, emphasized in the media since the seventies, is ultimately challenged by the ‘curvy’ body model, which is considered more inclusive and based on the concept of ‘natural beauty’. 
Another aspect of change is the ‘anti-romantic’ revolution performed in television drama and the film industry by some heroines who are not in search of Prince Charming. In conclusion, although femvertising tends to simplify and trivialize the concepts characterizing fourth-wave feminism (‘intersectionality’ and ‘inclusion’), it is also a tendency that makes it possible to challenge media imagery largely based on male viewpoints, interests and desires.

Keywords: feminine beauty ideal, femvertising, gender and media, postfeminism

Procedia PDF Downloads 136
128 Dynamic Building Simulation Based Study to Understand Thermal Behavior of High-Rise Structural Timber Buildings

Authors: Timothy O. Adekunle, Sigridur Bjarnadottir

Abstract:

Several studies have investigated the thermal behavior of buildings, with limited studies focusing on high-rise buildings. Of the limited investigations that have considered the thermal performance of high-rise buildings, only a few have considered the thermal behavior of high-rise structural sustainable buildings. As a result, this study investigates the thermal behavior of a high-rise structural timber building. The study aims to understand the thermal environment of a high-rise structural timber block of apartments located in East London, UK, by comparing the indoor environmental conditions at different floors (ground and upper floors) of the building. The environmental variables (temperature and relative humidity) were measured at 15-minute intervals for a few weeks in the summer of 2012 to generate data for calibration and validation of the simulated results. The study employed mainly dynamic thermal building simulation, using DesignBuilder with the EnergyPlus engine, supplemented with environmental monitoring, as the major techniques for data collection and analysis. The weather file (Test Reference Years, TRYs) for the 2000s from the weather generator produced by the Prometheus Group was used for the simulation, since the study focuses on the thermal behavior of high-rise structural timber buildings in typical, rather than extreme, summer conditions. In this study, the simulated results (May-September of the 2000s) are the focus of discussion, but they are briefly compared with the environmental monitoring results. The simulated results followed a similar trend to the findings obtained from the short period of environmental monitoring at the building. The results revealed that lower temperatures are often predicted (at least 1.1°C lower) on the ground floor than on the upper floors. 
The simulated results also showed that higher temperatures are predicted in southeast-facing spaces (at least 0.5°C higher) than in spaces with other orientations across the floors considered. There is, however, a noticeable difference between the thermal environments of the spaces when the environmental monitoring results are compared with the simulated results: the field survey recorded higher temperatures in the living areas (at least 1.0°C higher), whereas the simulation predicted higher temperatures in the bedrooms (at least 0.9°C higher) than in the living areas. In addition, the simulated results showed that spaces on the lower floors of high-rise structural timber buildings are predicted to provide a more comfortable thermal environment than spaces on upper floors in summer, but this may not hold in wintertime due to the strong upward movement of warm air to spaces on upper floors.

Keywords: building simulation, high-rise, structural timber buildings, sustainable, temperatures, thermal behavior

Procedia PDF Downloads 169
127 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete

Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan

Abstract:

Rice Husk Ash (RHA) is an agricultural waste byproduct widely available in the world that contains a large amount of silica. In Bangladesh, stones cannot be used as coarse aggregate in infrastructure works, as they are not available locally and need to be imported from abroad. As a result, bricks are mostly used as coarse aggregate in concrete, as they are cheaper and easily produced there. Clay is the raw material for producing brick. Due to rapid urban growth and the industrial revolution, demand for brick is increasing, which has led to depletion of topsoil. This study aims to produce lightweight block aggregates with sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. RHA, because of its pozzolanic behavior, can be utilized to produce better-quality block aggregates at lower cost by replacing part of the clay content of the bricks. The study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization test for clay identified the sample as “silty clay” with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six types of RHA in different combinations by volume with clay. The mixtures were manually compacted in molds before oven drying at 120 °C for 7 days. The dried blocks were then fired in a furnace at 1200 °C to produce the final blocks. Loss on ignition, apparent density, crushing strength, efflorescence, and absorption tests were conducted on the blocks to compare their performance with that of bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was 48.1 MPa. 
In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders, which were compared with brick concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of the block cylinders was 26.1 MPa after 7 days of curing and 34 MPa after 28 days. For the brick cylinders, the compressive strength at 7 days and 28 days of curing was 20 MPa and 30 MPa, respectively. These findings can help reduce the increasing demand for topsoil and also turn a waste product into a valuable one.

Keywords: characterization, furnace, pozzolanic behavior, rice husk ash

Procedia PDF Downloads 99
126 Systematic Mapping Study of Digitization and Analysis of Manufacturing Data

Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton

Abstract:

The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized: the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing the 54 papers identified in this area, it was noted that 23 originated in Germany. This is an unsurprising finding, as Industry 4.0 is originally a German strategy, with strong supporting policy instruments utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing institution, with three papers published. The literature suggested future research directions and highlighted one specific gap in the area: there exists an unresolved gap between data science experts and manufacturing process experts in industry, and data analytics expertise is not useful unless the manufacturing process information is utilized. 
A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.

Keywords: analytics, digitization, industry 4.0, manufacturing

Procedia PDF Downloads 99
125 Deep Learning for Image Correction in Sparse-View Computed Tomography

Authors: Shubham Gogri, Lucia Florescu

Abstract:

Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower-quality reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of the sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This approach showed promise, with an architecture comprising encoder and decoder networks and incorporating the Charbonnier loss; however, it was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated from feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth. 
Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
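The Charbonnier loss and the PSNR metric referenced above can be sketched in a few lines. This is an illustrative NumPy version, not the authors' implementation; the `eps` and `max_val` defaults are assumptions.

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    # Charbonnier loss: a smooth, outlier-robust variant of the L1 loss
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def psnr(pred, target, max_val=1.0):
    # Peak Signal-to-Noise Ratio between a corrected image and the ground truth
    mse = np.mean((pred - target) ** 2)
    return float(20 * np.log10(max_val) - 10 * np.log10(mse))
```

For identical images the Charbonnier loss reduces to `eps`, and PSNR grows as the mean squared error shrinks, which is why values in the seventies indicate near-perfect reconstructions.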

Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net

Procedia PDF Downloads 140
124 An Econometric Analysis of the Flat Tax Revolution

Authors: Wayne Tarrant, Ethan Petersen

Abstract:

The concept of a flat tax goes back to at least the Biblical tithe. A progressive income tax was first vociferously espoused in a small, but famous, pamphlet in 1848 (although England had an emergency progressive tax for war costs prior to this). Within a few years many countries had adopted the progressive structure. The flat tax remained only in some small countries and British protectorates until Mart Laar was elected Prime Minister of Estonia in 1992. Since Estonia’s adoption of the flat tax in 1993, many other formerly Communist countries have likewise abandoned progressive income taxes. Economists had expectations of what would happen when a flat tax was enacted, but very little work has been done on actually measuring the effect. With a testbed of 21 countries in this region that currently have a flat tax, much comparison is possible. Several countries have retained progressive taxes, giving an opportunity for contrast. There are also the cases of the Czech Republic and Slovakia, which adopted and later abandoned the flat tax. Further, with over 20 years’ worth of economic history in some flat tax countries, we can begin to do some serious longitudinal study. In this paper we consider many economic variables to determine if there are statistically significant differences from before to after the adoption of a flat tax. We consider unemployment rates, tax receipts, GDP growth, Gini coefficients, and market data where the data are available. Comparisons are made through the use of event studies and time series methods. The results are mixed, but we draw statistically significant conclusions about some effects. We also look at the different implementations of the flat tax. In some countries there are equal income and corporate tax rates. In others the income tax has a lower rate, while in others the reverse is true. Each of these sends a clear message to individuals and corporations. The policy makers surely have a desired effect in mind. 
We group countries with similar policies, try to determine whether the intended effect actually occurred, and then report the results. This is a work in progress, and we welcome suggestions of variables to consider. Further, some of the data from before the fall of the Iron Curtain are suspect: with new ruling regimes in these countries, the methods of computing different statistical measures have changed. Although we first look at the raw data as reported, we also attempt to account for these changes. We show which data seem to be fictional and suggest ways to infer the needed statistics from other data. These results are reported beside those on the reported data. Since there is debate about taxation structure, this paper can help inform policymakers of the changes the flat tax has caused in other countries. The work shows some strengths and weaknesses of a flat tax structure. Moreover, it provides the beginnings of a scientific analysis of the flat tax in practice, rather than discussion based solely upon theory and conjecture.
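The before/after comparison described above can be illustrated with a toy event-study statistic. The growth figures below are hypothetical, not data from the study; a real analysis would also need controls and robust standard errors.

```python
import numpy as np

# Hypothetical annual GDP growth rates (%) for one country,
# in the years before and after a flat-tax adoption
before = np.array([2.1, 1.8, 2.5, 1.2, 2.0])
after = np.array([3.4, 2.9, 4.1, 3.0, 3.6])

# Simple two-sample t statistic: difference in mean growth
# divided by the pooled standard error
diff = after.mean() - before.mean()
se = np.sqrt(before.var(ddof=1) / before.size + after.var(ddof=1) / after.size)
t_stat = float(diff / se)
```

A large positive `t_stat` would suggest a statistically significant rise in growth after adoption; the same template applies to unemployment rates, tax receipts, or Gini coefficients.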

Keywords: flat tax, financial markets, GDP, unemployment rate, Gini coefficient

Procedia PDF Downloads 328
123 Data Refinement Enhances The Accuracy of Short-Term Traffic Latency Prediction

Authors: Man Fung Ho, Lap So, Jiaqi Zhang, Yuheng Zhao, Huiyang Lu, Tat Shing Choi, K. Y. Michael Wong

Abstract:

Nowadays, a tremendous amount of data is available in the transportation system, enabling the development of various machine learning approaches to make short-term latency predictions. A natural question is then the choice of relevant information to enable accurate predictions. Using traffic data collected from the Taiwan Freeway System, we consider the prediction of short-term latency of a freeway segment with a length of 17 km covering 5 measurement points, each collecting vehicle-by-vehicle data through the electronic toll collection system. The processed data include the past latencies of the freeway segment with different time lags, the traffic conditions of the individual segments (the accumulations, the traffic fluxes, the entrance and exit rates), the total accumulations, and the weekday latency profiles obtained by Gaussian process regression of past data. We arrive at several important conclusions about how data should be refined to obtain accurate predictions, which have implications for future system-wide latency predictions. (1) We find that the prediction of median latency is much more accurate and meaningful than the prediction of average latency, as the latter is plagued by outliers. This is verified by machine-learning prediction using XGBoost that yields a 35% improvement in the mean square error of the 5-minute averaged latencies. (2) We find that the median latency of the segment 15 minutes ago is a very good baseline for performance comparison, and we have evidence that further improvement is achieved by machine learning approaches such as XGBoost and Long Short-Term Memory (LSTM). (3) By analyzing the feature importance score in XGBoost and calculating the mutual information between the inputs and the latencies to be predicted, we identify a sequence of inputs ranked in importance. 
It confirms that the past latencies are most informative of the predicted latencies, followed by the total accumulation, whereas inputs such as the entrance and exit rates are uninformative. It also confirms that the inputs are much less informative of the average latencies than the median latencies. (4) For predicting the latencies of segments composed of two or three sub-segments, summing up the predicted latencies of each sub-segment is more accurate than the one-step prediction of the whole segment, especially with the latency prediction of the downstream sub-segments trained to anticipate latencies several minutes ahead. The duration of the anticipation time is an increasing function of the traveling time of the upstream segment. The above findings have important implications for predicting the full set of latencies among the various locations in the freeway system.
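Conclusion (1), the robustness of the median to outliers, can be demonstrated with synthetic latencies. The distribution parameters below are assumptions for illustration only, not figures from the Taiwan Freeway data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 5-minute latency samples (minutes) with a few heavy-tailed
# outliers, mimicking the incidents that plague the average latency
latency = rng.normal(12.0, 1.0, size=1000)
latency[:20] += rng.exponential(30.0, size=20)

mean_lat = float(np.mean(latency))      # dragged upward by outliers
median_lat = float(np.median(latency))  # barely moves

# Baseline predictor: the latency observed 15 minutes (3 steps) earlier
lag = 3
baseline_mse = float(np.mean((latency[lag:] - latency[:-lag]) ** 2))
```

A machine-learning model such as XGBoost or an LSTM would then have to beat `baseline_mse` to justify its added complexity, which is exactly the comparison the paper makes.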

Keywords: data refinement, machine learning, mutual information, short-term latency prediction

Procedia PDF Downloads 160
122 Time of Death Determination in Medicolegal Death Investigations

Authors: Michelle Rippy

Abstract:

Medicolegal death investigation historically is a field that does not receive much research attention or advancement, as all of the subjects are deceased. Public health threats, drug epidemics and contagious diseases are typically recognized in decedents first, with thorough and accurate death investigations able to assist in epidemiology research and prevention programs. One vital component of medicolegal death investigation is determining the decedent’s time of death. An accurate time of death can assist in corroborating alibis, determining the sequence of death in multiple-casualty circumstances and providing vital facts in civil situations. Popular television portrays an unrealistic forensic ability to provide the exact time of death to the minute for someone found deceased with no witnesses present. In actuality, an unattended decedent's time of death can generally only be narrowed to a 4-6 hour window. In the mid- to late-20th century, liver temperatures were an invasive measure taken by death investigators to determine the decedent’s core temperature. The core temperature was entered into an equation to determine an approximate time of death. Due to many inconsistencies with the placement of the thermometer and other variables, the accuracy of liver temperatures was called into question and this once commonplace practice lost scientific support. Currently, medicolegal death investigators utilize three major post-mortem changes at a death scene. Many factors are considered in the subjective determination of the time of death, including the cooling of the decedent, stiffness of the muscles, internal settling of blood, clothing, ambient temperature, disease and recent exercise. Current research is utilizing non-invasive hospital-grade tympanic thermometers to measure the temperature in each of the decedent’s ears. This tool can be used at the scene and, in conjunction with scene indicators, may provide a more accurate time of death. 
The research is significant and important to investigations and can bring accuracy to a historically inaccurate area, considerably improving criminal and civil death investigations. The goal of the research is to provide a scientific basis for time of death determination in unwitnessed deaths, instead of the art that the determination currently is. The research is in progress, with expected completion in December 2018. There are currently 15 completed case studies with vital information including the ambient temperature; the decedent's height, weight, sex and age; layers of clothing; found position; whether medical intervention occurred; and whether the death was witnessed. These data will be analyzed across the multiple variables studied and will be available for presentation in January 2019.
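The core-temperature equation mentioned above is typically the classical Glaister rule of thumb, which assumes the body cools roughly 1.5 °F per hour. The sketch below shows that historical rule for context, not the tympanic method under study, and it ignores all the confounders (clothing, ambient temperature, disease) the abstract lists.

```python
def hours_since_death_glaister(core_temp_f, normal_temp_f=98.4):
    # Glaister rule of thumb: elapsed time is the temperature deficit
    # divided by an assumed cooling rate of 1.5 degrees F per hour
    return (normal_temp_f - core_temp_f) / 1.5
```

For example, a core temperature of 92.4 °F would suggest roughly four hours since death, which illustrates why such point estimates carry the 4-6 hour uncertainty window described above.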

Keywords: algor mortis, forensic pathology, investigations, medicolegal, time of death, tympanic

Procedia PDF Downloads 106
121 Production of Bricks Using Mill Waste and Tyre Crumbs at a Low Temperature by Alkali-Activation

Authors: Zipeng Zhang, Yat C. Wong, Arul Arulrajah

Abstract:

Since automobiles became widely popular around the early 20th century, end-of-life tyres have been one of the major types of waste humans encounter. Every minute, considerable quantities of tyres are disposed of around the world. Most end-of-life tyres are simply landfilled or stockpiled rather than recycled. To address the potential issues caused by tyre waste, incorporating it into construction materials is a possibility. This research investigated the viability of manufacturing bricks from mill waste and tyre crumbs by alkali-activation at a relatively low temperature. The mill waste was extracted from a brick factory located in Melbourne, Australia, and the tyre crumbs were supplied by a local recycling company. As the main precursor, the mill waste was activated by an alkaline solution comprising sodium hydroxide (8 M) and sodium silicate (liquid). The introduction ratio of alkaline solution (relative to the solid weight) and the weight ratio between sodium hydroxide and sodium silicate were fixed at 20 wt.% and 1:1, respectively. Tyre crumbs were introduced to substitute part of the mill waste at four ratios by weight, namely 0, 5, 10 and 15%. The mixture of mill waste and tyre crumbs was first dry-mixed for 2 min to ensure homogeneity, followed by 2.5 min of wet mixing after adding the solution. The mixture was subsequently press-moulded into blocks 109 mm in length, 112.5 mm in width and 76 mm in height. The blocks were cured at 50°C with 95% relative humidity for 2 days, followed by oven-curing at 110°C for 1 day. All the samples were then kept in the ambient environment until testing at the ages of 7 and 28 days. A series of tests were conducted to evaluate the linear shrinkage, compressive strength and water absorption of the samples. In addition, the microstructure of the samples was examined via scanning electron microscopy (SEM). 
The results showed that the highest compressive strength, 17.6 MPa, was found in the 28-day-old group using 5 wt.% tyre crumbs. This strength satisfies the requirement of ASTM C67. However, increasing the addition of tyre crumbs weakened the compressive strength of the samples. Apart from strength, the linear shrinkage and water absorption of all the groups met the requirements of the standard. It is worth noting that the use of tyre crumbs tended to decrease the shrinkage and even caused expansion when the tyre content reached 15 wt.%. The research also found a significant reduction in compressive strength for the samples after the water absorption tests. In conclusion, tyre crumbs have the potential to be used as a filler material in brick manufacturing, but more research needs to be done to tackle the durability problem in the future.
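The batch proportions described above (alkaline solution at 20 wt.% of solids, a 1:1 NaOH-to-silicate ratio, and tyre substitution by weight) reduce to simple arithmetic. The helper below is an illustrative sketch of that arithmetic, not part of the study's protocol.

```python
def mix_masses(total_solid_g, tyre_frac, solution_frac=0.20):
    # Split the solids between mill waste and tyre crumbs, then add
    # alkaline solution at 20 wt.% of solids, half NaOH and half silicate
    tyre = total_solid_g * tyre_frac
    mill_waste = total_solid_g - tyre
    solution = total_solid_g * solution_frac
    naoh = silicate = solution / 2  # 1:1 by weight
    return mill_waste, tyre, naoh, silicate
```

For a 1 kg batch at the 5 wt.% tyre substitution, this gives 950 g of mill waste, 50 g of tyre crumbs, and 100 g each of NaOH solution and sodium silicate.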

Keywords: bricks, mill waste, tyre crumbs, waste recycling

Procedia PDF Downloads 115
120 Intersection of Sports and Society

Authors: Josh Felton

Abstract:

There’s a common misconception that sports are an escape from the reality of life, disconnecting us from the agendas of tomorrow. While this may be true for a select few, there’s more to sports than just competition and banter. The bearing and impact society has on the sports we know and love have always existed and are greater than ever. However, to many in the national media, it is almost seen as a taboo subject. Whether one realizes it or not, sports and society intersect at every turn, and it’s not a coincidence. In collaboration with the Woodrow Wilson Fellowship at Johns Hopkins University, a video and podcast series titled Intersection of Sports and Society (ISS), dedicated to studying some of the most polarizing and some of the least recognized issues in the world of sports that have a powerful social bearing on every demographic, will debut in the summer of 2023. Issues like race, gender, and sexuality, as well as how they have been challenged and addressed historically in the sports realm, will be discussed at length in the series. With the collaboration of many authors, researchers, and former athletes, the podcast will be a platform for them to not only share their discoveries but to have an extensive dialogue on the impact their work and current events have had on these issues. Set to be released in the summer of 2023, the series will feature a list of great researchers and authors, headlined by New York Times writer and best-selling author Jonathan Abrams, who in 2017 published a book titled Boys Among Men: How the Prep-to-Pro Generation Redefined the NBA and Sparked a Basketball Revolution. 
His expertise on matters of high school and collegiate sports will be reflected in a very important conversation on the evolution of the high-school-to-professional route, the historic exploitation of black student-athletes by the NCAA, and how the new rules allowing greater freedom of choice for young athletes have benefitted minority athletes coming from impoverished backgrounds. This episode is just a preview of a list of important topics that, to the author's best knowledge, aren't typically discussed by the national media. Further topics include women’s sports representation, the struggle to achieve fair minority representation in NFL coaching and front office positions, the story of race and baseball within the Boston Red Sox organization, and what the rise of the black quarterback means for America. Many people fail to realize that the sports we all know and love, and the athletes who play them, carry a social bearing. The hope with this project is to shed light on the social relevance that exists in the realm of sports, where we have for years failed to see and acknowledge a connection between sports and society.

Keywords: sports, society, race, gender

Procedia PDF Downloads 94
119 Impact of Financial and Nutrition Support on Blood Health, Dietary Intake, and Well-Being among Female Student-Athletes

Authors: Kaila A. Vento

Abstract:

Within the field of sports science, financial situations have been reported as a key barrier to purchasing high-quality foods. A lack of proper nutrition leads to health insecurities, impairs training, and diminishes optimal performance. Consequently, insufficient nutrient intake, disordered eating patterns, and eating disorders may arise, leading to poor health and well-being. Athletic scholarships, nutrition resources, and meal programs are available, yet are disproportionately allocated, favoring male sports, Caucasian athletes, and higher sport levels. The financial support athletes can put directly toward nutrition at various sport levels, and the role race plays in the aid received, have yet to be examined. Additionally, a diverse female athlete population is missing from the sports science literature, specifically in nutrition. To address this gap, the current project assesses how financial and nutrition support and nutrition knowledge impact the physical health, dietary intake, and overall quality of life of a diverse sample of female athletes at the National Collegiate Athletic Association (NCAA), National Junior Collegiate Athletic Association (NJCAA), and club sport levels. The project will also identify differences in financial support in relation to race. Approximately (N = 120) female athletes will participate in a single 30-minute lab visit. At this visit, body composition (i.e., height, weight, body mass index, and fat percent), blood health indicators (fasted blood glucose and lipids), and resting blood pressure are measured. In addition, three validated questionnaires pertaining to nutrition knowledge (Sports Nutrition Knowledge Questionnaire; SNKQ), dietary intake (Rapid Eating Assessment for Participants; REAP), and quality of life (World Health Organization Quality of Life Brief; WHOQOL-B) are gathered. 
Body composition and blood health indicators will be compared with the results of self-reported sports nutrition knowledge, dietary intake, and quality of life questionnaires. It is hypothesized that 1) financial and nutrition support and nutrition knowledge will differ between the sport levels and 2) financial and nutrition support and nutrition knowledge will have a positive association with quality of dietary intake and blood health indicators, 3) financial and nutrition support will differ significantly among racial background across the various competition levels, and 4) dietary intake will influence blood health indicators and quality of life. The findings from this study could have positive implications on athletic associations' policies on equity of financial and nutrition support to improve the health and safety of all female athletes across several sport levels.
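Of the body-composition measures collected at the lab visit, body mass index is the simplest to derive from height and weight. This is the standard formula, included for illustration rather than as a protocol detail from the study.

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by height in metres squared
    return weight_kg / height_m ** 2
```

For example, a 70 kg athlete who is 1.75 m tall has a BMI of about 22.9, within the range conventionally labeled normal weight.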

Keywords: athlete, equity, finances, health, resources

Procedia PDF Downloads 95
118 Population Centralization in Urban Area and Metropolitans in Developing Countries: A Case Study of Urban Centralization in Iran

Authors: Safar Ghaedrahmati, Leila Soltani

Abstract:

Population centralization in urban areas and metropolises, especially in developing countries such as Iran, increases metropolitan problems. For a few decades, the population of cities in developing countries, including Iran, has had a higher growth rate than the total growth rate of those countries’ populations. While in developed countries the development of big cities began decades ago and generally allowed for controlled and planned urban expansion, the opposite is the case in developing countries, where a rapid urbanization process is characterized by unplanned urban expansion. The developing metropolitan cities have enormous difficulties in coping both with natural population growth and with urban physical expansion. Iranian cities have been at the heart of the economic and cultural changes that have occurred since the Islamic revolution of 1979. These cities increasingly exert influence through political-economic arrangements and chiefly through urban management structures. Structural features have led to the population growth of cities and to urbanization (in number, population and physical frame), and to the main problems within them. On the other hand, owing to the lack of birth-control policies and the deceptive attractions of cities, particularly big cities, the birth rate has shot up, mainly in rural regions and small cities. The population of Iran has increased rapidly since 1956. The 1956 and 1966 decennial censuses counted the population of Iran at 18.9 million and 25.7 million, respectively, with a 3.1% annual growth rate during the 1956–1966 period. The 1976 and 1986 decennial censuses counted Iran’s population at 33.7 and 49.4 million, respectively, a 2.7% and 3.9% annual growth rate during the 1966–1976 and 1976–1986 periods. The 1996 count put Iran’s population at 60 million, a 1.96% annual growth rate from 1986–1996, and the 2006 count put Iran’s population at 72 million. 
A recent major policy of urban economic and industrial decentralization is a persistent program of the government. The policy has been identified as a response to the massive growth of Tehran in recent years, up to 9 million by 2010. Part of the growth of the capital resulted from the lack of economic opportunities elsewhere, and in order to redress the developing primacy of Tehran and the domestic pressures it is undergoing, the policy of decentralization is to be implemented as quickly as possible. This research is applied in type; data were collected through documentary methods, and the analysis combines population analysis with urban system and urban distribution analysis.
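The annual growth rates quoted above are compound rates implied by successive census counts. A quick check of the 1956-1966 figure, assuming populations in millions:

```python
def annual_growth_rate(pop_start, pop_end, years):
    # Compound annual growth rate implied by two census counts
    return (pop_end / pop_start) ** (1 / years) - 1

# 1956 census: 18.9 million; 1966 census: 25.7 million, 10 years apart
rate_1956_1966 = annual_growth_rate(18.9, 25.7, 10)  # about 3.1% per year
```

The result matches the 3.1% annual rate reported for the 1956-1966 period.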

Keywords: population centralization, cities of Iran, urban centralization, urban system

Procedia PDF Downloads 288
117 Comparative Comparison (Cost-Benefit Analysis) of the Costs Caused by the Earthquake and Costs of Retrofitting Buildings in Iran

Authors: Iman Shabanzadeh

Abstract:

Earthquake is known as one of the most frequent natural hazards in Iran. Therefore, policy making to improve the strengthening of structures is one of the requirements of the approach to prevent and reduce the risk of the destructive effects of earthquakes. In order to choose the optimal policy in the face of earthquakes, this article tries to examine the cost of financial damages caused by earthquakes in the building sector and compare it with the costs of retrofitting. In this study, the results of adopting the scenario of "action after the earthquake" and the policy scenario of "strengthening structures before the earthquake" have been collected, calculated and finally analyzed by putting them together. Methodologically, data received from governorates and building retrofitting engineering companies have been used. The scope of the study is earthquakes occurred in the geographical area of Iran, and among them, eight earthquakes have been specifically studied: Miane, Ahar and Haris, Qator, Momor, Khorasan, Damghan and Shahroud, Gohran, Hormozgan and Ezgole. The main basis of the calculations is the data obtained from retrofitting companies regarding the cost per square meter of building retrofitting and the data of the governorate regarding the power of earthquake destruction, the realized costs for the reconstruction and construction of residential units. The estimated costs have been converted to the value of 2021 using the time value of money method to enable comparison and aggregation. The cost-benefit comparison of the two policies of action after the earthquake and retrofitting before the earthquake in the eight earthquakes investigated shows that the country has suffered five thousand billion Tomans of losses due to the lack of retrofitting of buildings against earthquakes. 
Based on data from Iran's Budget Law, this figure was approximately twice the budget of the Ministry of Roads and Urban Development and five times the budget of the Islamic Revolution Housing Foundation in 2021. The results show that the policy of retrofitting structures before an earthquake is significantly more optimal than the competing scenario. The comparison of the two policy scenarios examined in this study shows that retrofitting buildings before an earthquake, on the one hand, prevents huge losses and, on the other hand, by increasing the number of earthquake-resistant houses, reduces the amount of earthquake destruction. Retrofitting also has other positive effects, such as reducing mortality through the earthquake resistance of buildings and reducing other economic and social effects caused by earthquakes. Together, these demonstrate the cost-effectiveness of the policy scenario of "strengthening structures before earthquakes" in Iran.
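Converting historical costs to 2021 values with the time-value-of-money method amounts to compounding at an assumed annual rate. The rate and figures below are illustrative only, not the study's data.

```python
def value_in_year(cost, cost_year, target_year, annual_rate):
    # Compound a historical cost forward to the target year
    return cost * (1 + annual_rate) ** (target_year - cost_year)

# e.g. a cost of 100 units incurred in 2012, compounded to 2021
# at an assumed 20% annual rate
adjusted = value_in_year(100.0, 2012, 2021, 0.20)
```

This forward-compounding step is what makes losses and retrofitting costs from different earthquake years comparable and summable in a single 2021 figure.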

Keywords: disaster economy, earthquake economy, cost-benefit analysis, resilience

Procedia PDF Downloads 45
116 The Effects of Chamomile on Serum Levels of Inflammatory Indexes to a Bout of Eccentric Exercise in Young Women

Authors: K. Azadeh, M. Ghasemi, S. Fazelifar

Abstract:

Aim: Changes in stress hormones can modify the response of the immune system. Cortisol, the most important corticosteroid in the body, is an anti-inflammatory and immunosuppressive hormone. Normal cortisol levels in humans fluctuate during the day; in other words, cortisol is released periodically and is regulated through the circadian release of ACTH. Therefore, the aim of this study was to determine the effects of chamomile on serum levels of inflammatory indexes after a bout of eccentric exercise in young women. Methodology: 32 women were randomly divided into 4 groups: high dose of chamomile, low dose of chamomile, ibuprofen, and placebo. The eccentric exercise included 5 sets, with a 1-minute rest period between sets. Subjects warmed up for 10 min and then performed the eccentric exercise. Each participant completed 15 repetitions with an optional 20 kg weight, or continued until she could no longer perform the movement. When a subject was no longer able to continue, 5 kg was immediately removed from the weight and the protocol continued until exhaustion or the completion of 15 repetitions. Subjects in the target groups received specified amounts of ibuprofen or chamomile capsules. Blood samples were obtained at 6 stages (before starting the pills, before the exercise protocol, and 4, 24, 48 and 72 hours after eccentric exercise). Cortisol and adrenocorticotropic hormone (ACTH) levels were measured by ELISA. The K-S test was used to determine the normality of the data, and repeated-measures analysis of variance was used to analyze the data. Significance was accepted at p < 0.05. Results: Individual characteristics including height, weight, age and body mass index were not significantly different among the four groups. Analysis of the data showed that basal cortisol and ACTH levels decreased significantly after supplement consumption, but then increased gradually and significantly at all post-exercise stages. 
In the high-dose chamomile group, the post-exercise increase was somewhat smaller than in the other groups, though not significantly so. The inter-group analysis indicates that the time effect was significant across the different stages in the groups. Conclusion: In this study, one session of eccentric exercise increased cortisol and ACTH. The results indicate an effect of a high dose of chamomile in preventing and reducing the rise in stress hormone levels. Since the use of medicinal plants and of ibuprofen as a pain and inflammation medication has spread among athletes and non-athletes, the results of this research can provide information about the advantages and disadvantages of using medicinal plants and ibuprofen.

Keywords: chamomile, inflammatory indexes, eccentric exercise, young girls

Procedia PDF Downloads 409
115 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt

Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod

Abstract:

Nothing is constant in this world but change. This is a reality that many people fail to recognize, perhaps because some see the things that are happening as having little or no value until they are gone. For many years, Egypt was known for its stable government, which was able to withstand problems and crises that challenged the country in ways that could never be imagined. In recent times, it seems that in a snap of a finger this stability vanished, immediately replaced by a crisis that resulted in failures in parts of the government. This problem has continued to worsen, and the current situation of Egypt is a reflection of it. As the researchers studied the reasons why the government of Egypt is unstable, they concluded that they might be able to propose ways in which the country could be helped or improved. The instability of the government of Egypt is the product of the combination of problems that affect the lives of the people. Some of the reasons the researchers found are the following: 1) unending doubts of the people regarding the ruling capacity of elected presidents, 2) the removal of President Mohamed Morsi from office, 3) economic crisis, 4) numerous protests and revolutions, 5) the resignation of the long-term President Hosni Mubarak, and 6) the office of the President being most likely available only to a chosen successor. Also, according to previous research, there are two plausible scenarios for the instability of Egypt: 1) a military intervention, specifically by the Supreme Council of the Armed Forces or SCAF, resulting from a contested succession, and 2) an Islamist push for political power, which highlights the claim that religion is a hindrance to the development of the country and its government. 
From the eight possible reasons, the researchers decided to focus on the economic crisis, since the instability is seen most clearly in the country's economy, which directly affects both the people and the government itself. In addition, they formulated the hypothesis that a stable economy is a prerequisite for a stable government. If they can show that this claim is true, using the Social Autopsy Research Design for the qualitative method and Pearson's correlation coefficient for the quantitative method, the researchers may be able to produce a proposal on how Egypt can stabilize its government and avoid such problems. The hypothesis is grounded in Rational Action Theory, a theory for understanding and modeling social, economic, and individual behavior.
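The quantitative method named above can be illustrated with a short sketch. The yearly figures below are hypothetical and purely for demonstration; the study's actual data are not reproduced here. A minimal Python implementation of Pearson's correlation coefficient, applied to an economic indicator and a stability index, might look like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical yearly figures: an economic indicator (e.g. GDP growth, %)
# against some index of government stability (higher = more stable).
gdp_growth = [5.1, 4.7, 1.8, 2.2, 2.1, 2.2]
stability  = [0.6, 0.55, 0.2, 0.25, 0.22, 0.3]

r = pearson_r(gdp_growth, stability)  # r close to +1 would support the hypothesis
```

A strongly positive r on real data would be consistent with, though not proof of, the claim that a stable economy accompanies a stable government; correlation alone cannot establish the direction of causation.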

Keywords: Pearson’s correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)

Procedia PDF Downloads 395
114 21st Century Business Dynamics: Acting Local and Thinking Global through eXtensible Business Reporting Language (XBRL)

Authors: Samuel Faboyede, Obiamaka Nwobu, Samuel Fakile, Dickson Mukoro

Abstract:

In the present dynamic business environment of corporate governance and regulations, financial reporting is an inevitable and extremely significant process for every business enterprise. Several financial elements, such as annual reports, quarterly reports, ad-hoc filings, and other statutory/regulatory reports, provide vital information to investors and regulators and establish trust and rapport between the internal and external stakeholders of an organization. Investors today are very demanding and place great emphasis on the authenticity, accuracy, and reliability of financial data. For many companies, the Internet plays a key role in communicating business information, internally to management and externally to stakeholders. Despite the high prominence attached to external reporting, it is disconnected in most companies, which generate their external financial documents manually, resulting in a high degree of errors and prolonged cycle times. Chief Executive Officers and Chief Financial Officers are increasingly susceptible to endorsing error-laden reports, filing reports late, and failing to comply with regulatory acts. There is a lack of a common platform to manage the sensitive information – internal and external – in financial reports. The Internet financial reporting language known as eXtensible Business Reporting Language (XBRL) has continued to develop in the face of challenges and has now reached the point where many of its promised benefits are available. This paper looks at the emergence of this revolutionary twenty-first-century language of digital reporting. It posits that the world is today on the brink of an Internet revolution that will redefine the 'business reporting' paradigm. The new Internet technology, eXtensible Business Reporting Language (XBRL), is already being deployed and used across the world.
It finds that XBRL is an eXtensible Markup Language (XML)-based information format that places self-describing tags around discrete pieces of business information. Once tags are assigned, it is possible to extract only the desired information, rather than having to download or print an entire document. XBRL is platform-independent: it works on any current or recent operating system and on any computer, and it interfaces with virtually any software. The paper concludes that corporate stakeholders and the government cannot afford to ignore XBRL. It therefore recommends that all act locally and think globally now by adopting XBRL, which is changing the face of worldwide business reporting.
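As a rough illustration of the self-describing tagging idea, the sketch below builds and queries a deliberately simplified XBRL-style instance in Python. The element names, namespace URI, and figures are invented for demonstration and are not drawn from any real taxonomy:

```python
import xml.etree.ElementTree as ET

# A simplified XBRL-style instance document: each business fact carries
# a self-describing tag plus context and unit references. The namespace
# and element names here are illustrative placeholders only.
instance = """\
<xbrl xmlns:acc="http://example.com/taxonomy">
  <acc:Revenue contextRef="FY2023" unitRef="NGN" decimals="0">5000000</acc:Revenue>
  <acc:ProfitLoss contextRef="FY2023" unitRef="NGN" decimals="0">750000</acc:ProfitLoss>
</xbrl>"""

root = ET.fromstring(instance)
ns = {"acc": "http://example.com/taxonomy"}

# Because each fact is tagged, a consumer can extract exactly the item
# it needs instead of downloading or printing an entire report.
revenue_fact = root.find("acc:Revenue", ns)
revenue = int(revenue_fact.text)
```

This is what platform independence means in practice: any XML-aware tool, on any operating system, can locate the `Revenue` fact by its tag without knowing anything else about the document's layout.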

Keywords: XBRL, financial reporting, internet, internal and external reports

Procedia PDF Downloads 270
113 Agroecology: Rethink the Local in the Global to Promote the Creation of Novelties

Authors: Pauline Cuenin, Marcelo Leles Romarco Oliveira

Abstract:

Based on their localities and following their ecological rationality, family-based farmers have experimented, adapted, and innovated to improve their production systems continuously for millennia. With the technological-package transfer processes of the so-called Green Revolution, farmers have become increasingly dependent on ready-made "recipes" built from so-called "universal" and global knowledge to face the problems that emerge in the management of local agroecosystems, thus reducing their creative and experiential capacities. However, the production of novelties within farms is fundamental to the transition to more sustainable agri-food systems. In fact, as the fruits of local knowledge and/or the contextualization of exogenous knowledge, novelties are seen as seeds of transition. By presenting new techniques, new organizational forms, and new epistemological approaches, agroecology has been pointed to as a way to encourage and promote the creative capacity of farmers. From this perspective, this theoretical work aims to analyze how agroecology encourages the innovative capacity of farmers and, more generally, the production of novelties. To this end, the theoretical and methodological bases of agroecology were analyzed through a literature review, specifically looking at the way it articulates the local with the global, complemented by an analysis of Brazilian agroecological experiences. It is emphasized that, based on the peasant way of doing agriculture, that is, on ecological/social co-evolution, also called co-production (the interaction between human beings and living nature), agroecology recognizes and revalues peasant knowledge, which involves the deep interactions of the farmer with his site (bio-physical and social).
As a "place science", a practice, and a movement, it specifically takes into consideration the local and empirical knowledge of farmers, which makes it possible to question and modify the paradigms underpinning the current agriculture that has disintegrated farmers' creative processes. In addition to revaluing the local, agroecology allows local knowledge to enter into dialogue with global knowledge, which is essential in the process of change needed to break out of the dominant logic of thought and give shape to new experiences. To achieve this articulation, agroecology involves new methodological approaches, seeking participatory methods of study and intervention that express themselves in the form of horizontal spaces of socialization and collective learning involving several actors with different kinds of knowledge. These processes promoted by agroecology favor the production of novelties at the local level for expansion to other levels, such as the global, through translocal agroecological networks.

Keywords: agroecology, creativity, global, local, novelty

Procedia PDF Downloads 209
112 Turkey at the End of the Second Decade of the 21st Century: A Secular or Religious Country?

Authors: Francesco Pisano

Abstract:

Islam has been an important topic in Turkey's institutional identity. Since the dawn of the Turkish Republic at the end of the First World War, the new Turkish leadership was urged to deal with the religious heritage of the Sultanate. Mustafa Kemal Ataturk, Turkey's first President, led the country through a process of internal change, substantially modifying not merely its democratic stance but also the way politics addressed the Muslim faith. Islam was banned from the public sector of society and drastically marginalized to the merely private sphere of citizens' lives. Headscarves were banned from institutional buildings, together with any other religious practice, while the country proceeded down a path of secularism and Westernization. This is demonstrated by the fact that even a newly elected Prime Minister, Recep Tayyip Erdoğan, was initially barred from taking office because of allegations that he had read a religious text while campaigning. Over the years, thanks to this initial internal shift, Turkey has often been seen by its Western partners as one of the few countries that had managed to find a perfect balance between a democratic stance and an inherently Islamic nature. In the early 2000s, this led many academics to believe that Ankara could eventually become the next European capital. Since then, the internal and external landscape of Turkey has changed drastically. Today, religion has once again become an important point of reference for Turkish politics, considering also the failure of the European negotiations and the ever more unstable external environment of the country. This paper addresses this issue, looking at the important role religion has played in Turkish society and the way it has been politicized since the early years of the Republic.
It moves from a more theoretical debate on secularism and the path of political Westernization of Turkey under Ataturk's rule to a more practical analysis of today's situation, passing through the failure of Ankara's accession to the EU and the currently tense political relations with its traditional NATO allies. The final objective of this research, therefore, is not to offer a meticulous opinion on Turkey's current international stance; that is left entirely to the personal consideration of the reader. Rather, it supplements the existing literature with a comprehensive and more structured analysis of the role Islam has played in Turkish politics from the early 1920s up until the domestic political revolution of the early 2000s, after the first electoral win of the Justice and Development Party (AKP).

Keywords: democracy, Islam, Mustafa Kemal Atatürk, Recep Tayyip Erdoğan, Turkey

Procedia PDF Downloads 194
111 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, the costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficiently sized systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. This paper presents an approach to quantifying the cost-benefit of adding resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
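A minimal sketch of the two index calculations described above, assuming per-minute energy samples and an idealized battery (no conversion losses and no charge/discharge power limits), might look like the following Python; the function name and the sample data are illustrative only:

```python
def simulate_storage(production_wh, consumption_wh, capacity_wh, soc_wh=0.0):
    """Step a simple battery model over per-interval energy samples and
    return (loss_of_energy_probability, expected_unserved_energy_wh).

    production_wh / consumption_wh: per-interval energy in watt-hours.
    capacity_wh: usable battery capacity; soc_wh: initial state of charge.
    Conversion losses and power limits are ignored in this sketch.
    """
    short_intervals = 0
    unserved_wh = 0.0
    for prod, cons in zip(production_wh, consumption_wh):
        soc_wh += prod - cons
        if soc_wh > capacity_wh:      # surplus beyond capacity is spilled
            soc_wh = capacity_wh
        elif soc_wh < 0.0:            # deficit: this demand goes unserved
            short_intervals += 1
            unserved_wh += -soc_wh
            soc_wh = 0.0
    return short_intervals / len(consumption_wh), unserved_wh

# Hypothetical minute-level data: a flat 10 Wh/min load, with solar
# production available only in the middle of the recording window.
load = [10.0] * 8
solar = [0.0, 0.0, 30.0, 30.0, 30.0, 0.0, 0.0, 0.0]
loep, eue = simulate_storage(solar, load, capacity_wh=40.0)
```

Re-running the simulation with a larger `capacity_wh`, or with a scaled-up production curve, shows directly how much each added increment of storage or generation reduces the two indices, which is the incremental benefit the consumer weighs against its cost.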

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 223
110 Providing Health Promotion Information by Digital Animation to International Visitors in Japan: A Factorial Design View of Nurses

Authors: Mariko Nishikawa, Masaaki Yamanaka, Ayami Kondo

Abstract:

Background: International visitors to Japan are at risk of travel-related illnesses or injuries that could result in hospitalization in a country where the language and customs are unique. Over twelve million international visitors came to Japan in 2015, and more are expected in the lead-up to the Tokyo Olympics. One consequence is a potentially greater demand on healthcare services by foreign visitors. The nurses who take care of them have anxieties and concerns about their knowledge of the Japanese health system. Objectives: An effective distribution of travel-health information is vital for facilitating care for international visitors. Our research investigates whether a four-minute digital animation (Mari Info Japan), designed and developed by the authors and applied to a survey of 513 nurses who take care of foreigners daily, could clarify travel-health procedures and reduce anxieties while making the material enjoyable to learn. Methodology: Respondents to a survey were divided into two groups. The intervention group watched Mari Info Japan. The control group read a standard guidebook. The participants were requested to fill in a two-page questionnaire called Mari Meter-X and the STAI-Y in English, and to mark a face scale, before and after the interventions. The questions dealt with knowledge of health promotion, the Japanese healthcare system, cultural concerns, anxieties, and attitudes in Japan. Data were collected from an intervention group (n=83) and a control group (n=83) of nurses at a hospital for foreigners in Japan from February to March 2016. We analyzed the data using Text Mining Studio for the open-ended questions and JMP for statistical significance. Results: We found that the intervention group displayed more confidence and less anxiety about taking care of foreign patients than the control group. The intervention group indicated greater comfort after watching the animation.
However, both groups were most likely to be concerned about language, the cost of medical expenses, informed consent, and choice of hospital. Conclusions: From the viewpoint of nurses, providing travel-health information to international visitors to Japan by digital animation was more effective than traditional methods, as it helped them be better prepared to treat travel-related diseases and injuries among international visitors. This study was registered under number UMIN000020867. Funding: Grant-in-Aid for Challenging Exploratory Research 2010-2012 & 2014-2016, Japanese Government.

Keywords: digital animation, health promotion, international visitor, Japan, nurse

Procedia PDF Downloads 298
109 Unveiling Adorno’s Concern for Revolutionary Praxis and Its Enduring Significance: A Philosophical Analysis of His Writings on Sociology and Philosophy

Authors: Marie-Josee Lavallee

Abstract:

Adorno’s reputation as an abstract and pessimistic thinker who indulged in a critique of capitalist society and culture without bothering to open prospects for change, and who had no interest in political activism, has recently begun to be questioned. This paper, which has a twofold objective, pushes such revisionist readings a step further by putting forward the thesis that revolutionary praxis was an enduring concern for Adorno, surfacing throughout his entire work. On the other hand, it holds that his understanding of the relationship between theory and praxis, explained here by reference to Ernst Bloch’s distinction between the warm and cold currents of Marxism, can help to interpret the paralysis of revolutionary practice in our own time in a new light. Philosophy and its tasks were an enduring topic of Adorno’s work from the 1930s to Negative Dialektik. The writings in which he develops these ideas stand among his most obscure and abstract, so their strong ties to the political have remained largely overlooked. Adorno’s undertaking of criticizing and ‘redeeming’ philosophy and metaphysics is inseparable from a concern for retrieving the capacity to act in the world and to change it. Philosophical problems are immanent in sociological problems, and vice versa, he underlines in his Metaphysik. Begriff und Probleme. The issue of truth cannot be severed from the contingent context of a given idea. As a critical undertaking extracting its contents from reality, which is what philosophy should be from Adorno’s perspective, philosophy has the potential to fully reveal the reification of the individual and of consciousness resulting from capitalist economic and cultural domination, thus opening the way to resistance and revolutionary change. While this project, in keeping with his usual method, is sketched mainly in negative terms, it also exhibits positive contours that depict a socialist society.
Only in the latter could human suffering end and mutilated individuals authentically experience reconciliation. That Adorno’s continuous plea for philosophy’s self-critique and renewal hides an enduring concern for revolutionary praxis emerges clearly from a careful philosophical analysis of his writings on philosophy and a selection of his sociological work, coupled with references to his correspondence. This study points to the need for a serious re-evaluation of Adorno’s relationship to the political, one that will bear on the interpretation of his whole oeuvre. In the second place, Adorno’s dialectical conception of theory and praxis is enlightening for our own time, since it suggests that we are experiencing a phase of creative latency rather than an insurmountable impasse.

Keywords: Frankfurt school, philosophy and revolution, revolutionary praxis, Theodor W. Adorno

Procedia PDF Downloads 111
108 Effect of Tooth Bleaching Agents on Enamel Demineralisation

Authors: Najlaa Yousef Qusti, Steven J. Brookes, Paul A. Brunton

Abstract:

Background: Tooth discoloration can be an aesthetic problem, and tooth whitening using carbamide peroxide bleaching agents is a popular treatment option. However, there are concerns about possible adverse effects, such as demineralisation of the bleached enamel, and the cause of this demineralisation is unclear. Introduction: Teeth can become stained or discoloured over time. Tooth whitening is an aesthetic solution for tooth discoloration. Bleaching solutions of 10% carbamide peroxide (CP) have become the standard agent used in dentist-prescribed, home-applied ’vital bleaching’ techniques. These materials release hydrogen peroxide (H₂O₂), the active whitening agent. However, there is controversy in the literature regarding the effect of bleaching agents on enamel integrity and enamel mineral content. The purpose of this study was to establish whether carbamide peroxide bleaching agents affect the acid solubility of enamel (i.e., make teeth more prone to demineralisation). Materials and Methods: Twelve human premolar teeth were sectioned longitudinally along the midline and varnished to leave the natural enamel surface exposed. The baseline demineralisation behavior of each tooth half in acid was established by sequential exposure to four vials containing 1 ml of 10 mM acetic acid (1 minute/vial). This was followed by exposure to 10% CP for 8 hours. After washing in distilled water, the tooth half was sequentially exposed to four further vials of acid to test whether the acid susceptibility of the enamel had been affected. The corresponding tooth half acted as a control and was exposed to distilled water instead of CP. Mineral loss was determined by measuring the [Ca²⁺] and [PO₄³⁻] released into each vial using a calcium ion-selective electrode and the phosphomolybdenum blue method, respectively. The effect of bleaching on the tooth surfaces was also examined using SEM.
Results: Exposure to carbamide peroxide did not significantly alter the susceptibility of enamel to acid attack, and SEM of the enamel surface revealed only a slight alteration in surface appearance. SEM images of the control enamel surface showed a flat enamel surface with some shallow pits, whereas the bleached enamel showed increased surface porosity and some areas of mild erosion. Conclusions: Exposure to H₂O₂ equivalent to 10% CP does not significantly increase the subsequent acid susceptibility of enamel, as determined by Ca²⁺ release from the enamel surface. The effects of bleaching on mineral loss were indistinguishable from those of distilled water in the experimental system used. However, some surface differences were observed by SEM. The phosphomolybdenum blue method for phosphate is compromised by peroxide bleaching agents because of their oxidising properties. However, the Ca²⁺ electrode is unaffected by oxidising agents and can be used to determine mineral loss in the presence of peroxides.

Keywords: bleaching, carbamide peroxide, demineralisation, teeth whitening

Procedia PDF Downloads 114
107 Cardiac Arrest after Cardiac Surgery

Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov

Abstract:

Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized because of the potential recurrence of hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Of the 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a rate of 100-120 compressions per minute were initiated upon detection of cardiac arrest.
Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). If necessary, infusions of inotropes and vasopressors were used; for the prevention of brain edema and the restoration of an adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective cardiopulmonary resuscitation (CPR). All of these measures contributed to improved CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.

Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients

Procedia PDF Downloads 43