Search results for: bring your own device
557 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network
Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour
Abstract:
Software-defined networking (SDN) has in recent years come into view for many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes reside together within a single device in the network infrastructure such as a switch or router, the two planes are kept separate in SDNs. All critical decisions about packet routing are made on the network controller, and the data-plane devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and the loss of network service for legitimate users. Thus, the protection of this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day. It is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden placed on an SDN network by numerous DDoS attacks. The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network
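As a hedged illustration of the kind of pipeline this abstract describes — flow statistics streamed through Kafka into a Spark ML classifier — here is a minimal sketch. The topic name, feature columns, and saved model path are assumptions for illustration; the paper does not publish its implementation.

```python
# Minimal sketch (assumptions: a Kafka topic "sdn-flows" carrying CSV flow
# records, and a classifier pipeline trained offline and saved to
# "models/ddos_rf"). Requires pyspark with the Kafka connector available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.ml import PipelineModel

spark = SparkSession.builder.appName("BFDD-S-sketch").getOrCreate()

# Read per-flow statistics streamed from the SDN controller via Kafka.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "sdn-flows")
       .load())

# Parse the CSV payload into numeric per-flow features (hypothetical schema).
cols = F.split(F.col("value").cast("string"), ",")
flows = raw.select(
    cols.getItem(0).cast("double").alias("pkt_rate"),
    cols.getItem(1).cast("double").alias("byte_rate"),
    cols.getItem(2).cast("double").alias("flow_duration"),
    cols.getItem(3).cast("double").alias("src_ip_entropy"),
)

# Score each micro-batch with the pre-trained pipeline (e.g. VectorAssembler
# plus a RandomForestClassifier saved as a PipelineModel).
model = PipelineModel.load("models/ddos_rf")
alerts = model.transform(flows).filter(F.col("prediction") == 1.0)

# Emit suspected DDoS flows; a real deployment would push these back to the
# controller so it can install drop rules for the offending flows.
query = alerts.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```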
Procedia PDF Downloads 171
556 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of off-Site Power Simulation
Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin
Abstract:
CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID sodium-cooled fast reactor. Investigations of the control and regulation requirements for operating this PCS during operating, incidental and accidental transients are necessary to adapt core heat removal. To this end, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operations, incidents and accidents. This methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses and bringing the reactor into a stable state. Furthermore, the multi-objective optimization complies with safety and operating constraints. Operating, incidental and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor, each defined by a setpoint, a controller and an actuator. In the multi-objective problem, the parameters used in the optimization are the setpoints and the settings of the controllers associated with the regulations included in the sequence. In this way, the methodology allows designers to define an optimized control strategy of the plant specific to the studied sequence and hence to fine-tune PCS piloting. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss of off-site power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed and the water flow through the heat sink. These regulations are chosen in order to minimize thermal stresses on the gas-gas heat exchanger, on the sodium-gas heat exchanger and on the vessel. The main results of this work are optimal setpoints for the three regulations. Moreover, Proportional-Integral-Derivative (PID) control settings are considered, and efficient actuators for the controls are chosen through sensitivity analysis. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model
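The optimization layer described here — evolutionary search over regulation setpoints driven by surrogate models instead of full system-code runs — can be sketched as follows. The toy objective function stands in for CATHARE2 outputs, and the two setpoint variables and both objectives are illustrative assumptions, not the study's actual inputs.

```python
# Hedged sketch: Gaussian-process surrogates are trained on a small set of
# expensive "system-code" runs, then a simple evolutionary loop searches the
# setpoints on the cheap surrogates, keeping the non-dominated (Pareto) set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def system_code_stub(x):
    """Stand-in for a CATHARE2 run: two normalized thermal-stress objectives."""
    na_temp_setpoint, turbo_speed_setpoint = x
    f1 = (na_temp_setpoint - 0.3) ** 2 + 0.1 * turbo_speed_setpoint   # Na-gas HX stress
    f2 = (turbo_speed_setpoint - 0.7) ** 2 + 0.1 * na_temp_setpoint   # gas-gas HX stress
    return np.array([f1, f2])

# 1) Design of experiments: a few expensive runs train one surrogate per objective.
X = rng.uniform(0.0, 1.0, size=(30, 2))
Y = np.array([system_code_stub(x) for x in X])
surrogates = [GaussianProcessRegressor().fit(X, Y[:, k]) for k in range(2)]

# 2) Mutation-only evolutionary search with rough non-dominated selection.
pop = rng.uniform(0.0, 1.0, size=(50, 2))
for _ in range(100):
    children = np.clip(pop + rng.normal(0.0, 0.05, pop.shape), 0.0, 1.0)
    cand = np.vstack([pop, children])
    Fhat = np.column_stack([s.predict(cand) for s in surrogates])
    nondom = np.array([
        not np.any(np.all(Fhat <= Fhat[i], axis=1) & np.any(Fhat < Fhat[i], axis=1))
        for i in range(len(cand))
    ])
    survivors = cand[nondom]
    pad = rng.uniform(0.0, 1.0, size=(max(0, 50 - len(survivors)), 2))
    pop = np.vstack([survivors, pad])[:50]

# 3) As in the paper, the surrogate-optimal setpoints are finally verified
#    with a direct system-code calculation.
print(pop[0], system_code_stub(pop[0]))
```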
Procedia PDF Downloads 309
555 Nation Building versus Self Determination: Thai State’s Response to Insurgency in South
Authors: Sunaina Sunaina
Abstract:
The emergence of Thailand as a modern nation involved the amalgamation of several minority groups. The nation tried to mitigate these diversities in the name of nationalism against the backdrop of the colonial powers' presence in neighboring nations. However, the continued imposition of modern nation-building processes (a Western concept) in the post-colonial era deepened feelings of alienation among the minority groups and led to separatist conflicts. Whatever form these conflicts take, they will affect the security of the nation as well as the Southeast Asian region. This paper explores the possible factors behind the state policies adopted by the government of Thailand to manage the insurgency in the southern provinces. The protracted insurgency in the South has historical roots: the Pattani kingdom had a glorious period in trade, commerce and education, and its assimilation was never accepted by the leaders of these areas. Since the assimilation of the southern provinces into the state, state policy has been an important factor in promoting or mitigating the insurgency. Initial protests from the elite class of the southern provinces escalated into a more organized and violent uprising after the Second World War. Only in the 1990s did relative peace prevail for some time. The violence reemerged in 2004 with greater intensity, and to this day the area suffers from violence. Successive Prime Ministers dealt with the insurgency in different ways; at times a very hard-line approach was adopted, especially under the premiership of Thaksin Shinawatra. Recently, the peace talks that were started during the period of Yingluck Shinawatra and carried forward by the junta government have also halted, and the region again remains in a very volatile state. Violence in these provinces not only calls into question the capability of the government to provide a political solution to the problem, but also emerges as a major threat to the internal security of the state. In the current era, in which global terrorism is spreading fast, such vulnerable areas may serve as new ground for its proliferation in Southeast Asia. The paper attempts to understand how Thailand's historical experience of security determines a distinct approach to national unity that limits the prospects for autonomy in the South. In conjunction with this experience, it is the nature of national politics and leadership that influences the nature of policies on the ground in southern Thailand. The paper also tries to bring out the conflict between state sovereignty and the self-determination demanded by many in the southern provinces.
Keywords: insurgency, southern Thailand, security, nation building
Procedia PDF Downloads 126
554 Optimizing the Design Parameters of Acoustic Power Transfer Model to Achieve High Power Intensity and Compact System
Authors: Ariba Siddiqui, Amber Khan
Abstract:
The need for bio-implantable devices in the medical sciences has been increasing day by day; however, charging these devices remains a major issue. Batteries, the most common method of powering implants, have a limited lifetime and a bulky form. Therefore, as a replacement for batteries, acoustic power transfer (APT) technology is now accepted as the most suitable technique for wirelessly powering medical implants. The basic APT model consists of piezoelectric transducers that work on the principle of the converse piezoelectric effect at the transmitting end and the direct piezoelectric effect at the receiving end. This paper provides mechanistic insight into the parameters affecting the design and efficient operation of acoustic power transfer systems. Optimum design considerations are presented that help to compress the size of the device and augment the intensity of the pressure wave. A COMSOL model of a PZT (lead zirconate titanate) transducer was developed, simulated and analyzed over a frequency spectrum. The simulation results showed that the efficiency of these devices depends strongly on the operating frequency: a wrong choice of operating frequency leads to high absorption of the acoustic field inside the tissue (medium), poor power strength and heavy transducers, which in turn affect the overall configuration of the acoustic system. Considering all the tradeoffs, the simulations were repeated at an optimum frequency (900 kHz), which reduced the transducer's thickness to 1.96 mm and augmented the power strength to an intensity of 432 W/m². The results obtained from the second simulation thus yield lower attenuation, lightweight systems and high power intensity, and comply with the safety limits set by the U.S. Food and Drug Administration (FDA). It was also found that the chosen operating frequency enhances the directivity of the acoustic wave at the receiver side.
Keywords: acoustic power, bio-implantable, COMSOL, Lead Zirconate Titanate, piezoelectric, transducer
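Two textbook relations underlie the reported trade-off between operating frequency, transducer thickness and tissue attenuation: the half-wavelength thickness-mode resonance t = c/(2f), and exponential attenuation of acoustic intensity in tissue. The sketch below evaluates both with assumed material constants; only the 900 kHz frequency and 432 W/m² intensity are taken from the abstract, so the outputs are indicative orders of magnitude rather than the paper's COMSOL results.

```python
# Hedged sketch: half-wavelength PZT thickness and soft-tissue attenuation.
# Material constants are illustrative assumptions, not the paper's values.
import math

c_pzt = 4350.0          # assumed longitudinal sound speed in PZT, m/s
f = 900e3               # operating frequency chosen in the study, Hz
t = c_pzt / (2 * f)     # thickness-mode resonant thickness
print(f"thickness ~ {t * 1e3:.2f} mm")  # ~2.4 mm, same order as the reported 1.96 mm

# Intensity loss in dB: a * f_MHz * depth_cm, with a typical soft-tissue
# attenuation coefficient a ~ 0.5 dB/(cm*MHz) (assumption).
a = 0.5
I0 = 432.0              # W/m^2, intensity reported in the abstract
depth_cm = 5.0
loss_db = a * (f / 1e6) * depth_cm
I = I0 * 10 ** (-loss_db / 10)
print(f"intensity after {depth_cm} cm ~ {I:.0f} W/m^2 ({loss_db:.2f} dB loss)")
```

The same relations also show why a poorly chosen higher frequency hurts twice: attenuation in dB grows linearly with frequency, while a lower frequency forces a thicker, heavier transducer.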
Procedia PDF Downloads 174
553 A Review on Development of Pedicle Screws and Characterization of Biomaterials for Fixation in Lumbar Spine
Authors: Shri Dubey, Jamal Ghorieshi
Abstract:
Instability of the lumbar spine is caused by various factors, including degenerative disc disease, herniated discs, traumatic injuries and other disorders. Pedicle screws are widely used as the main fixation device to construct rigid linkages between vertebrae and provide a fully functional and stable spine. Various technologies and methods have been used to restore stabilization; however, loosening of pedicle screws is the main cause of concern for neurosurgeons. Loosening can result from poor bone quality associated with osteoporosis as well as from the type of pedicle screw used. The compatibility and stability of pedicle screws in bone depend on design (thread design, length and diameter) and material. Grip length and pullout strength affect the motion and stability of the spine as it goes through extension, flexion and rotation. The pullout strength of augmented pedicle screws is increased in both primary and salvage procedures, by 119% (p = 0.001) and 162% (p = 0.01), respectively. Self-centering pedicle screws inserted at different trajectories (0°, 10°, 20° and 30°) show the same pullout strength as insertion in a straight-ahead trajectory. An outer cylindrical and inner conical screw shape shows the highest pullout strength in Grade 5 and Grade 15 foams (synthetic bone), and an outer cylindrical and inner conical shape with a V-shaped thread exhibits the highest pullout strength in all foam grades. The maximum observed pullout strength occurs in the axial pullout configuration at 0°. For Grade 15 (240 kg/m³) foam, a decline in pullout strength is reported, and the largest decrease in pullout strength is reported for Grade 10 (160 kg/m³) foam. The maximum pullout strength was 2176 N (0.32 g/cm³ Sawbones) across all densities. The Type 1 pedicle screw shows the best fixation owing to its smaller conical core diameter and smaller thread pitch (Screw 2 with 2 mm; Screws 1 and 3 with 3 mm).
Keywords: polymethylmethacrylate, PMMA, classical pedicle screws, CPS, expandable poly-ether-ether-ketone shell, EPEEKS, translaminar facet screw, TLFS, poly-ether-ether-ketone, PEEK, transfacetopedicular screw, TFPS
Procedia PDF Downloads 155
552 Real Time Detection of Application Layer DDos Attack Using Log Based Collaborative Intrusion Detection System
Authors: Farheen Tabassum, Shoab Ahmed Khan
Abstract:
The severity of attacks on networks and critical infrastructures has been climbing in recent years and appears set to continue doing so. The distributed denial-of-service attack is the most prevalent and easiest attack on the availability of a service, owing to the easy availability of large botnets at a cheap price and the general lack of protection against these attacks. An application-layer DDoS attack is a DDoS attack targeted at a web server, application server or database server. These attacks are much more sophisticated and challenging, as they get around most conventional network security devices: the attack traffic often impersonates normal traffic and cannot be recognized through network-layer anomalies. Conventional single-host security systems are becoming gradually less effective in the face of such complicated and synchronized multi-front attacks. To protect against such attacks and intrusions, cooperation among all network devices is essential. To this end, a collaborative intrusion detection system (CIDS) is proposed, in which multiple network devices share valuable information to identify attacks, since a single device might not be capable of sensing malevolent activity on its own. This allows decisions to be taken after analyzing information collected from different sources. The proposed attack detection technique helps to detect seemingly benign packets that target the availability of critical infrastructure, and the proposed solution methodology enables incident response teams to detect and react to DDoS attacks at the earliest stage, ensuring that the uptime of the service remains unaffected. Experimental evaluation shows that the proposed collaborative detection approach is much more effective and efficient than previous approaches.
Keywords: Distributed Denial-of-Service (DDoS), Collaborative Intrusion Detection System (CIDS), Slowloris, OSSIM (Open Source Security Information Management tool), OSSEC HIDS
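The corroboration logic at the heart of a CIDS — flag a source only when several independent devices report it within a time window — can be sketched as a toy aggregator. Field names, window length and threshold below are illustrative assumptions; the actual system builds on OSSIM and OSSEC HIDS (see keywords), not on this snippet.

```python
# Hedged sketch of collaborative alert correlation.
from collections import defaultdict

WINDOW_S = 60          # correlation window in seconds (assumption)
K_CORROBORATE = 3      # minimum number of independent sensors (assumption)

def correlate(alerts):
    """alerts: iterable of (timestamp_s, sensor_id, src_ip) tuples.

    Returns the set of sources corroborated by at least K_CORROBORATE
    distinct sensors inside the same time window.
    """
    seen = defaultdict(set)            # (window index, src_ip) -> {sensor_id}
    flagged = set()
    for ts, sensor, src in alerts:
        key = (int(ts // WINDOW_S), src)
        seen[key].add(sensor)
        if len(seen[key]) >= K_CORROBORATE:
            flagged.add(src)           # enough devices agree -> raise alarm
    return flagged

alerts = [
    (3, "hids-1", "10.0.0.9"), (11, "fw-edge", "10.0.0.9"),
    (25, "ids-dmz", "10.0.0.9"),       # third independent sensor -> flagged
    (30, "hids-1", "10.0.0.7"),        # single sensor -> ignored
]
print(correlate(alerts))  # {'10.0.0.9'}
```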
Procedia PDF Downloads 355
551 Investigating Unplanned Applications and Admissions to Hospitals of Children with Cancer
Authors: Hacer Kobya Bulut, Ilknur Kahriman, Birsel C. Demirbag
Abstract:
Introduction and Purpose: The lives of children with cancer are negatively affected by long-term hospitalizations due to complications arising from diagnosis or treatment. The children's parents are known to have difficulties in meeting their children's needs and providing home care after cancer treatment or during remission. Supporting these children and their parents with planned discharge training, begun in the hospital and continued into home care, reduces hospital applications, hospitalizations, hospital costs and length of hospital stay, and increases the satisfaction of children with cancer and their families. This study was conducted to investigate unplanned hospital applications and re-hospitalizations among these children and their parents. Methods: The study was carried out with 65 children with hematological malignancies, aged 0-17 years, and their families in the hematology clinic and polyclinic of a university hospital in Trabzon. Data were collected by survey between August and November 2015 through face-to-face interviews and evaluated using counts, percentages and the chi-square test. Findings: Most of the children had leukemia (90.8%), and 49.2% had been ill for over 13 months. Few of the parents stated that they had received discharge training (32.3%) or home care training (24.6%), but most of them (69.2%) considered themselves adequate in providing home care. Very few parents (6.2%) received home care training after their children were discharged, and the majority of parents (61.5%) faced difficulties in home care with no one nearby to call. The parents stated that, in providing care to their children with hematological malignancies, they faced difficulty in feeding them (74.6%), explaining their disease (50.0%), giving oral medication (47.5%), providing hygiene (43.5%) and providing oral care (39.3%). The question 'What are the emergency situations in which you have to bring your children to a doctor immediately?' was answered with fever (89.2%), severe nausea and vomiting (87.7%), hemorrhage (86.2%) and pain (81.5%). The study showed that 50.8% of the children had unplanned applications to hospital and 33.8% had unplanned hospitalizations, the leading causes being fever and pain. The frequency of applications (78.8%) and hospitalizations (81.8%) was higher for boys, and a statistically significant difference was found between gender and unplanned applications (χ²=4.779; p=0.02). Applications (48.5%) and hospitalizations (40.9%) were lower for parents who had received hospital discharge training, and a significant difference was determined between receiving training and unplanned hospitalizations (χ²=8.021; p=0.00). Similarly, applications (30.3%) and hospitalizations (40.9%) were lower for those who had received home care training, and a significant difference was determined between receiving home care training and unplanned hospitalizations (χ²=4.758; p=0.02). Conclusion: Caregivers of children with cancer did not receive training related to home care and treatment complications after discharge from hospital, so they faced difficulties in providing home care, and this led to an increase in unplanned hospital applications and hospitalizations.
Keywords: cancer, children, unplanned application, unplanned hospitalization
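For readers unfamiliar with the chi-square statistics quoted above (e.g. χ²=4.779, p=0.02 for gender vs. unplanned application), a minimal sketch of the computation on a 2x2 contingency table follows. The counts below are invented purely to show the mechanics; the study's raw table is not given in the abstract, so the printed statistic will not match the reported one.

```python
# Hedged sketch of a 2x2 chi-square test of independence.
from scipy.stats import chi2_contingency

# rows: boys, girls; columns: unplanned application (yes, no) -- invented counts
table = [[22, 14],
         [11, 18]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
```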
Procedia PDF Downloads 268
550 Nickel Removal from Industrial Wastewater by Eucalyptus Leaves and Poplar Ashes
Authors: Negin Bayat, Nahid HasanZadeh
Abstract:
Effluents from industries such as metalworking, battery manufacturing and mining contain heavy metals and are considered problematic for both humans and the environment. These heavy metals include cadmium, copper, zinc, nickel, chromium, lead and others, along with pollutants such as cyanide. Different physicochemical and biological methods are used to remove heavy metals, such as sedimentation, coagulation, flotation, chemical precipitation, filtration, membrane processes (reverse osmosis and nanofiltration), ion exchange, biological treatment and adsorption on activated carbon. These methods are generally either expensive or ineffective. In recent years, considerable attention has been given to the removal of heavy metal ions from solution by adsorption onto discarded, low-cost materials. In this applied study, nickel removal by an adsorption process using powdered eucalyptus leaves and poplar ash was investigated. The effect of various parameters on metal removal, such as pH, adsorbent dose, contact time and stirring speed, was studied using a discontinuous (batch) method. The research was conducted in aqueous solutions at laboratory scale, optimum adsorption conditions were obtained, and the study was then extended to real wastewater samples. The nickel concentration in the wastewater was measured before and after the adsorption process. In all experiments, the remaining nickel was measured using an atomic absorption spectrometry device at a wavelength of 382 nm after an appropriate time and filtration. The results showed that increasing either the adsorbent dose or the pH increases the metal removal rate. Nickel removal increased during the first 60 minutes; the adsorption rate then remained constant and reached equilibrium. The desired removal rate was observed with 40 mg of adsorbent in 100 ml of solution at pH 9.5. According to the results, the best removal in this study, 99.76%, was observed at the 40 mg dose using a combination of eucalyptus leaves and poplar ash. Thus, this combined method can be used as an inexpensive and effective adsorbent for the removal of nickel from aqueous solutions.
Keywords: absorption, wastewater, nickel, poplar ash, eucalyptus leaf, treatment
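The bookkeeping behind the reported 99.76% is the standard batch-adsorption relation E = (C0 − Ce)/C0 × 100, evaluated per adsorbent dose. A minimal sketch follows; the concentrations are invented for illustration, chosen so that the 40 mg dose reproduces the reported optimum, and only that 40 mg / 99.76% operating point is quoted from the abstract.

```python
# Hedged sketch of batch-adsorption removal-efficiency calculation.
def removal_efficiency(c0_mg_l: float, ce_mg_l: float) -> float:
    """Percent of nickel removed: E = (C0 - Ce) / C0 * 100."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100.0

# Adsorbent dose (mg per 100 ml of mixed eucalyptus-leaf / poplar-ash powder)
# vs. residual Ni concentration measured by AAS (mg/l, invented values).
runs = {10: 6.2, 20: 3.1, 30: 0.9, 40: 0.024}
c0 = 10.0  # initial Ni concentration, mg/l (assumption)
for dose, ce in runs.items():
    print(f"{dose} mg: E = {removal_efficiency(c0, ce):.2f}%")
# The 40 mg entry reproduces the reported optimum of ~99.76%.
```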
Procedia PDF Downloads 22
549 Determination of Rare Earth Element Patterns in Uranium Matrix for Nuclear Forensics Application: Method Development for Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Measurements
Authors: Bernadett Henn, Katalin Tálos, Éva Kováss Széles
Abstract:
During the last 50 years, the worldwide spread of nuclear techniques has induced several new problems for the environment and human life. Nowadays, owing to the increasing risk of terrorism worldwide, the potential occurrence of terrorist attacks using weapons of mass destruction containing radioactive or nuclear materials, e.g. dirty bombs, is a real threat. Uranium pellets, for instance, are among the nuclear materials suitable for making such weapons. Nuclear forensics focuses mainly on determining the origin of confiscated or found nuclear and other radioactive materials that could be used to make a radioactive dispersal device. One of the most important signatures in nuclear forensics for tracing the origin of a material is the rare earth element (REE) pattern in the seized or found radioactive or nuclear samples. The concentration and the normalized pattern of the REE can be used as evidence of uranium origin. The REE comprise the fourteen lanthanides plus scandium and yttrium, which are mostly found together and at very low concentrations in uranium pellets. The difficulties of REE determination by the ICP-MS technique are the uranium matrix (high concentration of uranium) and the interferences among the lanthanides. In this work, our first aim was to develop an effective chemical sample preparation process using extraction chromatography to separate the uranium matrix and the rare earth elements from each other, following and modifying procedures from the literature. Our second aim was to optimize the ICP-MS measurement of REE concentration. During method development, a REE model solution was first tested on two different types of extraction chromatographic resins (LN® and TRU®) in different acidic media to evaluate the lanthanide separation. Uranium matrix was then added to the model solution and tested under the same conditions. The methods were tested and validated using REE UOC (uranium ore concentrate) reference materials. Samples were analyzed by sector field mass spectrometry (ICP-SFMS).
Keywords: extraction chromatography, nuclear forensics, rare earth elements, uranium
Procedia PDF Downloads 310
548 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India
Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma
Abstract:
Mizoram state has the highest lung cancer incidence rate in India, owing to high consumption of tobacco and tobacco products, compounded by food habits. While smoking is mainly responsible for this incidence, the effect of inhaling indoor radon gas cannot be dismissed: the hazardous nature of this radioactive gas and its progeny for human populations is well established worldwide, and radiation damage to bronchial cells can make radon the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it can bring a risk of cancer incidence. Hence, estimating the indoor radon concentration is important to provide a useful reference for radiation effects, to establish safety measures and to create a baseline for further case-control studies. Indoor radon/thoron concentrations in Mizoram were measured during 2015-2016 in 41 dwellings selected on the basis of spot gamma background radiation and house construction type. The dwellings were monitored for one year, in 4-month cycles to capture seasonal variation, for the indoor concentration of radon gas and its progeny and for outdoor and indoor gamma doses. A time-integrated method using Solid State Nuclear Track Detector (SSNTD)-based single-entry pin-hole dosimeters was used to measure the indoor radon/thoron concentrations. Indoor and outdoor gamma dose measurements were carried out using Geiger-Muller survey meters, and the seasonal variation of the indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 to 144.72 Bq/m³, with an average of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 to 54.19 Bq/m³, with an average of 35.91 Bq/m³; both are below the permissible limit. The spot survey of gamma background radiation levels varied between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, all within acceptable limits. From these results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. Finding epidemiological evidence linking natural radiation to the high cancer incidence would require a case-control study, which is beyond the present scope. However, the measured data provide a baseline for further studies.
Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation
Procedia PDF Downloads 144
547 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU
Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman
Abstract:
Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, Family-Centred Care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to Substitute Decision Makers (most commonly parents). Most models of shared decision-making employ both the parents' and the medical team's perspectives but do not recognize the inherent asymmetry of information and experience: they ask parents to act like physicians in evaluating technical data while encouraging physicians to refrain from strong medical opinions and proposals. Nor do they fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, how to mitigate disagreement. Introducing a mild form of paternalism can harness the unique skill sets both parents and clinicians bring to shared decision-making and ultimately work towards decision-making in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form is known as Beneficent Paternalism, and it justifies our encouraging physicians to root down in their own medical expertise and propose treatment plans informed by that expertise, by standards of care and by the parents' values. This does not mean we forget that paternalism was historically justified on 'beneficent' grounds; our recommendation, rather, is that a re-integration of mild paternalism is appropriate within the current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and ultimately its use in promoting family-centred care and the patient's best interests and in reducing moral distress. A distinctive feature of the NICU is that communication regarding a patient's treatment is conducted exclusively with substitute decision-makers and not with the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU has no prior wishes, values or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values and beliefs of the parents become the map upon which clinical proposals are made, giving extra weight to the family's decision-making responsibility. This is why family-directed care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should root down firmly in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism.
Keywords: care, ethics, expertise, NICU, paternalism
Procedia PDF Downloads 146
546 The Coexistence of Creativity and Information in Convergence Journalism: Pakistan's Evolving Media Landscape
Authors: Misha Mirza
Abstract:
In recent years, the definition of journalism in Pakistan has changed, as have people's mindsets and their approach to a news story. For audiences, news has become more interesting than a drama or a film. This research thus provides an insight into Pakistan's evolving media landscape. It tries not only to bring forth the outcomes of cross-platform cooperation between print and broadcast journalism but also to give an insight into the interactive data visualization techniques being used. Storytelling in Pakistani journalism has evolved from depicting merely the truth to tweaking, fabricating and producing docu-dramas. The paper aims to examine how news is translated into a visual. Pakistan possesses a diverse cultural heritage, and by engaging audiences through media, this history translates into today's storytelling platforms. The paper explains how journalists are thriving in a converging media environment and provides an analysis of the narratives in today's television talk shows. 'Jack of all, master of none' is being challenged by journalists today: one has to be a quality information gatherer and an effective storyteller at the same time. Are journalists really looking more at what sells than at what matters? Express Tribune is a very popular news platform among the youth; not only is its newspaper more attractive than its competitors', but its style of narrative and interactive web stories also leads to well-rounded news. Interviews are used as the basic methodology to gain insight into how data visualization is accomplished. The quest to distinguish the visualization of information from the visualization of knowledge has led the author to delve into the work of David McCandless in his book 'Knowledge Is Beautiful'. Journalism in Pakistan has evolved from information to a combination of knowledge, infotainment and comedy. What is most criticized by society most often becomes the breaking news. Circulation in today's world is carried out through cultural and social networks. In recent times, there have been many examples of people gaining overnight popularity by releasing songs with substandard lyrics or senseless videos, perhaps because creativity has taken over from information. This paper thus discusses the various platforms of convergence journalism from Pakistan's perspective. The study concludes by showing how the Pakistani pop culture of truck art coexists with all the platforms of convergence journalism. The slapstick humor and 'jhatka' in Pakistani talk shows have evolved from Pakistani truck-art poetry. Mobile journalism has overtaken all the other mediums of journalism; however, Pakistani culture coexists with the converging landscape.
Keywords: convergence journalism in Pakistan, data visualization, interactive narrative in Pakistani news, mobile journalism, Pakistan's truck art culture
Procedia PDF Downloads 285
545 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems
Authors: L. Kiefer, C. Richter, G. Reinhart
Abstract:
The rising complexity of production systems, driven by an increasing number of variants up to customer-innovated products, leads to requirements that hierarchical control systems are not able to fulfil. Factory planners can therefore install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. This approach focuses on attribute-based identification in order to avoid quantity-dependent identification costs. Instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is identified directly from its attributes. To facilitate this, it is recommended to include the identification step and the corresponding sensors within the handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort of integrating identification processes into handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as the handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether a combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. By analysing a list of more than a thousand possible attributes, first investigations have shown that five main characteristics (weight, form, colour, amount, and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools, for example a sensor pool, are presented. These tools encode the generated expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
Keywords: agent systems, autonomous control, handling systems, identification
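A minimal sketch of the compatibility-matrix idea — categorized sensors matched against handling systems, then filtered by identification time so that identification can run in parallel with handling — is given below. All entries (sensor types, times, mounting constraints) are illustrative assumptions, not the paper's actual matrix.

```python
# Hedged sketch of sensor/handling-system matching via a compatibility check.
ATTRIBUTES = {"weight", "form", "colour", "amount", "subattribute_position"}

sensors = {
    "scale":      {"covers": {"weight"}, "id_time_s": 0.3},
    "2d_camera":  {"covers": {"form", "colour", "amount"}, "id_time_s": 0.2},
    "3d_scanner": {"covers": {"form", "subattribute_position"}, "id_time_s": 1.5},
}
handlers = {
    "conveyor":    {"mount_ok": {"scale", "2d_camera"}, "handling_time_s": 0.8},
    "6axis_robot": {"mount_ok": {"2d_camera", "3d_scanner"}, "handling_time_s": 2.0},
}

def compatible(handler: str, sensor: str) -> bool:
    """Sensor must be mountable on the handler AND identify within the
    handling time, so identification runs in parallel with handling."""
    h, s = handlers[handler], sensors[sensor]
    return sensor in h["mount_ok"] and s["id_time_s"] <= h["handling_time_s"]

for h in handlers:
    for s in sensors:
        print(f"{h:12s} x {s:10s} -> {'ok' if compatible(h, s) else '-'}")
```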
Procedia PDF Downloads 177
544 Practical Skill Education for Doctors in Training: Economical and Efficient Methods for Students to Receive Hands-on Experience
Authors: Nathaniel Deboever, Malcolm Breeze, Adrian Sheen
Abstract:
Basic surgical and suturing techniques are a fundamental requirement for all doctors. In order to gain confidence and competence, doctors in training need to receive sufficient teaching and, just as importantly, practice. Young doctors with an apt level of expertise in these simple surgical skills, which are often used in the Emergency Department, can help alleviate some of the pressure during a busy evening. Unfortunately, learning these skills can be quite difficult during medical school or even during the junior doctor years. The aim of this project was to adequately train medical students attending the University of Sydney's Nepean Clinical School through a series of workshops highlighting practical skills, with the hope of extending this program to junior doctors in the hospital. The sessions taught basic skills via tutorials and demonstrations and then cemented these proficiencies with practical exercises. In such an endeavor, it is fundamental to employ models that appropriately resemble what students will encounter in the clinical setting. The sustainability of the workshops is similarly important to the continuity of such a program. To address both these challenges, the authors developed models including suturing platforms, knot-tying and vessel-ligation stations, shave and punch biopsy models, and an ophthalmologic foreign body device. The unique aspect of this work is the use of hands-on teaching sessions to address a gap in the doctors-in-training and junior doctor curriculum. Presented in this poster are our approaches to creating models that do not use animal products and therefore require neither special facilities nor particular disposal procedures. Covering numerous skills beneficial to all young doctors, these models are easily replicable and affordable. This work allows for countless sessions at low cost, providing enough practice for students to perform these skills confidently, as shown by attendee questionnaires.
Keywords: medical education, surgical models, surgical simulation, surgical skills education
Procedia PDF Downloads 157
543 The Future of the Architect's Profession in France with the Emergence of Building Information Modelling
Authors: L. Mercier, D. Beladjine, K. Beddiar
Abstract:
The digital transition of building in France brings many changes; some professionals have been able to face them very quickly, while others are struggling to find their place and to see what BIM can bring to their profession. BIM has already been adopted or initiated by construction professionals. However, this change, which can be drastic for some, prevents them from integrating it definitively. This is the case with architects: the profession is divided over the practice of BIM. The risk of not adopting this new working method now, and of refusing its new digital tools, leads us to question the future of the profession in view of the gap that is likely to open up within project management. To address the subject efficiently, our work was based on a literature review of BIM and of the architect's profession, which allowed us to establish links between these two subjects. Observing the economic model towards which agencies tend, and the trend in sought-after profiles, made it possible to identify the opportunities and obstacles likely to affect the future of the architect's profession. This research leads to the conclusion that the model currently implemented by firms does not allow them to integrate BIM within their structure. A solution hypothesis was then put forward, focusing on the development of agencies through a diversity of profiles and skills to be integrated internally, with the aim of diversifying their competences and their business practices. To test this hypothesis of a multidisciplinary agency model, we conducted a survey of architectural firms. The model is inspired by Anglo-Saxon countries, whose practice differs from the French model. The results showed a risk that small agencies will gradually disappear from the market in favor of those that adopt this BIM working method. This is why the architectural profession must first look at what is happening in its training before seeking to diversify the profiles integrated into its structures. This directs the study to the training of architects. French architecture schools generally lag behind engineering schools, though the situation is improving slightly with the emergence of master's programs and BIM options during the university course. If the training of architects develops towards learning BIM, and agencies are willing to integrate different but complementary profiles, then they will develop their skills internally and open their profession to new functions. Taking on BIM management roles in projects will allow architects to remain in control, thanks to their overall vision of the project. In addition, the integration of BIM, and more generally of life-cycle analysis of the structure, will make it possible to guarantee eco-design and eco-construction by addressing the sustainable development constraints that are now omnipresent on the planet.
Keywords: building information modelling, BIM, BIM management, BIM manager, BIM architect
Procedia PDF Downloads 113
542 Media Coverage on Child Sexual Abuse in Developing Countries
Authors: Hayam Qayyum
Abstract:
Print and broadcast media are considered the most powerful agents of social change and an effective medium that can transform a deteriorating society into a civilized, responsible and composed one. Among the media's major roles, an imperative one is to highlight violations of human rights in order to provide awareness and to protect society from social evils and injustice. By pointing out these wrongs, media can lessen the magnitude of such happenings within society. For centuries, the 'Silent Crime', i.e. Child Sexual Abuse (CSA), has been engulfing developing countries. This study explores how appropriate print and broadcast media coverage can help eliminate child sexual abuse from society. The immense challenge faced by journalists today is accurate and ethical reporting and appropriate coverage: disclosing the facts and delivering the right message at the right time to lessen social evils in developing countries, without harming the dignity of the victim. In cases of CSA, most victims and their families are reluctant to expose their children to the media, owing to family norms and their standing in society. Media should focus on in-depth information about CSA and use this coverage to draw the attention of the concerned authorities, so that they look into the matter for reforms and reviews of the system. Moreover, media as a change agent can bring such issues to the knowledge of the international community, so that collective efforts can be made with the affected country to eliminate this 'Silent Crime' from society. The model country selected for this research is South Africa. The purpose of the research is not only to examine the existing reporting patterns and content of South African print and broadcast media coverage but also to create awareness aimed at eliminating child sexual abuse and, indirectly, improving the condition of stakeholders to overcome this social evil. The literature review method is used to formulate this paper. Trends in media content on CSA will be identified, showing the amount and nature of information made available to the public through the media. A general view of media coverage of child sexual abuse in developing countries such as India and Pakistan will also be given. The research is limited to the role of print and broadcast media coverage in eliminating child sexual abuse in South Africa. In developing countries, the CSA issue needs to be addressed immediately. The study will explore the CSA content of the most influential broadcast and print media outlets of South Africa, with broadcast media comprising TV channels and print media comprising influential newspapers.
Keywords: child sexual abuse, developing countries, print and broadcast media, South Africa
Procedia PDF Downloads 581
541 Dust Particle Removal from Air in a Self-Priming Submerged Venturi Scrubber
Authors: Manisha Bal, Remya Chinnamma Jose, B.C. Meikap
Abstract:
Dust particles suspended in air are a major source of air pollution. A self-priming submerged venturi scrubber, proven very effective in handling nuclear power plant accidents, is an efficient device for removing dust particles from the air and thus aids in pollution control. Venturi scrubbers are compact, have a simple mode of operation and no moving parts, are easy to install and maintain compared to other pollution control devices, and can handle high temperatures as well as corrosive and flammable gases and dust particles. In the present paper, fly ash, recognized as a major air pollutant emitted mostly from thermal power plants, is considered as the dust particle. Exposure through skin contact, inhalation and ingestion can lead to health risks and in severe cases can even cause lung cancer. The main focus of this study is the removal of fly ash particles from polluted air using a self-priming venturi scrubber operated in submerged conditions with water as the scrubbing liquid. The venturi scrubber, comprising three sections (converging section, throat and diverging section), is submerged inside a water tank. The liquid enters the throat due to the pressure difference composed of the hydrostatic pressure of the liquid and the static pressure of the gas. The high-velocity dust-laden gas atomizes the liquid into droplets at the throat, and the interaction between the dust particles and the droplets leads to the absorption of fly ash into the water and thus its removal from the air. A detailed investigation of fly ash scrubbing has been carried out in this work. Experiments were conducted at different throat gas velocities, water levels and fly ash inlet concentrations to study the fly ash removal efficiency. The experimental results show that the highest fly ash removal efficiency, 99.78%, is achieved at a throat gas velocity of 58 m/s and a water level of 0.77 m, with a fly ash inlet concentration of 0.3 x 10⁻³ kg/Nm³ in the submerged condition. The effects of throat gas velocity, water level and fly ash inlet concentration on the removal efficiency have also been evaluated. Furthermore, the experimental removal efficiencies are validated against a developed empirical model.
Keywords: dust particles, fly ash, pollution control, self-priming venturi scrubber
Procedia PDF Downloads 165
540 Micro-Droplet Formation in a Microchannel under the Effect of an Electric Field: Experiment
Authors: Sercan Altundemir, Pinar Eribol, A. Kerem Uguz
Abstract:
Microfluidic systems allow many large-scale laboratory applications to be miniaturized on a single device in order to reduce cost and improve fluid control. Moreover, such systems enable the generation and control of droplets, which play a significant role in improved analysis for many chemical and biological applications; for example, they can be employed as models for cells in microfluidic systems. In this work, the interfacial instability of two immiscible Newtonian liquids flowing in a microchannel is investigated. When two immiscible liquids flow in the laminar regime, a flat interface forms between them. If a direct-current electric field is applied, the interface may deform, i.e. become unstable, rupture and form micro-droplets. First, the effects of the thickness ratio, total flow rate and viscosity ratio of the silicone oil-ethylene glycol liquid couple on the critical voltage at which the interface starts to destabilize are investigated. The droplet sizes are then measured under the effect of these parameters at various voltages. Moreover, the effect of the total flow rate on the time elapsed before the interface ruptures into droplets by hitting the channel wall is analyzed. It is observed that an increase in the viscosity ratio or the thickness ratio of the silicone oil to the ethylene glycol has a stabilizing effect, i.e. a higher voltage is needed, while the total flow rate has no effect on it. However, an increase in the total flow rate shortens the time elapsed before the interface hits the wall. The droplet size decreases, down to 0.1 μL, with an increase in the applied voltage, the viscosity ratio or the total flow rate, or with a decrease in the thickness ratio. In addition to these observations, two empirical models are established: one for the critical electric number (the dimensionless voltage) and one for the droplet size, together with a combined model for determining the droplet size at the critical voltage.
Keywords: droplet formation, electrohydrodynamics, microfluidics, two-phase flow
Procedia PDF Downloads 176
539 Assessing Overall Thermal Conductance Value of Low-Rise Residential Home Exterior Above-Grade Walls Using Infrared Thermography Methods
Authors: Matthew D. Baffa
Abstract:
Infrared thermography is a non-destructive test method used to estimate surface temperatures based on the amount of electromagnetic energy radiated by building envelope components. These surface temperatures are indicators of various qualitative building envelope deficiencies, such as the location and extent of heat loss, thermal bridging, damaged or missing thermal insulation, air leakage, and moisture presence in roof, floor and wall assemblies. Although infrared thermography is commonly used for qualitative deficiency detection in buildings, this study assesses its use as a quantitative method to estimate the overall thermal conductance value (U-value) of the exterior above-grade walls of a study home. The overall U-value of exterior above-grade walls provides useful insight into the energy consumption and thermal comfort of a home. Three methodologies from the literature were employed to estimate the overall U-value by equating conductive heat loss through the exterior above-grade walls to the sum of the convective and radiant heat losses of the walls. Outdoor infrared thermography field measurements of the exterior above-grade wall surface and reflective temperatures, together with emissivity values for various components of the wall assemblies, were carried out during winter months at the study home using a basic thermal imager. The overall U-values estimated by each methodology from the recorded field measurements were compared to the nominal exterior above-grade wall overall U-value calculated from the materials and dimensions detailed in the architectural drawings of the study home. The nominal overall U-value was validated through calendarization and weather normalization of utility bills for the study home, as well as through various estimated heat loss quantities from a HOT2000 computer model of the study home and other methods. Under ideal environmental conditions, the estimated overall U-values deviated from the nominal overall U-value by between ±2% and ±33%. This study suggests that infrared thermography can estimate the overall U-value of exterior above-grade walls in low-rise residential homes with a fair amount of accuracy.
Keywords: emissivity, heat loss, infrared thermography, thermal conductance
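The quantitative relation the three methodologies share — equating conductive loss through the wall to the convective plus radiative losses from the exterior surface — can be sketched as below. The input values are illustrative winter readings, and the convection coefficient and emissivity are assumptions; none of them are the study's measurements.

```python
# Hedged sketch of U-value estimation from outdoor IR thermography:
# U = [h_c*(T_s - T_out) + eps*sigma*(T_s^4 - T_refl^4)] / (T_in - T_out)
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def u_value(t_in, t_out, t_surf, t_refl, emissivity, h_conv):
    """All temperatures in kelvin; h_conv in W/(m^2 K)."""
    q_rad = emissivity * SIGMA * (t_surf**4 - t_refl**4)   # radiative loss
    q_conv = h_conv * (t_surf - t_out)                     # convective loss
    return (q_conv + q_rad) / (t_in - t_out)               # conductance

# Example outdoor-thermography readings (assumed):
print(u_value(
    t_in=294.15,      # 21 C indoor air
    t_out=263.15,     # -10 C outdoor air
    t_surf=263.60,    # exterior wall surface from the thermal imager
    t_refl=263.15,    # reflected apparent temperature
    emissivity=0.90,  # painted cladding (assumption)
    h_conv=15.0,      # wind-dependent convection coefficient (assumption)
))  # ~0.27 W/(m^2 K), a plausible order for an insulated wall
```

Note how sensitive the result is to the surface and reflected temperature readings, which is why the study restricts the comparison to ideal environmental conditions.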
Procedia PDF Downloads 313
538 Ottoman Archaeology in Kostence (Constanta, Romania): A Locality on the Periphery of the Ottoman World
Authors: Margareta Simina Stanc, Aurel Mototolea, Tiberiu Potarniche
Abstract:
The city of Constanta (formerly Köstence) is located in the Dobrogea region, on the western shore of the Black Sea. Between 1420 and 1878, Dobrogea was a possession of the Ottoman Empire. Archaeological research beginning in the second half of the 20th century has revealed various traces of the Ottoman period in this region. Between 2016 and 2018, preventive archaeological research conducted within the perimeter of the old Ottoman city of Köstence led to the discovery of habitation structures as well as numerous artifacts of the Ottoman period (pottery, coins, buckles, etc.). This study analyzes these new discoveries to complete the picture of daily life in the Ottoman period. In 2017, in the peninsular area of Constanta, preventive archaeological research began at a point in the former Ottoman area. Between the present-day ground level and a depth of 1.5 m, Ottoman-period materials appeared consistently. Worth noting is the structure of a large building that had been repaired at least once but could not be fully investigated. Parallel to its wall ran a transversally arranged, brick-lined drainage channel. The channel emptied into a tank (hazna) filled with various period materials, mainly glazed ceramics and iron objects. This type of hazna is commonly found in Constanta for the pre-modern and modern periods, owing to the lack of a sewage system in the peninsular area. A similar structure, probably a fountain, was discovered in 2016 in another part of the old city. Also noteworthy is the discovery in 2016, during underwater research carried out by specialists of the Constanta Museum at a depth of 15 meters, of a Turkish oil lamp (17th to early 18th century) among other objects of a sunken ship. The archaeological pieces, whether fragmentary or intact, found in the 2016-2018 research campaigns are undergoing processing or restoration, so as to extract all the available information and establish exact analogies. These discoveries bring new data to the knowledge of daily life under the Ottoman administration in the former Köstence, a locality on the periphery of the Islamic world.
Keywords: habitation, material culture, Ottoman administration, Ottoman archaeology, periphery
Procedia PDF Downloads 131
537 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng
Abstract:
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin; however, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A free-access database was used, containing 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) or preterm (38 cases). EHG signals from 38 term labors and the 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including the root mean square and zero-crossing number; spectral parameters including peak frequency, mean frequency and median frequency; wavelet packet coefficients; autoregression (AR) model coefficients; and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term, and the sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There were no significant differences between the term and preterm labor groups for the other features. Any future work on classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators for preterm labor.
Keywords: electrohysterogram, feature, preterm labor, term labor
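A minimal sketch of the classical EHG features named above — RMS, zero-crossing count, and peak/mean/median frequency from a Welch power spectrum — is shown below on a synthetic signal standing in for a 0.08-4 Hz band-pass filtered recording. The sampling rate is an assumption; the nonlinear measures (Lyapunov exponent, sample entropy, correlation dimension) need dedicated implementations and are omitted here.

```python
# Hedged sketch of classical EHG feature extraction.
import numpy as np
from scipy.signal import welch

fs = 20.0                                # sampling rate, Hz (assumption)
t = np.arange(0, 600, 1 / fs)            # 10-minute synthetic record
x = (np.sin(2 * np.pi * 0.45 * t)
     + 0.3 * np.random.default_rng(1).normal(size=t.size))

# Time-domain descriptors.
rms = np.sqrt(np.mean(x ** 2))
zero_crossings = np.count_nonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))

# Spectral parameters from a Welch power spectral density estimate.
f, pxx = welch(x, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(pxx)]
mean_freq = np.sum(f * pxx) / np.sum(pxx)
cum = np.cumsum(pxx)
median_freq = f[np.searchsorted(cum, cum[-1] / 2)]   # half-power frequency

print(f"RMS={rms:.3f}, ZC={zero_crossings}, peak={peak_freq:.2f} Hz, "
      f"mean={mean_freq:.2f} Hz, median={median_freq:.2f} Hz")
```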
Procedia PDF Downloads 572
536 Magnetic Resonance Imaging in Cochlear Implant Patients without Magnet Removal: A Safe and Effective Workflow Management Program
Authors: Yunhe Chen, Xinyun Liu, Qian Wang, Jianan Li
Abstract:
Background: Cochlear implants (CIs) are currently the primary effective treatment for severe or profound sensorineural hearing loss. As China's population ages and the number of young children rises, the demand for MRI among CI patients is expected to increase. Methods: We reviewed the MRI cases of 25 CI patients between 2015 and 2024 and assessed imaging and auditory outcomes as well as adverse reactions. Adverse event record sheets and accompanying medication sheets were used to record follow-up measures. Results: CI patients undergoing MRI may face risks such as artifacts, pain, redness, swelling, tissue damage, bleeding, and magnet displacement or demagnetization. Twenty-five CI patients in our hospital were reviewed. Seven patients underwent 3.0 T MRI; the others underwent 1.5 T MRI. By manufacturer, 18 devices were Austrian, 5 were Australian, and 2 were from Nurotron. Among them, one patient with bilateral CIs underwent a 1.5 T MRI examination after head pressure bandaging, and the left magnet was displaced (CI24RE series, Australia). This patient underwent surgical replacement of the magnet under general anesthesia. Six days after the operation, the patient's feedback indicated that the performance of the cochlear implant was consistent with previous results following reactivation of the external device. Based on the experience of our hospital, we propose a feasible management scheme for the MRI examination procedure in CI patients. This plan includes a module for confirming MRI imaging parameters, informed consent, educational materials for patients, and other safety measures to ensure that patients obtain imaging results safely and effectively and to simplify clinical workflow. Conclusion: As indications for both MRI and cochlear implantation expand, the number of MRI studies recommended for patients with cochlear implants will also increase. The process and management scheme proposed in this study can help obtain imaging results safely and effectively and reduce clinical stress.Keywords: cochlear implantation, MRI, magnet, displacement
Procedia PDF Downloads 15535 The Debate over Dutch Universities: An Analysis of Stakeholder Perspectives
Authors: B. Bernabela, P. Bles, A. Bloecker, D. DeRock, M. van Es, M. Gerritse, T. de Jongh, W. Lansing, M. Martinot, J. van de Wetering
Abstract:
A heated debate has been taking place over research and teaching at Dutch universities for the last few years. The ministry of science and education has published reports on its strategy to improve university curricula and position the Netherlands as a globally competitive knowledge economy. These reports have provoked an uproar of responses from think tanks, concerned academics, and the media. At the center of the debate is disagreement over who should determine Dutch university curricula and what these curricula should look like. Many stakeholders in the higher education system have voiced their opinions, while others have not been heard. The result is that the diversity of visions is ignored or taken for granted in the official reports. Recognizing this gap in stakeholder analysis, this paper aims to bring attention to the wide range of perspectives on who should be responsible for designing higher education curricula. Based on a previous analysis by the Rathenau Institute, we distinguish five groups of stakeholders: government, the business sector, university faculty and administration, students, and the societal sector. We conducted semi-structured, in-depth interviews with representatives from each stakeholder group and distributed quantitative questionnaires to people in the societal sector (i.e., people not directly affiliated with universities, nor university graduates). Preliminary data suggest that the stakeholders have different target points concerning the university curricula. Representatives from the governmental sector tend to place special emphasis on the link between research and education, while representatives from the business sector focus rather on greater opportunities for students to obtain practical experience in the job market. Responses from students reflect a belief that they should be able to influence the curriculum in order to compete with other students on the international job market. University faculty, on the other hand, express concern that focusing on the labor market puts undue pressure on students and compromises the quality of education. Interestingly, the opinions of members of ‘society’ seem to be relatively unchanged by political and economic shifts. Following a comprehensive analysis of the data, we believe that our results will make a significant contribution to the debate on university education in the Netherlands. These results should be regarded as a foundation for further research concerning the direction of Dutch higher education, for only if we take into account the different opinions and views of the various stakeholders can we decide which steps to take. Moreover, the Dutch experience offers lessons to other countries as well: as the internationalization of higher education occurs faster than ever before, universities throughout Europe and worldwide are experiencing many of the same pressures.Keywords: Dutch University curriculum, higher education, participants’ opinions, stakeholder perspectives
Procedia PDF Downloads 344534 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery, and greater economic return of future wells in the unconventional oil reserves.Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
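As a sketch of the clustering and dimensionality-reduction steps described above, the following scikit-learn snippet applies standardization, PCA, and K-means to a stand-in well-attribute table; the data and column meanings are hypothetical, not the study's database.

```python
# A minimal sketch of the PCA and K-means steps named in the abstract,
# applied to a hypothetical well-attribute table (one row per well).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for the real database: columns could be porosity, permeability,
# fracture stages, proppant tonnage, 12-month oil (all hypothetical here).
X = rng.normal(size=(1000, 5))

X_std = StandardScaler().fit_transform(X)   # put features on one scale

# PCA to emphasize variation and make the data easy to explore and visualize.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_std)
print("variance explained:", pca.explained_variance_ratio_)

# K-means to partition the wells into groups with similar attributes.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
print("wells per cluster:", np.bincount(labels))
```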
Procedia PDF Downloads 285533 Oral Versus Iontophoresis Nonsteroidal Anti-Inflammatory Drugs in Tennis Elbow
Authors: Moustafa Ali Elwan, Ibrahim Salem Abdelrafa, Ashraf Moharm
Abstract:
Nonsteroidal anti-inflammatory drugs (NSAIDs) are among the most commonly prescribed oral and topical drugs worldwide; they are also responsible for a substantial share of all adverse drug reactions. For several decades, there have been numerous attempts to use the cutaneous layers as a gateway into the body for local delivery of therapeutic agents. Transdermal drug delivery is a validated technology contributing significantly to global pharmaceutical care, and transdermal drug delivery systems can be improved with chemical enhancers or with physical methods such as ultrasound or iontophoresis. Iontophoresis provides a mechanism to enhance the penetration of hydrophilic and charged molecules across the skin. Objective: to compare drug administration by iontophoresis versus the oral route. Methods: This study was conducted at the Faculty of Physical Therapy, Modern University for Technology and Information, Cairo, Egypt, on 20 participants (8 female and 12 male) who complained of tennis elbow. Their mean age was 25.45 ± 3.98 years, and all participants were assessed in several respects: pain threshold was assessed by algometer, range of motion by electrogoniometer, and isometric strength by a portable hand-held dynamometer. Participants were then randomly assigned to two groups: group A was treated with an oral NSAID (diclofenac), while group B received the NSAID (diclofenac) via an iontophoresis device. All participants had blood samples analyzed both before and after drug administration over 24 hours (one sample every 6 hours). Results: There was significantly greater improvement in group B, the iontophoresis NSAID group, than in group A, the oral NSAID group, in all measurements: pain threshold, strength, and range of motion. The iontophoresis method also showed higher maximum plasma concentrations (Cmax) and concentration-time curves than the oral method.Keywords: diclofenac, iontophoresis, NSAIDs, oral, tennis elbow
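A brief illustration of how Cmax and the area under the concentration-time curve can be computed from the 6-hourly samples described above; the concentration values below are hypothetical and are not the study's data.

```python
# A minimal sketch: Cmax and trapezoidal AUC from samples taken every
# 6 hours over 24 hours, as in the study. Values are hypothetical.
import numpy as np

t = np.array([0.0, 6.0, 12.0, 18.0, 24.0])   # hours post-dose
conc = np.array([0.0, 1.8, 1.2, 0.6, 0.3])   # plasma diclofenac, ug/mL (made up)

cmax = conc.max()                            # maximum plasma concentration
tmax = t[conc.argmax()]                      # time of Cmax
# Trapezoidal AUC(0-24 h): sum of interval means times interval widths.
auc = float(np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t)))

print(f"Cmax = {cmax} ug/mL at t = {tmax} h, AUC(0-24) = {auc:.1f} ug*h/mL")
```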
Procedia PDF Downloads 115532 Cerebral Pulsatility Mediates the Link Between Physical Activity and Executive Functions in Older Adults with Cardiovascular Risk Factors: A Longitudinal NIRS Study
Authors: Hanieh Mohammadi, Sarah Fraser, Anil Nigam, Frederic Lesage, Louis Bherer
Abstract:
A chronically elevated cerebral pulsatility is thought to damage the cerebral microcirculation, leading to cognitive decline in older adults. Although regular physical activity is widely known to be linked to improvement in some cognitive domains, including executive functions, the mediating role of cerebral pulsatility in this link remains to be elucidated. This study assessed the impact of 6 months of regular physical activity on changes in an optical index of cerebral pulsatility and the role of physical activity in the improvement of executive functions. 27 older adults (aged 57-79, 66.7% women) with cardiovascular risk factors (CVRF) were enrolled in the study. The participants completed the behavioral Stroop test, drawn from the Delis-Kaplan Executive Function System battery, at baseline (T0) and after 6 months (T6) of physical activity. Near-infrared spectroscopy (NIRS) was applied as an innovative approach to indexing cerebral pulsatility in the brain microcirculation at T0 and T6. The participants were at standing rest while a NIRS device recorded hemodynamic data from frontal and motor cortex subregions at T0 and T6. The cerebral pulsatility index of interest was cerebral pulse amplitude, extracted from the pulsatile component of the NIRS data. Our data indicated that 6 months of physical activity was associated with a reduction in response time for the executive functions, in both the inhibition (T0: 56.33 ± 18.2 to T6: 53.33 ± 15.7, p = 0.038) and switching (T0: 63.05 ± 5.68 to T6: 57.96 ± 7.19, p < 0.001) conditions of the Stroop test. Physical activity was also associated with a reduction in cerebral pulse amplitude (T0: 0.62 ± 0.05 to T6: 0.55 ± 0.08, p < 0.001). Notably, cerebral pulse amplitude was a significant mediator of the link between physical activity and response to the Stroop test for both the inhibition (β = 0.33 (0.61, 0.23), p < 0.05) and switching (β = 0.42 (0.69, 0.11), p < 0.01) conditions. This study suggests that regular physical activity may support cognitive function through improvement of cerebral pulsatility in older adults with CVRF.Keywords: near-infrared spectroscopy, cerebral pulsatility, physical activity, cardiovascular risk factors, executive functions
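One plausible way (an assumption, not the authors' published algorithm) to index pulse amplitude from the pulsatile component of a NIRS signal is to isolate the cardiac band and average the peak-to-trough excursions:

```python
# A minimal sketch of a pulse-amplitude index from the cardiac-band
# component of a NIRS signal; the band and method are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def pulse_amplitude(nirs, fs, band=(0.8, 2.0)):
    # Isolate the cardiac band (~48-120 beats/min); band choice is assumed.
    b, a = butter(3, band, btype="bandpass", fs=fs)
    pulsatile = filtfilt(b, a, nirs)
    # Mean peak-to-trough amplitude across cardiac pulses (naive pairing).
    peaks, _ = find_peaks(pulsatile, distance=int(fs / band[1]))
    troughs, _ = find_peaks(-pulsatile, distance=int(fs / band[1]))
    n = min(len(peaks), len(troughs))
    return float(np.mean(pulsatile[peaks[:n]] - pulsatile[troughs[:n]]))

# Example with a synthetic 1 Hz "cardiac" oscillation sampled at 50 Hz:
fs = 50.0
t = np.arange(0, 30, 1 / fs)
sig = 0.6 * np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)
print(pulse_amplitude(sig, fs))   # ~1.2, the peak-to-trough of the sine
```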
Procedia PDF Downloads 195531 From Text to Data: Sentiment Analysis of Presidential Election Political Forums
Authors: Sergio V Davalos, Alison L. Watkins
Abstract:
User-generated content (UGC) such as a website post has data associated with it: time of the post, gender, location, type of device, and number of words. The text entered in UGC can provide a valuable dimension for analysis. In this research, each user post is treated as a collection of terms (words). In addition to the number of words per post, the frequency of each term is determined per post and as the sum of occurrences across all posts. This research focuses on one specific aspect of UGC: sentiment. Sentiment analysis (SA) was applied to the content (user posts) of two sets of political forums related to the US presidential elections of 2012 and 2016. Sentiment analysis derives data from the text, which enables the subsequent application of data analytic methods. The SASA (SAIL/SAI Sentiment Analyzer) model was used for sentiment analysis; applying SASA produced a sentiment score for each post. Based on the sentiment scores for the posts, there are significant differences between the content and sentiment of the forums for the 2012 and 2016 presidential elections. In the 2012 forums, 38% of the forums started with positive sentiment and 16% with negative sentiment. In the 2016 forums, 29% started with positive sentiment and 15% with negative sentiment. There were also changes in sentiment over time: for both elections, as the election got closer, the cumulative sentiment score became negative. The candidate who won each election appeared in more posts than the losing candidates; in Trump's case, the largest share of posts was negative, whereas Clinton's largest share of posts was positive. KNIME topic modeling was used to derive topics from the posts. There were also changes in topics and keyword emphasis over time: initially, the political parties were the most referenced, and as the election got closer, the emphasis shifted to the candidates. The SASA method proved to predict sentiment better than four other methods in SentiBench. The research thus derived sentiment data from text; in combination with other data, the sentiment data provided insight and discovery about user sentiment in the US presidential elections of 2012 and 2016.Keywords: sentiment analysis, text mining, user generated content, US presidential elections
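To make the bag-of-terms and scoring ideas concrete, a toy sketch follows; it illustrates per-post term counts and lexicon-based sentiment scores, but it is not the SASA implementation, and the tiny lexicon is hypothetical.

```python
# A minimal bag-of-terms and lexicon-scoring sketch. This shows the general
# idea of per-post term frequencies and sentiment scores; it is not the
# SASA model, and the toy lexicon below is hypothetical.
from collections import Counter
import re

LEXICON = {"win": 1, "great": 1, "support": 1,     # toy positive terms
           "lose": -1, "bad": -1, "corrupt": -1}   # toy negative terms

def terms(post):
    # Treat each post as a collection of lowercase word terms.
    return re.findall(r"[a-z']+", post.lower())

def sentiment(post):
    # Sum of lexicon weights over the post's terms.
    return sum(LEXICON.get(w, 0) for w in terms(post))

posts = ["Great debate, a clear win!", "Bad night, they will lose."]
term_freq = Counter(w for p in posts for w in terms(p))   # corpus frequencies
print(term_freq.most_common(3))
print([sentiment(p) for p in posts])   # [2, -2]
```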
Procedia PDF Downloads 192530 Wearable System for Prolonged Cooling and Dehumidifying of PPE in Hot Environments
Abstract:
While personal protective equipment (PPE) protects healthcare personnel from exposure to harmful surroundings, it creates a barrier to the dissipation of body heat and perspiration, leading to severe heat stress during prolonged exposure, especially in hot environments. It has been found that most existing personal cooling strategies have limitations in achieving effective cooling performance with long duration and light weight. This work aimed to develop a lightweight (<1.0 kg) and less expensive wearable air cooling and dehumidifying system (WCDS) that can be applied underneath protective clothing and provide 50 W of mean cooling power for more than 5 hours at an environmental temperature of 35°C without compromising the protection of the PPE. In the WCDS, blowers will be used to activate an internal air circulation inside the clothing microclimate, which does not interfere with the protection of the PPE. An air cooling and dehumidifying chamber (ACMR) with a specific design will be developed to reduce the air temperature and humidity inside the protective clothing. The cooled and dried air will then be supplied to the upper chest and back areas through a branching tubing system for personal cooling. A detachable ice cooling unit will be applied from the outside of the PPE to extract heat from the clothing microclimate. This combination allows convenient replacement of the cooling unit to refresh the cooling effect, providing a continuous cooling function without taking off the PPE or adding too much weight. A preliminary thermal manikin test showed that the WCDS reduced the microclimate temperature inside the PPE by about 8°C on average for 60 minutes at environmental temperatures of 28.0°C and 33.5°C. Replacing the ice cooling unit every hour can maintain this cooling effect, while the longest operating duration is determined by the battery of the blowers, which lasts about 6 hours. This design is especially helpful for PPE users, such as healthcare workers in infectious and hot environments, when continuous cooling and dehumidifying are needed but changing the protective clothing may increase the risk of infection. The new WCDS will not only improve the thermal comfort of PPE users but can also extend their safe working duration.Keywords: personal thermal management, heat stress, ppe, health care workers, wearable device
Procedia PDF Downloads 80529 On Cloud Computing: A Review of the Features
Authors: Assem Abdel Hamed Mousa
Abstract:
The Internet of Things probably already influences your life; and if it doesn't, it soon will, say computer scientists. Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by many people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives. Alan Kay of Apple calls this "Third Paradigm" computing. Ubiquitous computing is essentially the term for human interaction with computers in virtually everything. Ubiquitous computing is roughly the opposite of virtual reality: where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people. Virtual reality is primarily a horsepower problem; ubiquitous computing is a very difficult integration of human factors, computer science, engineering, and social sciences. The approach: activate the world. Provide hundreds of wireless computing devices per person per office, at all scales (from 1-inch displays to wall-sized). This has required new work in operating systems, user interfaces, networks, wireless, displays, and many other areas. We call this work "ubiquitous computing". It is different from PDAs, dynabooks, or information at your fingertips: it is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere. The initial incarnation of ubiquitous computing was in the form of "tabs", "pads", and "boards" built at Xerox PARC between 1988 and 1994. Several papers describe this work, and there are web pages for the tabs and for the boards (which are now a commercial product). Ubiquitous computing will drastically reduce the cost of digital devices and tasks for the average consumer. With labor-intensive components such as processors and hard drives stored in the remote data centers powering the cloud, and with pooled resources giving individual consumers the benefits of economies of scale, consumers will pay monthly fees similar to a cable bill for services that feed into their phones.Keywords: internet, cloud computing, ubiquitous computing, big data
Procedia PDF Downloads 384528 Postmortem Magnetic Resonance Imaging as an Objective Method for the Differential Diagnosis of a Stillborn and a Neonatal Death
Authors: Uliana N. Tumanova, Sergey M. Voevodin, Veronica A. Sinitsyna, Alexandr I. Shchegolev
Abstract:
An important part of forensic and autopsy research in perinatology is answering the question of live birth versus stillbirth. Postmortem magnetic resonance imaging (MRI) is an objective, non-invasive research method that allows data to be stored for a long time and makes it unnecessary to exhume the body to clarify a diagnosis. The purpose of this research is to study the ability of postmortem MRI to distinguish a stillborn from a newborn who breathed spontaneously and died on the first day after birth. MRI and morphological data were compared from a study of the bodies of 23 stillborns, prenatally dead at a gestational age of 22-39 weeks (Group I), and the bodies of 16 newborns who died from 2 to 24 hours after birth (Group II). Before autopsy, postmortem MRI was performed on a Siemens Magnetom Verio 3T device with the body in the supine position. The control group for the MRI studies consisted of 7 live newborns without lung disease (Group III). On T2WI in the sagittal projection, the MR signal intensity (SI) was measured in the lung tissue (L) and shoulder muscle (M). During autopsy, a pulmonary flotation (swimming) test was evaluated, and macro- and microscopic studies were performed. According to the postmortem MRI, the highest mean SI values of the lung (430 ± 27.99) and of the muscle (405.5 ± 38.62) on T2WI were detected in Group I and exceeded the corresponding values of Group II by a factor of 2.7. The lowest values were found in the control group: 77.9 ± 12.34 and 119.7 ± 6.3, respectively. In Group II, the lung SI was 1.6 times higher than the muscle SI, whereas in Group I and in the control group, the muscle SI was 2.1 and 1.8 times larger than the lung SI, respectively. On the basis of the clinical and morphological data, we derived a formula for a breathing index (BI) from postmortem MRI: BI = SI_L × SI_M / 100. The mean BI value in Group I (1801.14 ± 241.6; range 756 to 3744) was significantly higher than the corresponding mean BI in Group II (455.89 ± 137.32, p < 0.05; range 305 to 638.4). In the control group, the mean BI value was 91.75 ± 13.3 (range 53 to 154). The BI values were compared with the results of the pulmonary flotation tests and the microscopic examination of the lungs. The boundary value of BI for the differential diagnosis of stillbirth versus newborn death was 700. Using postmortem MRI thus makes it possible to differentiate a stillborn from a newborn who breathed.Keywords: lung, newborn, postmortem MRI, stillborn
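The reported index and cutoff translate directly into a small calculation; the sketch below applies BI = SI_L × SI_M / 100 and the boundary value of 700 to the mean signal intensities quoted in the abstract.

```python
# The breathing index defined above, BI = SI_L * SI_M / 100, with the
# diagnostic cutoff of 700 reported in the abstract.
def breathing_index(si_lung, si_muscle):
    return si_lung * si_muscle / 100

def classify(bi, cutoff=700):
    # BI above the cutoff indicates a stillborn; below, a newborn who breathed.
    return "stillborn" if bi > cutoff else "newborn (breathed)"

# Mean signal intensities quoted above: Group I (stillborn) and controls.
for label, si_l, si_m in [("Group I", 430.0, 405.5),
                          ("controls", 77.9, 119.7)]:
    bi = breathing_index(si_l, si_m)
    print(f"{label}: BI = {bi:.1f} -> {classify(bi)}")
    # Group I: BI = 1743.7 -> stillborn; controls: BI = 93.2 -> newborn,
    # consistent with the group means reported in the abstract.
```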
Procedia PDF Downloads 128