Search results for: spatial features

897 Evaluation of Rheological Properties, Anisotropic Shrinkage, and Heterogeneous Densification of Ceramic Materials during Liquid Phase Sintering by Numerical-Experimental Procedure

Authors: Hamed Yaghoubi, Esmaeil Salahi, Fateme Taati

Abstract:

The effective shear and bulk viscosity, as well as the dynamic viscosity, describe the rheological properties of a ceramic body during the liquid phase sintering process. The rheological parameters depend on the physical and thermomechanical characteristics of the material, such as relative density, temperature, grain size, diffusion coefficient and activation energy. The main goal of this research is to acquire a comprehensive understanding of the response of an incompressible viscous ceramic material during the liquid phase sintering process, such as stress-strain relations, sintering and hydrostatic stress, and the prediction of anisotropic shrinkage and heterogeneous densification as a function of sintering time, including the simultaneous influence of the gravity field and frictional force. After raw materials analysis, a standard hard porcelain mixture was designed and prepared as the ceramic body. Three different experimental configurations were designed: midpoint deflection, sinter bending, and free sintering samples. The numerical method for the ceramic specimens during the liquid phase sintering process is implemented in the CREEP user subroutine code in ABAQUS. The numerical-experimental procedure shows the anisotropic behavior, the complete difference in spatial displacement along the three directions, and the incompressibility of the ceramic samples during the sintering process. An anisotropic shrinkage factor has been proposed to investigate the shrinkage anisotropy. It has been shown that the shrinkage along the normal axis of the cast sample is about 1.5 times larger than that along the casting direction, and that the gravitational force in pyroplastic deformation intensifies the shrinkage anisotropy more than in the free sintering sample. The lowest and greatest equivalent creep strains occur at the intermediate zone and around the central line of the midpoint distorted sample, respectively. In the sinter bending test sample, the equivalent creep strain approaches its maximum near the contact area with the refractory support. The inhomogeneity in von Mises, pressure, and principal stresses intensifies the relative density non-uniformity in all samples, except the free sintering one. The symmetrical distribution of stress around the center of the free sintering sample hinders pyroplastic deformation. Densification results confirmed that the effective bulk viscosity was well defined by the relative density values. The stress analysis confirmed that the sintering stress is larger than the hydrostatic stress from the start to the end of the sintering time, so, from both a theoretical and an experimental point of view, the sintering process proceeds to completion.
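
As a simple illustration of how a shrinkage anisotropy of this kind can be quantified, the sketch below computes per-axis linear shrinkages and their ratio; the variable names, dimensions and the ratio-based definition of the anisotropic shrinkage factor are assumptions for illustration, not the authors' exact formulation.

```python
# Minimal sketch: quantifying anisotropic shrinkage from sample dimensions
# measured before and after sintering. The linear-shrinkage definition per
# axis and the ratio used as an "anisotropic shrinkage factor" are
# illustrative assumptions.

def linear_shrinkage(initial: float, final: float) -> float:
    """Relative linear shrinkage along one axis, e.g. 0.12 for 12 %."""
    return (initial - final) / initial

# Hypothetical dimensions (mm) of a slip-cast bar along the casting
# direction (x), the transverse direction (y) and the normal axis (z).
initial = {"x": 100.0, "y": 50.0, "z": 10.0}
final   = {"x":  92.0, "y": 46.0, "z":  8.8}

shrinkage = {axis: linear_shrinkage(initial[axis], final[axis]) for axis in initial}

# Anisotropic shrinkage factor: shrinkage along the normal axis relative to
# the casting direction (the paper reports a ratio of about 1.5).
k_aniso = shrinkage["z"] / shrinkage["x"]

print({axis: round(s, 3) for axis, s in shrinkage.items()})
print(f"anisotropic shrinkage factor (z/x): {k_aniso:.2f}")
```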

Keywords: anisotropic shrinkage, ceramic material, liquid phase sintering process, rheological properties, numerical-experimental procedure

Procedia PDF Downloads 335
896 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities, e.g. electricity and telephony, was floated, the scientific community has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
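
A minimal sketch of the kind of ANN classifier such a model builds on is given below, using a multilayer perceptron on synthetic traffic features; the feature set, network size and data are placeholders, not the configuration evaluated in this study.

```python
# Minimal sketch of an ANN-based intrusion detector: a multilayer perceptron
# trained to label connection records as normal (0) or attack (1). The
# synthetic features and network architecture are illustrative placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Hypothetical traffic features: duration, bytes sent, bytes received,
# failed logins, connection rate. Attacks are simulated with shifted statistics.
n = 2000
normal = rng.normal(loc=[1.0, 500, 800, 0.1, 5], scale=[0.5, 200, 300, 0.3, 2], size=(n, 5))
attack = rng.normal(loc=[0.2, 50, 4000, 2.0, 40], scale=[0.2, 30, 1500, 1.0, 10], size=(n, 5))
X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu", max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print(classification_report(y_test, clf.predict(scaler.transform(X_test)),
                            target_names=["normal", "attack"]))
```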

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 309
895 The Influence of Partial Replacement of Hydrated Lime by Pozzolans on Properties of Lime Mortars

Authors: Przemyslaw Brzyski, Stanislaw Fic

Abstract:

Hydrated lime has a positive environmental impact because of its life cycle (it returns to its natural form as a result of setting and hardening). The lime binder is used in mortars. Lime is a slow-setting binder with low mechanical properties. The aim of the study was to evaluate the possibility of improving the properties of the lime binder by using different pozzolanic materials as a partial replacement of the hydrated lime binder. Pozzolanic materials are natural materials or industrial wastes, so they do not worsen the environmental impact of the lime binder. The following laboratory tests were performed: analysis of the physical characteristics of the tested lime mortar samples (bulk density, porosity), flexural and compressive strength, water absorption and capillary rise of the samples, and consistency of the fresh mortars. Metakaolin, silica fume, and zeolite were used as partial replacements of hydrated lime (in amounts of 10%, 20% and 30% by weight of lime). The shortest setting and hardening time was shown by the mortars with the addition of metakaolin. All additives noticeably improved the strength characteristics of the lime mortars. With the increase in the amount of additive, an increase in strength was also observed. The highest flexural strength was obtained using the addition of metakaolin in an amount of 20% by weight of lime (2.08 MPa). The highest compressive strength was also obtained using metakaolin, but in an amount of 30% by weight of lime (9.43 MPa). The addition of pozzolan increased the tightness of the mortar, which contributed to limiting its absorbability. Due to their different surface areas, the pozzolanic additives affected the consistency of the fresh mortars. The initial consistency was assumed as plastic. Only the addition of silica fume in amounts of 20% and 30% by weight of lime changed the consistency to thick-plastic. The conducted study demonstrated the possibility of producing lime mortars with satisfactory properties. The features of these lime mortars do not differ significantly from the properties of cement-based mortars, and they show a lower environmental impact due to CO₂ absorption during lime hardening. Taking into consideration the setting time, strength and consistency, the best results can be obtained with the addition of metakaolin to the lime mortar.

Keywords: lime, binder, mortar, pozzolan, properties

Procedia PDF Downloads 188
894 Chinese Acupuncture: A Potential Treatment for Autism Rat Model via Improving Synaptic Function

Authors: Sijie Chen, Xiaofang Chen, Juan Wang, Yingying Zhang, Yu Hong, Wanyu Zhuang, Xinxin Huang, Ping Ou, Longsheng Huang

Abstract:

Purpose: Improvement of autistic symptoms can be observed in children treated with acupuncture, but the mechanism is still being explored. In the present study, we used scalp acupuncture to treat an autism rat model, and the improvement in abnormal behaviors and the specific mechanisms behind it were revealed by assessing animal behaviors, analyzing RNA sequencing of the prefrontal cortex (PFC), and observing the ultrastructure of PFC neurons under the transmission electron microscope. Methods: On gestational day 12.5, Wistar rats were given valproic acid (VPA) by intraperitoneal injection, and their offspring were considered reliable rat models of autism. They were randomized to the VPA or VPA-acupuncture group (n=8). Offspring of Wistar pregnant rats that were simultaneously injected with saline were randomly selected as the wild-type group (WT). VPA-acupuncture group rats received the acupuncture intervention at 23 days of age for 4 weeks, and the other two groups received no intervention. After the intervention, all experimental rats underwent behavioral tests. Immediately afterward, they were euthanized by cervical dislocation, and their prefrontal cortex was isolated for RNA sequencing and transmission electron microscopy. Results: The main results are as follows: 1. Animal behavioral tests: VPA group rats showed more anxiety-like behavior and repetitive, stereotyped behavior than WT group rats, while showing less spatial exploration ability, activity, social interaction, and social novelty preference. It was gratifying to observe that acupuncture indeed improved these abnormal behaviors of the autism rat model. 2. RNA sequencing: The three groups of rats differed in the expression and enrichment pathways of multiple genes related to synaptic function, neural signal transduction, and circadian rhythm regulation. Our experiments indicated that acupuncture can alleviate the major symptoms of ASD by improving these neurological abnormalities. 3. Under the transmission electron microscope, several lysosomal and mitochondrial structural abnormalities were observed in the prefrontal neurons of VPA group rats, manifested as atrophy of the mitochondrial membrane, blurring or disappearance of the mitochondrial cristae, and even vacuolization. Moreover, the number of synapses and synaptic vesicles was relatively small. Conversely, the mitochondrial structure of rats in the WT and VPA-acupuncture groups was normal, and the number of synapses and synaptic vesicles was relatively large. Conclusion: Acupuncture effectively improved the abnormal behaviors of the autism rat model and the ultrastructure of the PFC neurons, which might work by improving their abnormal synaptic function and synaptic plasticity and promoting neuronal signal transduction.

Keywords: autism spectrum disorder, acupuncture, animal behavior, RNA sequencing, transmission electron microscope

Procedia PDF Downloads 33
893 Literature Review of Rare Synchronous Tumours

Authors: Diwei Lin, Amanda Tan, Rajinder Singh-Rai

Abstract:

We present the first reported case of a concomitant Leydig cell tumor (LCT) and paratesticular leiomyoma in an adult male with a known history of bilateral cryptorchidism. An 80-year-old male presented with a 2-month history of a left testicular lump associated with mild discomfort and a gradual increase in size, on a background of bilateral cryptorchidism requiring multiple orchidopexy procedures as a child. Ultrasound confirmed a lesion suspicious for malignancy, and he proceeded to a left radical orchidectomy. Histopathological assessment of the left testis revealed a concomitant testicular LCT with malignant features and a paratesticular leiomyoma. Leydig cell tumors are the most common pure testicular sex cord-stromal tumors, accounting for up to 3% of all testicular tumors. They can occur at almost any age but have a bimodal distribution, with peak incidences at 6 to 10 and at 20 to 50 years of age. LCTs are often hormonally active and can lead to feminizing or virilizing syndromes. LCTs are usually regarded as benign but can rarely exhibit malignant traits. Paratesticular tumours are uncommon, and their reported prevalence varies between 3% and 16%. They occur in a complex anatomical area which includes the contents of the spermatic cord, testicular tunics, epididymis and vestigial remnants. Up to 90% of paratesticular tumours are believed to originate from the spermatic cord, though it is often difficult to definitively ascertain the exact site of origin. Although any type of soft-tissue neoplasm can be found in the paratesticular region, the most common benign tumors reported are lipomas of the spermatic cord, adenomatoid tumours of the epididymis and leiomyomas of the testis. Genetic studies have identified mutations that could potentially cause LCTs, but there are no known associations between concomitant LCTs and paratesticular tumors. The presence of cryptorchidism in adults with either LCTs or paratesticular neoplasms individually has been reported previously, and it appears intuitive that cryptorchidism is likely to be associated with the concomitant presentation in this case report. This report represents the first documented case in the literature of a unilateral concomitant LCT and paratesticular leiomyoma on a background of bilateral cryptorchidism.

Keywords: testicular cancer, leydig cell tumour, leiomyoma, paratesticular neoplasms

Procedia PDF Downloads 356
892 Urban Growth and Its Impact on Natural Environment: A Geospatial Analysis of North Part of the UAE

Authors: Mohamed Bualhamam

Abstract:

Due to the complex nature of the tourism resources of the northern part of the United Arab Emirates (UAE), the potential of Geographical Information Systems (GIS) and Remote Sensing (RS) was used to resolve the related planning issues. The study was an attempt to use existing GIS data layers to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth, and to give specific recommendations to protect the area. By identifying sensitive natural environment and archaeological heritage resources, public agencies and citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas. The paper concludes that the application of GIS and RS to the study of the impact of urban growth on tourism resources is a strong and effective tool that can aid in tourism planning and decision-making. The study area is one of the fastest growing regions in the country. The increase in population across the region, as well as the rapid growth of towns, has increased the threat to natural resources and archaeological sites. Satellite remote sensing data have proven useful in assessing natural resources and in monitoring changes. The study used GIS and RS to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth. The results of the GIS analyses show that the northern part of the UAE has a variety of tourism resources which can be used for future tourism development. Rapid urban development, in the form of small towns and different economic activities, is appearing in different places in the study area. This urban development has extended out of the old towns and has negatively affected sensitive tourism resources in some areas. The tourism resources of the northern part of the UAE are highly complex, and thus require tools that aid effective decision making to come to terms with the competing economic, social, and environmental demands of sustainable development. The UAE government should prepare tourism databases and a GIS system so that planners can access archaeological heritage information as part of development planning processes. Applications of GIS in urban, tourism and recreation planning illustrate that GIS is a strong and effective tool that can aid in tourism planning and decision-making. The power of GIS lies not only in the ability to visualize spatial relationships, but also beyond the space to a holistic view of the world with its many interconnected components and complex relationships. The worst of the damage could be avoided by recognizing suitable limits and adhering to simple environmental guidelines and standards that allow tourism to develop in a sustainable manner.
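
As a rough illustration of the overlay logic such an analysis relies on, the sketch below flags sensitive sites lying close to the urban footprint; the file names, layer contents and the 1 km threshold are assumptions for illustration only.

```python
# Minimal sketch of the overlay step used to flag sensitive sites threatened by
# urban growth: buffer the urban footprint and intersect it with the
# sensitive-resource layer. File names, layers and the buffer distance are
# hypothetical.
import geopandas as gpd

urban = gpd.read_file("urban_growth_2020.shp")         # built-up areas from RS classification
sensitive = gpd.read_file("sensitive_resources.shp")   # natural and archaeological sites

# Work in a projected CRS so the buffer distance is in metres
# (UTM zone 40N covers the northern UAE).
urban = urban.to_crs(epsg=32640)
sensitive = sensitive.to_crs(epsg=32640)

# Sites falling within 1 km of the urban footprint are flagged as threatened.
threat_zone = gpd.GeoDataFrame(geometry=urban.buffer(1000), crs=urban.crs)
threatened = gpd.sjoin(sensitive, threat_zone, how="inner", predicate="intersects")

print(f"{len(threatened)} of {len(sensitive)} sensitive sites lie within 1 km of urban growth")
```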

Keywords: GIS, natural environment, UAE, urban growth

Procedia PDF Downloads 253
891 Rapid Detection of the Etiology of Infection as Bacterial or Viral Using Infrared Spectroscopy of White Blood Cells

Authors: Uraib Sharaha, Guy Beck, Joseph Kapelushnik, Adam H. Agbaria, Itshak Lapidot, Shaul Mordechai, Ahmad Salman, Mahmoud Huleihel

Abstract:

Infectious diseases have placed a significant burden on public health and on the economic stability of societies all over the world for centuries. Reliable detection of the causative agent of an infection is not possible based on clinical features alone, since many of these infections have similar symptoms, including fever, sneezing, inflammation, vomiting, diarrhea, and fatigue. Moreover, physicians usually encounter difficulties in distinguishing between viral and bacterial infections based on symptoms. Therefore, there is an ongoing need for sensitive, specific, and rapid methods for identifying the etiology of the infection. This intricate issue perplexes doctors and researchers because it has serious repercussions. In this study, we evaluated the potential of mid-infrared spectroscopy for rapid and reliable identification of bacterial and viral infections based on simple peripheral blood samples. Fourier transform infrared (FTIR) spectroscopy is considered a successful diagnostic method in the biological and medical fields. Many studies have confirmed the great potential of combining FTIR spectroscopy and machine learning as a powerful diagnostic tool in medicine, since it is a very sensitive method which can detect and monitor the molecular and biochemical changes in biological samples. We believe that this method can play a major role in improving the health situation, raising the level of health in the community, and reducing the economic burden on the health sector resulting from the indiscriminate use of antibiotics. We collected peripheral blood samples from 364 young patients, of whom 93 were controls, 126 had bacterial infections, and 145 had viral infections, all aged under 18 years and limited to those who were diagnosed with a fever-producing illness. Our preliminary results showed that it is possible to determine the infectious agent with high success rates of 82% for sensitivity and 80% for specificity, based on the WBC data.
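
A minimal sketch of the classification step described above is given below: synthetic stand-ins for WBC FTIR spectra are reduced with PCA and classified as bacterial or viral, and sensitivity and specificity are reported; the data and the PCA-plus-SVM choice are assumptions, not the study's exact pipeline.

```python
# Minimal sketch: classify FTIR absorbance spectra of white blood cells as
# bacterial (1) or viral (0) infection and report sensitivity/specificity.
# The synthetic spectra and the PCA + linear SVM pipeline are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Hypothetical spectra: 600 samples x 900 wavenumber points, with a small
# class-dependent spectral shift mimicking biochemical differences.
n, p = 600, 900
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, p)) + 0.15 * np.outer(y, np.sin(np.linspace(0, 6, p)))

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
model.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, model.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```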

Keywords: infectious diseases, Fourier transform infrared (FTIR) spectroscopy, viral infections, bacterial infections

Procedia PDF Downloads 126
890 Implementation of Quality Function Development to Incorporate Customer’s Value in the Conceptual Design Stage of Construction Projects

Authors: Ayedh Alqahtani

Abstract:

Many construction firms in Saudi Arabia dedicated to building projects agree that the most important factor in the real estate market is the value that they can give to their customer. These firms understand the value to their client in different ways. Value can be defined as the size of the building project in relation to its cost, the design quality of the materials used in finish work, or any other features of the building's rooms, such as the bathroom. Value can also be understood as something worth the money the client is investing in the new property. A quality tool is required to support companies in achieving a solution for the building project and in understanding and managing the customer's needs. The Quality Function Development (QFD) method can play this role, since the main difference between QFD and other conventional quality management tools is that QFD is a valuable and very flexible design tool that takes the voice of the customer (VOC) into account. Currently, organizations and agencies are seeking suitable models that deal better with uncertainty and that are flexible and easy to use. The primary aim of this research project is to incorporate the customer's requirements in the conceptual design of construction projects. Towards this goal, QFD is selected due to its capability to integrate the design requirements to meet the customer's needs. To develop the QFD, this research focuses upon the contribution of the different (significantly weighted) input factors that represent the main variables influencing QFD and the subsequent analysis of the techniques used to measure them. First of all, this research will review the literature to determine the current practice of QFD in construction projects. Then, the researcher will review the literature to define the current customers of residential projects and gather information on customers' requirements for the design of residential buildings. After that, qualitative survey research will be conducted to rank customers' needs and provide the views of stakeholder practitioners about how these needs can affect their satisfaction. Moreover, a qualitative focus group with the members of the design team will be conducted to determine the improvement levels and technical details for the design of residential buildings. Finally, the QFD will be developed to establish the degree of significance of the design solutions.
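
As an illustration of the core QFD calculation that such a development ultimately produces, the sketch below weights hypothetical customer requirements against a house-of-quality relationship matrix to rank design characteristics; the requirements, weights and 9-3-1 scores are invented for illustration.

```python
# Minimal sketch of the core QFD calculation: absolute and relative importance
# of design (technical) characteristics obtained by weighting customer
# requirements against the relationship matrix of the house of quality.
# All requirements, weights and relationship scores are hypothetical.
import numpy as np

customer_needs = ["spacious rooms", "quality finishes", "low running cost", "privacy"]
weights = np.array([5, 4, 3, 4])                      # importance ratings from the survey

design_chars = ["floor area", "finish materials", "insulation", "layout"]
# Rows: customer needs, columns: design characteristics
# (9 = strong, 3 = moderate, 1 = weak, 0 = no relationship).
relationship = np.array([
    [9, 0, 0, 3],
    [1, 9, 3, 0],
    [0, 3, 9, 1],
    [3, 0, 0, 9],
])

absolute = weights @ relationship                      # technical importance scores
relative = 100 * absolute / absolute.sum()

for char, a, r in zip(design_chars, absolute, relative):
    print(f"{char:18s} absolute = {a:3d}  relative = {r:5.1f} %")
```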

Keywords: quality function development, construction projects, Saudi Arabia, quality tools

Procedia PDF Downloads 113
889 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks

Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos

Abstract:

Optical communications technology has made enormous and steady progress for several decades, providing the key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article, we explore basic issues of security and reliability for information transfer through the fiber infrastructure. Conspicuously, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today's commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since its initial development, fiber optics has had the advantage over copper-based and wireless telecommunications solutions for most of these requirements. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With the recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically. With so many advantages, including cost efficiency, fiber optic systems will continue to replace copper-based communications. This will also lead to an increase in the expertise and the technology needed by intruders to tap into fiber optic networks. As with all technologies, fiber optics has been subject to hacking and criminal manipulation. Research into fiber optic security vulnerabilities suggests that not everyone who is responsible for network security is aware of the different methods that intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cables stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information such as medical records, banking information, driving records, and credit card information, being aware of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that fiber optics is expensive, impractical and hard to tap. Others argue that it is not only easily done, but also inexpensive. This paper will briefly discuss the history of fiber optics, explain the basics of fiber optic technologies, and then discuss the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a lot of embarrassment, time and, most importantly, money.

Keywords: in-house networks, fiber optics, security risk, money

Procedia PDF Downloads 413
888 Phosphate Use Efficiency in Plants: A GWAS Approach to Identify the Pathways Involved

Authors: Azizah M. Nahari, Peter Doerner

Abstract:

Phosphate (Pi) is one of the essential macronutrients in plant growth and development, and it plays a central role in metabolic processes in plants, particularly photosynthesis and respiration. Limitation of crop productivity by Pi is widespread and is likely to increase in the future. Applications of Pi fertilizers have improved soil Pi fertility and crop production; however, they have also caused environmental damage. Therefore, in order to reduce dependence on unsustainable Pi fertilizers, a better understanding of phosphate use efficiency (PUE) is required for engineering nutrient-efficient crop plants. Enhanced Pi efficiency can be achieved by improved productivity per unit of Pi taken up. We aim to identify, by using association mapping, general features of the most important loci that contribute to increased PUE, to allow us to delineate the physiological pathways involved in defining this trait in the model plant Arabidopsis. As PUE is in part determined by the efficiency of uptake, we designed a hydroponic system to avoid confounding effects due to differences in root system architecture leading to differences in Pi uptake. In this system, 18 parental lines and 217 lines of the MAGIC population (a Multiparent Advanced Generation Inter-Cross) were grown under high and low Pi availability. The results revealed a large variation in PUE among the parental lines, indicating that the MAGIC population is well suited to identifying PUE loci and pathways. Two of the 18 parental lines had the highest PUE at low Pi, while some lines responded strongly and increased PUE with increased Pi. Examination of the 217 MAGIC lines revealed considerable variance in PUE. A general feature was the trend of most lines to exhibit higher PUE when grown under low Pi conditions. Association mapping is currently in progress, but initial observations indicate that a wide variety of physiological processes influence PUE in Arabidopsis. The combination of hydroponic growth methods and genome-wide association mapping is a powerful tool to identify the physiological pathways underpinning complex quantitative traits in plants.
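
A minimal sketch of the two analysis steps described above is given below: PUE computed as productivity per unit of Pi taken up, followed by a naive single-marker association scan across the MAGIC lines; the data, marker coding and the simple regression test are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch: (i) phosphate use efficiency as biomass per unit Pi taken up
# and (ii) a naive single-marker association scan over the MAGIC lines.
# Phenotypes, genotypes and the linear-model test are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_lines, n_markers = 217, 500

# Hypothetical phenotypes per MAGIC line grown at low Pi.
biomass = rng.normal(2.0, 0.4, n_lines)          # g dry weight
pi_uptake = rng.normal(1.5, 0.3, n_lines)        # mg P per plant
pue = biomass / pi_uptake                        # g dry weight per mg P

# Hypothetical biallelic markers (0/1/2 copies of the alternative allele);
# marker 42 is simulated to have a real effect on PUE.
genotypes = rng.integers(0, 3, size=(n_lines, n_markers))
pue = pue + 0.15 * genotypes[:, 42]

# Marker-by-marker association: p-value of the slope in a simple regression.
pvals = np.array([stats.linregress(genotypes[:, j], pue).pvalue for j in range(n_markers)])

top = np.argsort(pvals)[:5]
print("top associated markers:", top, "p =", np.round(pvals[top], 4))
```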

Keywords: hydroponic system growth, phosphate use efficiency (PUE), genome-wide association mapping, MAGIC population

Procedia PDF Downloads 316
887 Teaching Business Process Management using IBM’s INNOV8 BPM Simulation Game

Authors: Hossam Ali-Hassan, Michael Bliemel

Abstract:

This poster reflects upon our experiences using INNOV8, IBM's Business Process Management (BPM) simulation game, in online MBA and undergraduate MIS classes over a period of two years. The game is designed to give both business and information technology players a better understanding of how effective BPM impacts an entire business ecosystem. The game includes three different scenarios: Smarter Traffic, which is used to evaluate existing traffic patterns and re-route traffic based on incoming metrics; Smarter Customer Service, where players develop more efficient ways to respond to customers in a call centre environment; and Smarter Supply Chains, where players balance supply and demand and reduce environmental impact in a traditional supply chain model. We use the game as an experiential learning tool, where students act as managers making real-time changes to business processes to meet changing business demands and environments. The students learn how information technology (IT) and information systems (IS) can be used to intelligently solve different problems, and how computer simulations can be used to test different scenarios or models based on business decisions without having to make potentially costly and/or disruptive changes to business processes. Moreover, when students play the three different scenarios, they quickly see how practical process improvements can help meet profitability, customer satisfaction and environmental goals while addressing real problems faced by municipalities and businesses today. After spending approximately two hours in the game, students reflect on their experience and apply several BPM principles that were presented in their textbook, guided by a structured set of assignment questions. For each final scenario, students submit a screenshot of their solution followed by one paragraph explaining what criteria they were trying to optimize and why they picked their input variables. In this poster, we outline the course and the module's learning objectives to place the use of the game into context. We illustrate key features of the INNOV8 simulation game and describe how we used them to reinforce theoretical concepts. The poster also illustrates examples from the simulation, the assignment, and the learning outcomes.

Keywords: experiential learning, business process management, BPM, INNOV8, simulation, game

Procedia PDF Downloads 324
886 Coastal Modelling Studies for Jumeirah First Beach Stabilization

Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal

Abstract:

Jumeirah First beach, a segment of coastline of length 1.5 km, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2 and La Mer. A comprehensive stabilization scheme comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour and beach re-nourishment was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at the Jumeirah First beach. The objective of the coastal modelling studies is to establish a design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize the Jumeirah First beach. Wave transformation was performed following an interpolation approach with wave transformation matrices derived from simulations of a possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at 4 offshore locations with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived with the application of the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated with wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated with nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of the Jumeirah First beach. A typical one-year wave time series was transformed to 7 locations in front of the beach to account for the variation of wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated with the application of LitDrift by finding the beach orientations with zero annual littoral transport at the 7 selected locations. The littoral transport calculation results were compared with beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographic surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
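
As an illustration of the null-transport criterion used for the equilibrium orientations, the sketch below finds the shore-normal azimuth at which the annual net littoral transport vanishes; the simplified CERC-type transport dependence and the wave climate are assumptions for illustration, whereas the study itself used LitDrift with a full annual wave series.

```python
# Minimal sketch of the equilibrium-orientation criterion: find the shoreline
# orientation for which annual net littoral transport is zero. A simplified
# CERC-type dependence Q ~ H^2.5 * sin(2*alpha) is used purely for illustration.
import numpy as np
from scipy.optimize import brentq

# Hypothetical annual wave climate at one nearshore point:
# significant wave height (m), direction waves come FROM (deg N),
# and the fraction of the year each condition occurs.
waves = [
    (1.2, 330.0, 0.35),
    (0.8, 300.0, 0.40),
    (1.6,  10.0, 0.25),
]

def annual_transport(shore_normal_deg: float) -> float:
    """Relative net annual longshore transport for a given shore-normal azimuth."""
    q = 0.0
    for h, wave_dir, freq in waves:
        alpha = np.radians(wave_dir - shore_normal_deg)   # wave angle to shore normal
        q += freq * h**2.5 * np.sin(2 * alpha)
    return q

# Equilibrium orientation: root of the net-transport curve within a bracket.
eq_normal = brentq(annual_transport, 300.0, 360.0)
print(f"equilibrium shore-normal azimuth ~ {eq_normal:.1f} deg N")
```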

Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix

Procedia PDF Downloads 252
885 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy which overlooks the Adriatic Sea: the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) using GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To identify the best geostatistical technique for interpolation, cross-validation results were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1°C between 1961-1990 and 1971-2000 and of 0.6°C between 1961-1990 and 1981-2010. Annual precipitation shows the opposite trend, with an average difference of about 35 mm from 1961-1990 to 1971-2000 and of about 60 mm from 1961-1990 to 1981-2010. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08°C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The temperature differences from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83°C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. Therefore, it can be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother but more intense in its growth for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (first period 1961/1990-1971/2000 and second period 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
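
A minimal sketch of the raster comparison step is given below: one interpolated temperature surface is subtracted from another and the distribution of the differences is summarised with the same statistics reported above; the synthetic grids stand in for the co-kriging outputs.

```python
# Minimal sketch of the raster comparison: subtract the interpolated
# temperature surface of one climatological normal from another and summarise
# the difference distribution (mean, skewness, kurtosis, area above a
# threshold). The synthetic grids are placeholders for the co-kriging outputs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
shape = (400, 400)                                  # hypothetical grid over the province

t_1961_1990 = 13.0 + rng.normal(0, 1.0, shape)      # mean annual temperature (deg C)
t_1981_2010 = t_1961_1990 + 0.6 + rng.normal(0, 0.2, shape)

diff = t_1981_2010 - t_1961_1990                    # raster subtraction

cell_area_km2 = 0.01                                # assumed 100 m x 100 m cells
warming_area = np.count_nonzero(diff > 0.5) * cell_area_km2

print(f"mean change     : {diff.mean():.2f} deg C")
print(f"skewness        : {stats.skew(diff, axis=None):.2f}")
print(f"kurtosis        : {stats.kurtosis(diff, axis=None):.2f}")
print(f"area > 0.5 deg C: {warming_area:.0f} km^2")
```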

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 117
884 Ichthyofauna and Population Status at Indus River Downstream, Sindh-Pakistan

Authors: M. K. Sheikh, Y. M. Laghari., P. K. Lashari., N. T. Narejo

Abstract:

The Indus River is one of the longest and most important rivers in Asia; it flows southward through Pakistan, merges into the Arabian Sea near the port city of Karachi in Sindh Province, and forms the Indus Delta. Fish are an important resource for humans worldwide, especially as food. Fish contain healthy nutrients that are not found in any other meat source, including large quantities of omega-3 fatty acids, which are essential for the human body. Ichthyological surveys were conducted to explore the diversity, distribution, abundance and current status of freshwater fishes at different spatial scales of the downstream Indus River. Eight stations were selected, namely Railo Miyan (RM), Karokho (Kk), Khanpur (Kp), Mullakatiyar (Mk), Wasi Malook Shah (WMS), Branch Morie (BM), Sujawal (Sj) and Jangseer (JS). The study was carried out in the period from January 2016 to December 2019 to identify threats to the river and its biodiversity and to suggest recommendations for conservation. The data were analysed using different population diversity indices. Altogether, 124 species were recorded, belonging to 12 orders and 43 families, from the downstream Indus River. Among the 124 species, 29% were of high commercial value and 35% were trash fishes; 31% were identified as of marine/estuarine origin (migratory) and 5% were exotic species. Perciformes was the most dominant order, contributing 41% of the families. Among the 43 families, Cyprinidae was the richest family in all localities of the downstream reach, represented by 24% of the fish species and demonstrating a significant dominance in the number of species. A significant difference in species abundance was observed between the sites: the maximum number of species was found at the first location, RM, with 115 species, and the minimum at the last station, JS, with 56. According to International Union for Conservation of Nature (IUCN) status, the recorded ichthyofauna fell into seven groups: the largest share was Least Concern (LC) with 94 species, 11 species were Not Evaluated (NE), 8 were Near Threatened (NT), 1 was Critically Endangered (CR), 11 were Data Deficient (DD), 8 were Vulnerable (VU) and 3 were Endangered (EN). Diversity indices have been used extensively in environmental studies to estimate species richness and abundance as outputs of ecosystem wellness; the biodiversity-rich environment at RM had an environmental wellness and biodiversity level of 4.566, while the biodiversity-poor environment at the last station, JS, had a level of 3.931. The status of the fish biodiversity and of the river was found to be under serious threat. Due to the lower diversity of fishes, the situation has become not only vulnerable for fish but also risky for fishermen. Necessary steps are recommended to protect the biodiversity by conducting further conservation research in this area.
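
As an illustration of the station-wise diversity calculation used to compare sites, the sketch below computes a Shannon-Wiener index from abundance counts; the counts are hypothetical, while the paper reports index values of 4.566 at RM and 3.931 at JS.

```python
# Minimal sketch of a station-wise diversity calculation: Shannon-Wiener index
# H' = -sum(p_i * ln p_i) computed from per-species abundance counts.
# The example counts are hypothetical, not the survey data.
import math

def shannon_index(counts):
    """Shannon-Wiener diversity from a list of per-species abundance counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

stations = {
    "RM": [30, 25, 18, 12, 10, 8, 6, 5, 4, 2],   # hypothetical catch per species
    "JS": [60, 30, 5, 3, 2],
}

for name, counts in stations.items():
    print(f"{name}: {len(counts)} species, H' = {shannon_index(counts):.3f}")
```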

Keywords: ichthyofaunal biodiversity, threatened species, diversity index, Indus River downstream

Procedia PDF Downloads 169
883 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law that can describe the interactions between those components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
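
A minimal sketch of likelihood-free inference of the ABC-rejection kind is given below for a simple birth-death CTMC simulated with Gillespie's algorithm; the model, prior, summary statistic and tolerance are illustrative assumptions, whereas the paper analyses the repressilator and also applies PMCMC.

```python
# Minimal sketch of ABC rejection for a CTMC: a birth-death process is
# simulated with Gillespie's algorithm, and parameters are accepted when the
# simulated summary statistic is close to the observed one, replacing the
# intractable likelihood. Model, prior and tolerance are illustrative.
import numpy as np

rng = np.random.default_rng(4)

def gillespie_birth_death(birth, death, x0=10, t_end=10.0):
    """Simulate a birth-death CTMC and return the state at t_end."""
    t, x = 0.0, x0
    while t < t_end:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # waiting time to next event
        if t >= t_end:
            break
        x += 1 if rng.random() < rates[0] / total else -1
    return x

# "Observed" data generated with true rates birth = 5, death = 0.5.
observed = np.array([gillespie_birth_death(5.0, 0.5) for _ in range(10)])

# ABC rejection: sample from the prior, simulate, keep parameters whose
# summary statistic (mean final copy number) is within the tolerance.
accepted = []
for _ in range(2000):
    birth = rng.uniform(0.0, 10.0)                 # prior on the birth rate
    sim = np.array([gillespie_birth_death(birth, 0.5) for _ in range(10)])
    if abs(sim.mean() - observed.mean()) < 1.0:    # tolerance epsilon
        accepted.append(birth)

print(f"accepted {len(accepted)} samples; posterior mean birth rate ~ {np.mean(accepted):.2f}")
```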

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 197
882 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform

Authors: Khadija Refouh

Abstract:

Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems for many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain some serious challenges for NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It runs cats and dogs.” as “يا له من طقس سيء! تمطر القطط والكلاب” in the target language Arabic, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy score (BLEU). BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
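
As an illustration of the automatic evaluation step, the sketch below computes sentence-level BLEU for two candidate French translations of an idiomatic sentence against a human reference using NLTK; the example sentences are assumptions, and the study itself compares models with corpus-level BLEU plus human evaluation.

```python
# Minimal sketch of the automatic evaluation step: sentence-level BLEU for a
# literal and an idiomatic candidate translation against a human reference.
# The tokenised example sentences are illustrative only.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["quel", "mauvais", "temps", "!", "il", "pleut", "des", "cordes", "."]
literal   = ["quel", "mauvais", "temps", "!", "il", "pleut", "des", "chats", "et", "des", "chiens", "."]
idiomatic = ["quel", "mauvais", "temps", "!", "il", "pleut", "des", "cordes", "."]

smooth = SmoothingFunction().method1  # avoids zero scores when an n-gram order has no matches
print("literal   BLEU:", round(sentence_bleu([reference], literal, smoothing_function=smooth), 3))
print("idiomatic BLEU:", round(sentence_bleu([reference], idiomatic, smoothing_function=smooth), 3))
```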

Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms

Procedia PDF Downloads 138
881 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to the multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created based on the patient history data in the 2-year window prior to the index MCVP. A temporal image was created based on these variables for each individual patient. To generate the explanation for the DNN model, we defined a new concept called the impact score, based on the impact of the presence/value of clinical conditions on the predicted outcome. Like the (log) odds ratio reported by a logistic regression (LR) model, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The prediction of the DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
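
A minimal sketch of the kind of comparison described above is given below: a per-feature impact score obtained by toggling a binary clinical condition in the network input, correlated with logistic-regression coefficients via Spearman's rho; the toggle-based impact definition and the synthetic data are assumptions, not the paper's exact formulation.

```python
# Minimal sketch: a per-feature "impact score" from a DNN, defined here as the
# mean change in predicted risk when a binary condition is toggled on vs off,
# compared with logistic-regression log odds via Spearman correlation.
# The impact-score definition and the synthetic data are illustrative.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n, p = 5000, 10

X = rng.integers(0, 2, size=(n, p)).astype(float)       # binary clinical conditions
true_beta = rng.normal(0, 1, p)
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_beta - 1.0)))).astype(int)

dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

# Impact score for feature j: mean change in predicted risk when condition j
# is switched from absent to present for every patient.
impact = np.zeros(p)
for j in range(p):
    x_on, x_off = X.copy(), X.copy()
    x_on[:, j], x_off[:, j] = 1.0, 0.0
    impact[j] = (dnn.predict_proba(x_on)[:, 1] - dnn.predict_proba(x_off)[:, 1]).mean()

rho, _ = spearmanr(impact, lr.coef_.ravel())             # compare with LR log odds
print(f"Spearman rho between impact scores and LR log odds: {rho:.2f}")
```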

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 148
880 Regional Flood Frequency Analysis in Narmada Basin: A Case Study

Authors: Ankit Shah, R. K. Shrivastava

Abstract:

Floods and droughts are the two main hydrological extremes that affect human life. Floods are natural disasters which cause millions of rupees' worth of damage each year in India and across the world. Floods cause destruction of life and property. An accurate estimate of the flood damage potential is a key element of an effective, nationwide flood damage abatement program. Also, the increase in demand for water due to growth in population, industry and agriculture has made clear that, though water is a renewable resource, it cannot be taken for granted. We have to optimize the use of water according to circumstances and conditions and need to harness it, which can be done by the construction of hydraulic structures. For the safe and proper functioning of hydraulic structures, we need to predict the flood magnitude and its impact. Hydraulic structures play a key role in harnessing and optimizing flood water, which in turn results in the safe and maximal use of the water available. Hydraulic structures are mainly constructed at ungauged sites. There are two methods by which we can estimate floods, viz. the generation of unit hydrographs and flood frequency analysis. In this study, regional flood frequency analysis has been employed. There are many methods for regional flood frequency analysis, viz. the Index Flood Method, National Environmental Research Council (NERC) methods, the Multiple Regression Method, etc. However, none of these methods can be considered universal for every situation and location. The Narmada basin is located in central India. It is drained by many tributaries, most of which are ungauged. Therefore, it is very difficult to estimate floods on these tributaries and in the main river. Artificial Neural Networks (ANNs) and the Multiple Regression Method are used here for the determination of regional flood frequency. The annual peak flood data of 20 gauging sites in the Narmada basin are used in the present study to determine the regional flood relationships. The homogeneity of the considered sites is determined using the Index Flood Method. The flood relationships obtained by the two methods are compared with each other, and it is found that the ANN is more reliable than the Multiple Regression Method for the present study area.
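
As an illustration of the comparison performed, the sketch below fits a multilayer perceptron and a multiple linear regression to synthetic catchment descriptors predicting a flood quantile; the descriptors, the regional relation and the sample size are assumptions standing in for the 20 Narmada gauging sites.

```python
# Minimal sketch: ANN (multilayer perceptron) versus multiple linear regression
# for predicting a flood quantile from catchment characteristics. Descriptors
# and the regional relation are synthetic stand-ins for the Narmada sites.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 200                                                   # synthetic "catchments"

area = rng.uniform(50, 5000, n)                           # km^2
rainfall = rng.uniform(800, 1600, n)                      # mean annual rainfall, mm
slope = rng.uniform(0.001, 0.05, n)                       # main channel slope
X = np.column_stack([area, rainfall, slope])

# Hypothetical regional relation with noise: Q100 grows with area and rainfall.
q100 = 0.05 * area**0.8 * (rainfall / 1000) ** 1.2 * (1 + rng.normal(0, 0.15, n))

X_train, X_test, y_train, y_test = train_test_split(X, q100, random_state=0)
scaler = StandardScaler().fit(X_train)

mlr = LinearRegression().fit(X_train, y_train)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
ann.fit(scaler.transform(X_train), y_train)

print(f"multiple regression R2 = {r2_score(y_test, mlr.predict(X_test)):.2f}")
print(f"ANN (MLP)           R2 = {r2_score(y_test, ann.predict(scaler.transform(X_test))):.2f}")
```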

Keywords: artificial neural network, index flood method, multi layer perceptrons, multiple regression, Narmada basin, regional flood frequency

Procedia PDF Downloads 413
879 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process. In addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measures (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested. The Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
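
A minimal sketch of the simplest benchmarked technique (Binary Relevance, i.e. one binary classifier per outcome) together with the evaluation measures named above is given below, run on a synthetic multi-label dataset rather than the actual PEO collection.

```python
# Minimal sketch of a Binary Relevance baseline (one binary classifier per ABET
# outcome) evaluated with exact-match accuracy, Hamming loss, micro-F1 and
# macro-F1. The synthetic data stands in for TF-IDF vectors of the PEO texts.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in: 50 features with 11 labels, roughly matching the number
# of ABET student outcomes.
X, Y = make_multilabel_classification(n_samples=1000, n_features=50, n_classes=11,
                                      n_labels=3, random_state=0)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=0)

br = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_train, Y_train)
Y_pred = br.predict(X_test)

print(f"exact-match accuracy: {accuracy_score(Y_test, Y_pred):.3f}")
print(f"Hamming loss        : {hamming_loss(Y_test, Y_pred):.3f}")
print(f"Micro-F1            : {f1_score(Y_test, Y_pred, average='micro'):.3f}")
print(f"Macro-F1            : {f1_score(Y_test, Y_pred, average='macro'):.3f}")
```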

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-class classification, text mining

Procedia PDF Downloads 163
878 Integrating Multiple Types of Value in Natural Capital Accounting Systems: Environmental Value Functions

Authors: Pirta Palola, Richard Bailey, Lisa Wedding

Abstract:

Societies and economies worldwide fundamentally depend on natural capital. Alarmingly, natural capital assets are quickly depreciating, posing an existential challenge for humanity. The development of robust natural capital accounting systems is essential for transitioning towards sustainable economic systems and ensuring sound management of capital assets. However, the accurate, equitable and comprehensive estimation of natural capital asset stocks and their accounting values still faces multiple challenges. In particular, the representation of socio-cultural values held by groups or communities has arguably been limited, as to date, the valuation of natural capital assets has primarily been based on monetary valuation methods and assumptions of individual rationality. People relate to and value the natural environment in multiple ways, and no single valuation method can provide a sufficiently comprehensive image of the range of values associated with the environment. Indeed, calls have been made to improve the representation of multiple types of value (instrumental, intrinsic, and relational) and diverse ontological and epistemological perspectives in environmental valuation. This study addresses this need by establishing a novel valuation framework, Environmental Value Functions (EVF), that allows for the integration of multiple types of value in natural capital accounting systems. The EVF framework is based on the estimation and application of value functions, each of which describes the relationship between the value and quantity (or quality) of an ecosystem component of interest. In this framework, values are estimated in terms of change relative to the current level instead of calculating absolute values. Furthermore, EVF was developed to also support non-marginalist conceptualizations of value: it is likely that some environmental values cannot be conceptualized in terms of marginal changes. For example, ecological resilience value may, in some cases, be best understood as a binary: it either exists (1) or is lost (0). In such cases, a logistic value function may be used as the discriminator. Uncertainty in the value function parameterization can be considered through, for example, Monte Carlo sampling analysis. The use of EVF is illustrated with two conceptual examples. For the first time, EVF offers a clear framework and concrete methodology for the representation of multiple types of value in natural capital accounting systems, simultaneously enabling 1) the complementary use and integration of multiple valuation methods (monetary and non-monetary); 2) the synthesis of information from diverse knowledge systems; 3) the recognition of value incommensurability; 4) marginalist and non-marginalist value analysis. Furthermore, with this advancement, the coupling of EVF and ecosystem modeling can offer novel insights to the study of spatial-temporal dynamics in natural capital asset values. For example, value time series can be produced, allowing for the prediction and analysis of volatility, long-term trends, and temporal trade-offs. This approach can provide essential information to help guide the transition to a sustainable economy.
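
As a minimal illustration of the value-function idea and the Monte Carlo treatment of uncertainty described above, the sketch below defines a hypothetical logistic value function for an ecosystem component and propagates parameter uncertainty by sampling; the functional form, parameter ranges and the coral-cover example are assumptions, not values from the paper.

```python
# Minimal sketch of an Environmental Value Function: a logistic value function
# for an ecosystem component, with parameter uncertainty propagated by
# Monte Carlo sampling. Functional form and parameter ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(7)

def logistic_value(q, q50, steepness):
    """Value (0-1) of an ecosystem component at quantity/quality q."""
    return 1.0 / (1.0 + np.exp(-steepness * (q - q50)))

# Uncertain parameters: threshold (q50) and steepness drawn from plausible ranges.
n_draws = 10_000
q50 = rng.normal(0.30, 0.05, n_draws)          # e.g. critical coral cover fraction
steepness = rng.uniform(10, 30, n_draws)

# Change in value when cover declines from a current 0.35 to 0.25.
delta = logistic_value(0.25, q50, steepness) - logistic_value(0.35, q50, steepness)

print(f"expected change in value : {delta.mean():+.3f}")
print(f"90 % interval            : [{np.quantile(delta, 0.05):+.3f}, {np.quantile(delta, 0.95):+.3f}]")
```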

Keywords: economics of biodiversity, environmental valuation, natural capital, value function

Procedia PDF Downloads 189
877 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis

Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger

Abstract:

In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors to greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g. climate change, global warming) worldwide. Achieving a modal shift to foster environmentally friendly modes of transport, such as rail and inland waterways, is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. We define which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimates for the analysis will be derived from desk research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high-throughput connections that could act as efficient extensions of the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement to current transport networks. The idea of the hyperloop is that people and freight are shipped in a tube at more than airline speed. Another innovative technology is drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes. Drones can therefore be considered high-performance transport modes as well. The Trans-European Transport Networks programme (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T programme. The results of the multidimensional impact analysis will reveal potential future effects of integrating high-performance modes into transport networks. Building on that, a recommendation can be given on the following (research) steps necessary to ensure the most efficient implementation and integration processes.

Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis

Procedia PDF Downloads 325
876 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material

Authors: Rossella Resi

Abstract:

This paper describes a project carried out in an Italian school with pupils learning English, combining three didactic tools that are recognized as relevant to the success of a young learner's language curriculum: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a technological, hands-on translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation and cross-cultural skills among language learners. The activities proposed involved the use of the professional subtitling software Aegisub and culture-specific films. The workshop was optional, so motivation was entirely based on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real-life work situation might involve. Twelve pupils aged between 16 and 18 attended the afternoon workshop. The workshop was organized in three parts: (i) an introduction in which the learners were introduced to the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation. During this session, learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities carried out in plenum or in groups. In the first activity, the learners experienced the technical dimension of subtitling: they were provided with a short video segment together with its transcription, to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded a translation task based on a provided transcription, including segmentation and spotting of the subtitles. (iii) The workshop ended with a small final project. At this point, learners were able to complete a short subtitling assignment (transcription, translation, segmentation and spotting) on their own with a similar video interview. The results of these assignments exceeded expectations, since the learners were highly motivated by the authentic and original nature of the task. The subtitled videos were evaluated and watched in the regular classroom together with other students who did not take part in the workshop.
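As an illustration of the spotting and segmentation constraints mentioned above, the following minimal sketch splits a transcribed line into subtitle lines and assigns SRT-style timecodes. The character limit and reading speed are common rules of thumb, not the exact rules used in the workshop, and the output format differs from Aegisub's native .ass files.

```python
# Minimal sketch of "spotting and segmentation": split a transcript line into
# subtitle lines that respect a maximum line length, and time the subtitle
# according to an assumed comfortable reading speed.

MAX_CHARS_PER_LINE = 42
CHARS_PER_SECOND = 15  # assumed reading speed

def to_timestamp(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def spot(text: str, start: float) -> str:
    """Segment a transcript line and assign SRT-style timecodes."""
    words, lines, current = text.split(), [], ""
    for w in words:
        if len(current) + len(w) + 1 > MAX_CHARS_PER_LINE:
            lines.append(current)
            current = w
        else:
            current = f"{current} {w}".strip()
    lines.append(current)
    duration = max(1.0, len(text) / CHARS_PER_SECOND)
    return f"1\n{to_timestamp(start)} --> {to_timestamp(start + duration)}\n" + "\n".join(lines)

print(spot("This is a short interview excerpt that the pupils transcribed and translated.", start=12.0))
```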

Keywords: ICT, L2, language learning, language mediation, subtitling

Procedia PDF Downloads 408
875 Heteroatom Doped Binary Metal Oxide Modified Carbon as a Bifunctional Electrocatalyst for All Vanadium Redox Flow Battery

Authors: Anteneh Wodaje Bayeh, Daniel Manaye Kabtamu, Chen-Hao Wang

Abstract:

As one of the most promising electrochemical energy storage systems, vanadium redox flow batteries (VRFBs) have received increasing attention owing to their attractive features for large-scale storage applications. However, their high production cost and relatively low energy efficiency still limit their feasibility. For practical implementation, it is of great interest to improve their efficiency and reduce their cost. One of the key components of VRFBs that can greatly influence the efficiency and final cost is the electrode, which provides the reaction sites for the redox couples (VO²⁺/VO₂⁺ and V²⁺/V³⁺). Carbon-based materials are considered the most feasible electrode materials for the VRFB because of their excellent potential in terms of operating range, good permeability, large surface area, and reasonable cost. However, owing to limited electrochemical activity and reversibility, and poor wettability due to their hydrophobic nature, the performance of cells employing carbon-based electrodes has remained limited. To address these challenges, we synthesized a heteroatom-doped bimetallic oxide grown on the carbon surface through a one-step approach. When applied to VRFBs, the prepared electrode exhibits a significant electrocatalytic effect toward the VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox reactions compared with pristine carbon. It is found that the presence of heteroatoms on the metal oxide promotes the adsorption of vanadium ions. The controlled morphology of the bimetallic oxide also exposes more active sites for the redox reactions of vanadium ions. Hence, the prepared electrode displays the best electrochemical performance, with energy and voltage efficiencies of 74.8% and 78.9%, respectively, much higher than the 59.8% and 63.2% obtained with pristine carbon at high current density. Moreover, the electrode exhibits durability and stability in an acidic electrolyte during long-term operation for 1000 cycles at the higher current density.
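Assuming the standard flow-battery relation between energy, coulombic, and voltage efficiency (not stated explicitly in the abstract), the reported figures imply that the improvement comes almost entirely from voltage efficiency, since the implied coulombic efficiency is nearly unchanged:

```latex
% Assumed relation (standard VRFB definition, not stated in the abstract): EE = CE * VE
\[
  \mathrm{CE}_{\mathrm{modified}} = \frac{\mathrm{EE}}{\mathrm{VE}} = \frac{0.748}{0.789} \approx 0.948,
  \qquad
  \mathrm{CE}_{\mathrm{pristine}} = \frac{0.598}{0.632} \approx 0.946
\]
```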

Keywords: VRFB, VO²⁺/VO₂⁺ and V³⁺/V²⁺ redox couples, graphite felt, heteroatom-doping

Procedia PDF Downloads 89
874 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power

Authors: Ben Main, Piotr Ozieranski

Abstract:

State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind a state-institutional response to this problem through the sociological lens of the strategic relational approach to state power. To do so, 30 expert interviews and legal and policy documents are drawn on to explain how state elites in New Zealand have successfully contested a 30-year “pharmaceutical spending containment policy”. Proceeding from Jessop's notion of strategic “selectivity”, encompassing analyses of the enabling features of state actors' ability to harness state structures, a theoretical explanation is advanced. First, a strategic context is described that consists of dynamics around pharmaceutical dealmaking between the state bureaucracy, pharmaceutical pricing strategies (and their effects), and the industry. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an “incentivization game” that is played by state and industry actors, including HTA professionals, over pharmaceutical products (both current and in development). Second, a protective context is described that comprises successive legislative-judicial responses to the strategic context and is characterized by the regulation and societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical “mix”) alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites' approach is shown to be antipathetic to neo-liberals within an overall capitalist economy. The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abrahams's neo-liberal corporate bias model for pharmaceutical policy analysis is problematised.

Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory

Procedia PDF Downloads 65
873 Towards Learning Query Expansion

Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier

Abstract:

The steady growth in the size of textual document collections is a key driver of progress for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a more complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising direction is the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of both. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
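As an illustration of the selection step described above, the following minimal sketch keeps or discards candidate expansion terms drawn from term association rules using a trained SVM classifier. The rules, features, and labels are illustrative placeholders; in the approach described, the training set is produced by a genetic algorithm guided by MAP.

```python
from sklearn.svm import SVC

# Candidate expansion terms come from a generic basis of term association rules;
# a classifier decides which candidates to keep. All data below is made up.

# (antecedent query term, candidate expansion term, confidence, support)
association_rules = [
    ("bank", "finance", 0.82, 0.10),
    ("bank", "river",   0.35, 0.02),
    ("finance", "stock", 0.74, 0.08),
]

def features(rule):
    _, _, confidence, support = rule
    return [confidence, support]

# Hypothetical labels: 1 = adding the term improved MAP, 0 = it did not.
X_train = [features(r) for r in association_rules]
y_train = [1, 0, 1]

clf = SVC(kernel="rbf").fit(X_train, y_train)

query = ["bank"]
candidates = [r for r in association_rules if r[0] in query]
expansion = [r[1] for r in candidates if clf.predict([features(r)])[0] == 1]
print("expanded query:", query + expansion)
```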

Keywords: supervised learning, classification, query expansion, association rules

Procedia PDF Downloads 316
872 Colour and Travel: Design of an Innovative Infrastructure for Travel Applications with Entertaining and Playful Features

Authors: Avrokomi Zavitsanou, Spiros Papadopoulos, Theofanis Alexandridis

Abstract:

This paper presents the research project ‘Colour & Travel’, which is co-funded by the European Union and national resources through the Operational Programme “Competitiveness, Entrepreneurship and Innovation” 2014-2020, under the Single RTDI State Aid Action "RESEARCH - CREATE - INNOVATE". The research project proposes the design of an innovative, playful framework for exploring a variety of travel destinations and creating personalised travel narratives, aiming to entertain, educate, and promote culture and tourism. Gamification of the cultural and touristic environment can enhance its experiential, multi-sensory aspects and broaden the perception of the traveler. The traveler's involvement in creating and shaping personal travel narratives, and the possibility of sharing them with others, can offer an alternative, more engaging way of getting acquainted with a place. In particular, the paper presents the design of an infrastructure: (a) for the development of interactive travel guides for mobile devices, where sites with specific points of interest will be recommended, with which the user can interact in playful ways and then create personal travel narratives; (b) for the development of innovative games within a virtual reality environment, where interaction will be offered while the user is moving through the virtual environment; and (c) for an online application where the content will be offered through the browser using modern 3D imaging technologies (WebGL). The technological products developed within the proposed project can strengthen important sectors of economic and social life, such as trade, tourism, exploitation and promotion of the cultural environment, and the creative industries. The final applications delivered at the end of the project will guarantee an improved level of service for visitors and will be a useful tool for content creators, with increased adaptability, expandability, and applicability in many regions of Greece and abroad. This paper aims to present the research project with reference to the state of the art and the methodological scheme, ending with a brief reflection on the expected outcomes.

Keywords: gamification, culture, tourism, AR, VR, applications

Procedia PDF Downloads 138
871 Discriminating Between Energy Drinks and Sports Drinks Based on Their Chemical Properties Using Chemometric Methods

Authors: Robert Cazar, Nathaly Maza

Abstract:

Energy drinks and sports drinks are quite popular among young adults and teenagers worldwide. Some concerns regarding their health effects, particularly those of energy drinks, have been raised based on scientific findings. Differentiating between these two types of drinks by means of their chemical properties is therefore an instructive task, and chemometrics provides the most appropriate strategy to do so. In this study, a discrimination analysis of energy and sports drinks has been carried out applying chemometric methods. A set of eleven samples of commercially available brands of drinks (seven energy drinks and four sports drinks) was collected. Each sample was characterized by eight chemical variables (carbohydrates, energy, sugar, sodium, pH, degrees Brix, density, and citric acid). The data set was standardized and examined by exploratory chemometric techniques such as clustering and principal component analysis. As a preliminary step, variable selection was carried out by inspecting the variable correlation matrix. Some variables were found to be redundant and could be safely removed, leaving only five variables that are sufficient for this analysis: sugar, sodium, pH, density, and citric acid. Then, hierarchical clustering employing the average-linkage criterion and the Euclidean distance metric was performed. It perfectly separates the two types of drinks, since the resulting dendrogram, cut at the 25% similarity level, sorts the samples into two well-defined groups, one containing the energy drinks and the other the sports drinks. Further assurance of the complete discrimination is provided by the principal component analysis. The projection of the data set onto the first two principal components, which retain 71% of the information in the data, permits visualization of the distribution of the samples in the two groups identified in the clustering stage. Since the first principal component is the discriminating one, inspection of its loadings allows these groups to be characterized. The energy drinks group possesses medium to high values of density, citric acid, and sugar. The sports drinks group, on the other hand, exhibits low values of those variables. In conclusion, the application of chemometric methods to a data set that features chemical properties of a number of energy and sports drinks provides an accurate, dependable way to discriminate between these two types of beverages.
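As an illustration of the workflow described above (standardization, average-linkage hierarchical clustering with Euclidean distances, then PCA), the following minimal sketch uses made-up measurements for the five retained variables; the study itself used eleven real samples.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder measurements for the five retained variables
# (sugar, sodium, pH, density, citric acid); values are illustrative only.
X = np.array([
    [11.0, 105.0, 3.3, 1.045, 0.40],   # energy-drink-like sample
    [10.5, 100.0, 3.4, 1.043, 0.38],
    [ 6.0,  45.0, 3.0, 1.025, 0.10],   # sports-drink-like sample
    [ 5.8,  48.0, 3.1, 1.024, 0.12],
])

X_std = StandardScaler().fit_transform(X)

# Hierarchical clustering: average linkage, Euclidean metric
Z = linkage(X_std, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignment:", labels)

# PCA: project onto the first two principal components and inspect PC1 loadings
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("PC1 loadings:", pca.components_[0])
```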

Keywords: chemometrics, clustering, energy drinks, principal component analysis, sports drinks

Procedia PDF Downloads 100
870 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)

Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo

Abstract:

Cephalopods are considered model organisms with a rich behavioral repertoire. Sophisticated behaviors have been widely studied and described in different species such as Octopus vulgaris, which has evolved the largest and most complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. These animals are equipped with sophisticated eyes able to discriminate colors even with a single photoreceptor type, a vestibular system, a ‘lateral line analogue’, a primitive ‘hearing’ system, and olfactory organs. They can recognize chemical cues either through direct contact with odor sources using their suckers or at a distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules with the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in its suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical versus visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify whether the food preference was maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we alternately tested octopuses with food in sealed transparent screw-jars and food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify a random choice among food types. Results and discussion: During the acclimatization period, O. vulgaris showed a higher preference for anchovies (60%), followed by clams (30%) and mussels (10%). After acclimatization, using the transparent and pierced screw-jars, the octopuses' food choices were split 50-50 between anchovies and clams, with mussels avoided. Later, guided by the visual sense alone (transparent but unpierced jars), their food preference was 100% anchovies. With pierced but opaque jars, anchovies were the first food choice (100%), with clams as the second choice (33.3%). With no possibility of selecting food either by vision or by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both chemical and visual senses in an integrative way in food choice, but when one of them is excluded, it appears clear that its food preference relies on the chemical sense more than on visual perception.

Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense

Procedia PDF Downloads 209
869 The Impact of WhatsApp Groups as Supportive Technology in Teaching

Authors: Pinn Tsin Isabel Yee

Abstract:

With the advent of internet technologies, students are increasingly turning toward social media and cross-platform messaging apps such as WhatsApp, Line, and WeChat to support their teaching and learning processes. Although each messaging app has varying features, WhatsApp remains one of the most popular cross-platform apps, allowing fast, simple, secure messaging and free calls anytime, anywhere. With such a plethora of advantages, students can easily assimilate WhatsApp as a supportive technology in their learning process. Peer-to-peer learning is possible, and a teacher is able to share knowledge digitally via the creation of WhatsApp groups. Content analysis techniques were utilized to analyze data collected with closed-ended question forms. The study demonstrated that 98.8% of college students (n=80) from the Monash University foundation year agreed that the use of WhatsApp groups was helpful as a learning tool. Approximately 71.3% disagreed that notifications and alerts from the WhatsApp group were a disruption to their studies. Students commented that they could silence the notifications so that they would not disturb their flow of thought. In fact, an overwhelming majority of students (95.0%) found it enjoyable to participate in WhatsApp groups for educational purposes. It was a common perception that some students felt pressured to post a reply in such groups, but data analysis showed that 72.5% of students did not feel pressured to comment or reply. Furthermore, 93.8% of students felt satisfied if their posts were not responded to speedily but were eventually attended to. Generally, 97.5% of students found it useful if their teachers provided their mobile phone numbers to be added to a WhatsApp group. If a teacher posts an explanation or a mathematical working in the group, all students are able to view the post together, as opposed to individual students asking the teacher similar questions. On whether students preferred using Facebook as a learning tool, there was roughly a 50-50 divide in the replies from the respondents, as 51.3% of students preferred WhatsApp, while 48.8% preferred Facebook as a supportive technology in teaching and learning. Taken altogether, the utilization of WhatsApp groups as a supportive technology in teaching and learning should be implemented in all classes to continuously engage our generation Y students in the ever-changing digital landscape.

Keywords: education, learning, messaging app, technology, WhatsApp groups

Procedia PDF Downloads 154
868 Study of Isoprene Emissions in Biogenic and Anthropogenic Environments in the Urban Atmosphere of Delhi: The Capital City of India

Authors: Prabhat Kashyap, Krishan Kumar

Abstract:

Delhi, the capital of India, is one of the most populated and polluted cities in the world. In terms of air quality, Delhi's air is degrading day by day and is among the worst of any major city in the world. The role of biogenic volatile organic compounds (BVOCs) as a culprit for degraded air quality has not been much studied in cities like Delhi. BVOCs not only play a critical role in rural areas but also determine the atmospheric chemistry of urban areas. In particular, isoprene (2-methyl-1,3-butadiene, C5H8) is the single most abundantly emitted compound among BVOCs globally and influences tropospheric ozone chemistry in the urban environment, as the ozone-forming potential of isoprene is very high. It is mainly emitted by vegetation, while a small but significant portion is also released in the exhaust of petrol-operated vehicles. This study investigates the spatial and temporal variations in quantitative measurements of isoprene along with different traffic tracers in two seasons (post-monsoon and winter) at four locations in Delhi. For the quantification of anthropogenic and biogenic isoprene, two sites at traffic intersections (Punjabi Bagh and CRRI) and two vegetated sites (JNU and Yamuna Biodiversity Park) were selected in the vicinity of isoprene-emitting tree species such as Ficus religiosa, Dalbergia sissoo, and Eucalyptus species. The concentrations of traffic tracers such as benzene and toluene were also determined, and their robust ratios with isoprene were used to differentiate the anthropogenic isoprene from the biogenic portion at each site. The ozone-forming potential (OFP) of all selected species, including isoprene, was also estimated. For the collection of intra-day samples (three times a day) in each season, pre-conditioned fenceline monitoring (FLM) Carbopack X thermal desorption tubes were used, and analysis was carried out with gas chromatography coupled with mass spectrometry (GC-MS). The results show that ambient isoprene was always higher in the post-monsoon season than in the winter season at all sites because of higher temperatures and more intense sunlight. The maximum isoprene emission flux was always observed during afternoon hours in both seasons at all sites. The highest isoprene concentration, 13.95 ppbv, was found at the Yamuna Biodiversity Park during afternoon hours in the post-monsoon season, while the lowest, 0.07 ppbv, was observed at the same location during morning hours in the winter season. The OFP of isoprene at the vegetated sites was very high during the post-monsoon season because of the high concentrations, whereas the OFP of the other traffic tracers was high during the winter season and at the traffic locations. Furthermore, the high correlation between isoprene concentrations and traffic volume at the traffic sites revealed that a noteworthy share of isoprene also originates from road traffic.
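As an illustration of the tracer-ratio split and the ozone-forming potential estimate described above, the following minimal sketch attributes ambient isoprene to traffic using benzene as a tracer and computes OFP as concentration times maximum incremental reactivity. The traffic emission ratio and the MIR coefficients are approximate, illustrative values, not those used in the study.

```python
# Minimal sketch of a tracer-ratio split and OFP estimate.
# The isoprene/benzene traffic ratio and MIR coefficients below are
# illustrative placeholder values, not those used in the study.

TRAFFIC_ISOPRENE_TO_BENZENE = 0.06   # assumed emission ratio in vehicle exhaust
MIR = {"isoprene": 10.6, "benzene": 0.72, "toluene": 4.0}  # g O3 / g VOC (approximate)

def split_isoprene(isoprene_ugm3, benzene_ugm3):
    """Attribute ambient isoprene to traffic and vegetation using the benzene tracer."""
    anthropogenic = min(isoprene_ugm3, TRAFFIC_ISOPRENE_TO_BENZENE * benzene_ugm3)
    biogenic = isoprene_ugm3 - anthropogenic
    return anthropogenic, biogenic

def ofp(concentrations_ugm3):
    """Ozone-forming potential as concentration x maximum incremental reactivity."""
    return {species: c * MIR[species] for species, c in concentrations_ugm3.items()}

# Hypothetical afternoon sample at a traffic site
sample = {"isoprene": 2.4, "benzene": 5.1, "toluene": 12.0}
anth, bio = split_isoprene(sample["isoprene"], sample["benzene"])
print(f"anthropogenic isoprene: {anth:.2f} ug/m3, biogenic: {bio:.2f} ug/m3")
print("OFP (ug O3 per m3):", ofp(sample))
```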

Keywords: biogenic VOCs, isoprene emission, anthropogenic isoprene, urban vegetation

Procedia PDF Downloads 113