Search results for: realistic images
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2850

450 An Efficient Algorithm for Solving the Transmission Network Expansion Planning Problem Integrating Machine Learning with Mathematical Decomposition

Authors: Pablo Oteiza, Ricardo Alvarez, Mehrdad Pirnia, Fuat Can

Abstract:

To effectively combat climate change, many countries around the world have committed to the decarbonisation of their electricity sectors, along with promoting a large-scale integration of renewable energy sources (RES). While this trend represents a unique opportunity to combat climate change, achieving a sound and cost-efficient energy transition towards low-carbon power systems poses significant challenges for the multi-year Transmission Network Expansion Planning (TNEP) problem. The objective of the multi-year TNEP is to determine the necessary network infrastructure to supply the projected demand in a cost-efficient way, considering the evolution of the new generation mix, including the integration of RES. The rapid integration of large-scale RES increases the variability and uncertainty in power system operation, which in turn increases short-term flexibility requirements. To meet these requirements, flexible generating technologies such as energy storage systems must be considered within the TNEP as well, along with proper models for capturing the operational challenges of future power systems. As a consequence, TNEP formulations are becoming more complex and difficult to solve, especially for application to realistic-sized power system models. To meet these challenges, there is an increasing need to develop efficient algorithms capable of solving the TNEP problem with reasonable computational time and resources. In this regard, a promising research area is the use of artificial intelligence (AI) techniques for solving large-scale mixed-integer optimization problems, such as the TNEP. In particular, the use of AI along with mathematical optimization strategies based on decomposition has shown great potential. In this context, this paper presents an efficient algorithm for solving the multi-year TNEP problem. The algorithm combines AI techniques with Column Generation, a traditional decomposition-based mathematical optimization method. One of the challenges of using Column Generation for solving the TNEP problem is that the subproblems are of a mixed-integer nature, and solving them therefore requires significant amounts of time and resources. Hence, in this proposal, we solve a linearly relaxed version of the subproblems and train a binary classifier that determines the values of the binary variables based on the results obtained from the linearized version. A key feature of the proposal is that we integrate the binary classifier into the optimization algorithm in such a way that the optimality of the solution can be guaranteed. The results of a case study based on the HRP 38-bus test system show that the binary classifier has an accuracy above 97% for estimating the values of the binary variables. Since the linearly relaxed version of the subproblems can be solved in significantly less time than its integer programming counterpart, integrating the binary classifier into the Column Generation algorithm allowed us to reduce the computational time required for solving the problem by 50%. The final version of this paper will contain a detailed description of the proposed algorithm, the AI-based binary classifier technique and its integration into the CG algorithm. To demonstrate the capabilities of the proposal, we evaluate the algorithm in case studies with different scenarios, as well as in other power system models.
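
The classifier-in-the-loop idea described above can be illustrated with a short sketch. This is a hypothetical illustration only (synthetic features, a generic scikit-learn classifier and an assumed confidence threshold), not the authors' implementation: variables the classifier predicts with high confidence are fixed, and the remainder are left to an exact mixed-integer solve, which is one way the stated optimality guarantee could be preserved.

```python
# Hypothetical sketch (not the authors' code): train a classifier that predicts the
# value of a binary variable in a column-generation subproblem from features of its
# LP relaxation, and fall back to an exact MIP solve whenever the prediction is
# uncertain.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Placeholder training data: each row holds features extracted from a solved
# LP-relaxed subproblem (e.g. fractional value, reduced cost, candidate-line loading);
# the label is the value the variable took in the exact MIP solution.
X_train = rng.random((500, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)   # synthetic stand-in labels

clf = GradientBoostingClassifier().fit(X_train, y_train)

def fix_binaries(lp_features, confidence=0.9):
    """Return (fixed_values, needs_exact_solve) for one subproblem."""
    proba = clf.predict_proba(lp_features)[:, 1]
    confident = (proba > confidence) | (proba < 1.0 - confidence)
    fixed = (proba > 0.5).astype(int)
    # Variables the classifier is unsure about are passed to an exact MIP solve,
    # which is how an optimality guarantee could be retained.
    return fixed, ~confident

values, unresolved = fix_binaries(rng.random((10, 3)))
print(values, unresolved.sum(), "variables passed to the exact solver")
```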

Keywords: integer optimization, machine learning, mathematical decomposition, transmission planning

Procedia PDF Downloads 43
449 Capacity Building on Small Automatic Tracking Antenna Development for Thailand Space Sustainability

Authors: Warinthorn Kiadtikornthaweeyot Evans, Nawattakorn Kaikaew

Abstract:

The communication system between the ground station and the satellite is very important to guarantee contact between both sides. Thailand, led by the Geo-Informatics and Space Technology Development Agency (GISTDA), has received satellite images from other nations' satellites for a number of years. In 2008, the Thailand Earth Observation Satellite (THEOS) became the first Earth observation satellite owned by Thailand. Its mission was to monitor the country with affordable access to space-based Earth imagery. The control ground station was operated from the outset by Thai engineers, and telecommands were sent to the satellite according to requests from government and private sectors. Since then, GISTDA's engineers have gained the skills and experience needed to operate the satellite. Recently, the demand for satellite data has been increasing rapidly as space technology moves fast and delivers more benefits. It is essential to ensure that Thailand remains competitive in space technology. Thai engineers have started to improve the performance of the control ground station in many different sections, while also developing skills and knowledge in areas of satellite communication. Human resource skills are being reinforced through capacity-building development projects. This paper focuses on the hands-on capacity building of GISTDA's engineers to develop a small automatic tracking antenna. The final achievement of the project is the first-phase prototype of a small automatic tracking antenna to support the new technology of the satellites. Two main subsystems have been developed and tested: the tracking system and the monitoring and control software. Functional testing of the first-phase prototype has been performed with Two-Line Element (TLE) data and the mission planning plan (MPP) file calculated for the THEOS satellite by GISTDA.
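
As background to the TLE-based tracking test mentioned above, the core pointing computation can be sketched with the open-source Skyfield library. This is an assumed, minimal illustration (hypothetical station coordinates and placeholder TLE handling), not GISTDA's tracking software:

```python
# Minimal pointing sketch: given a TLE and the ground-station coordinates, compute
# the azimuth/elevation the antenna must be driven to at the current time.
from skyfield.api import load, wgs84, EarthSatellite

def pointing_angles(tle_line1, tle_line2, lat_deg, lon_deg, elevation_m=0.0):
    ts = load.timescale()
    sat = EarthSatellite(tle_line1, tle_line2, "SATELLITE", ts)
    station = wgs84.latlon(lat_deg, lon_deg, elevation_m)
    t = ts.now()
    alt, az, distance = (sat - station).at(t).altaz()
    return az.degrees, alt.degrees, distance.km

# Usage (with a real TLE obtained from a catalogue service; station location hypothetical):
# az, el, rng = pointing_angles(line1, line2, 13.10, 100.93)
# print(f"point antenna to azimuth {az:.2f} deg, elevation {el:.2f} deg")
```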

Keywords: capacity building, small tracking antenna, automatic tracking system, project development procedure

Procedia PDF Downloads 46
448 Level of Sociality and Sting Autotomy

Authors: V. V. Belavadi, Syed Najeer E. Noor Khadri, Shivamurthy Naik

Abstract:

Members of the aculeate Hymenoptera exhibit different levels of sociality. While the Chrysidoidea are primarily parasitic and use their sting only for the purpose of parasitizing the host and never for defense, all vespoid and apoid (sphecid) wasps use their sting for paralysing their prey as well as for defending themselves from predators and intruders. Though most apoid bees use their sting for defending themselves, a few bees (Apis spp.) use their sting exclusively for defending their colonies and the brood. A preliminary study conducted on the comparative morphology of the stings of apoid bees and wasps and that of vespid wasps indicated that the backward-projected barbs are more pronounced only in the genus Apis, which is considered the reason why a honey bee worker loses its sting and dies when it stings a higher animal. This raises an important question: how did barbs on the lancets of Apis bees evolve? Had the barbs not been so strong, the worker bee would have been able to defend the colony repeatedly instead of only once in its lifetime. Some arguments in favour of worker altruistic behaviour mention that in highly social insects, the colony size is large, workers are closely related among themselves, and a worker sacrificing its life for the colony is beneficial for the colony. However, in colonies with a queen that has mated multiple times, the coefficient of relatedness among workers is reduced, and still the workers continue to exhibit the same behaviour. In this paper, we have tried to compare the morphology of the stings of aculeate Hymenoptera and have attempted to relate sting morphology with social behaviour. The species examined for sting morphology are Apis cerana, A. dorsata, A. florea, Amegilla violacea, A. zonata, Megachile anthracina, M. disjuncta, Liris aurulentus and Tachysphex bengalensis. Our studies indicate that the occurrence of barbs on the lancets correlates with the degree of sociality and that sting autotomy is more pronounced in swarm-founding species than in haplometrotic species. The number of barbs on the lancets varied from 0 to 11. Additionally, SEM images revealed interesting characters of the barbs.

Keywords: altruistic, barbs, sociality, sting autotomy

Procedia PDF Downloads 292
447 A Proposed Framework for Better Managing Small Group Projects on an Undergraduate Foundation Programme at an International University Campus

Authors: Sweta Rout-Hoolash

Abstract:

Each year, selected students from around 20 countries begin their degrees at Middlesex University with the International Foundation Program (IFP), developing the skills required for academic study at a UK university. The IFP runs for 30 learning/teaching weeks at Middlesex University Mauritius Branch Campus, an international campus of the UK's Middlesex University. Successful IFP students join their degree courses already settled into life at their chosen campus (London, Dubai, Mauritius or Malta) and confident that they understand what is required for degree study. Although part of the School of Science and Technology, in Mauritius it prepares students for undergraduate level across all Schools represented on campus, including disciplines such as Accounting, Business, Computing, Law, Media and Psychology. The researcher has critically reviewed the framework and resources in the curriculum for a particular six-week period of IFP study (the dedicated group work phase). Despite working together closely for 24 weeks, IFP students approach the final six-week small group project phase with considerable apprehension. It was observed that students did not engage effectively in the group work exercise. Additionally, groups that seemed to be working well did not necessarily produce results reflecting effective collaboration, nor individual results that were better than prior efforts. The researcher identified scope for change and innovation in the IFP curriculum and in how group work is introduced and facilitated. The study explores the challenges of group work in the context of the Mauritius campus, though it is clear that the implications of the project are not restricted to one campus only. The presentation offers a reflective review of the previous structure put in place for the management of small group assessed projects on the programme, from both the student and tutor perspectives. The focus of the research is the student voice, taking into consideration past and present IFP students' experiences as written in their learning journals. Further, it proposes the introduction of a revised framework to help students take greater ownership of the group work process in order to engage more effectively with the learning outcomes of this crucial phase of the programme. The study has critically reviewed recent and seminal literature on how to achieve greater student ownership during this phase, especially in an environment of assessed multicultural group work. The presentation proposes several new approaches for encouraging students to take more control of the collaboration process. Detailed consideration is given to how the proposed changes impact on the work of other stakeholders, or partners to student learning. Clear proposals are laid out for evaluation of the different approaches intended to be implemented during the upcoming academic year (the student voice through their own submitted reflections, focus group interviews, and the assessment results). The proposals presented are all realistic and have the potential to transform students' learning. Furthermore, the study has engaged with the UK Professional Standards Framework for teaching and supporting learning in higher education, and demonstrates practice at the level of 'Fellow' of the Higher Education Academy (HEA).

Keywords: collaborative peer learning, enhancing learning experiences, group work assessment, learning communities, multicultural diverse classrooms, studying abroad

Procedia PDF Downloads 297
446 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays that can be of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimations differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surface of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing the extensive air shower characteristics and allowing the ambient dose equivalent to be assessed. In this approach, the GCR is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD allows the spectral fluence rate of secondary particles induced by extensive showers to be determined, considering altitudes ranging from the ground to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients. The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically large number of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain a reliable calculation of dose levels.
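
For reference, the conversion step mentioned above (spectral fluence of secondary particles folded with fluence-to-ambient-dose-equivalent conversion coefficients) is commonly written as follows; this is a standard formulation consistent with the abstract, not an equation reproduced from the paper:

```latex
% Ambient dose equivalent: fold the spectral fluence of each secondary-particle
% species i with its fluence-to-ambient-dose-equivalent conversion coefficient.
H^{*}(10) \;=\; \sum_{i} \int_{E} h^{*}_{\Phi,i}(E)\, \frac{\mathrm{d}\Phi_{i}}{\mathrm{d}E}\, \mathrm{d}E
```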

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 186
445 Image Based Landing Solutions for Large Passenger Aircraft

Authors: Thierry Sammour Sawaya, Heikki Deschacht

Abstract:

In commercial aircraft operations, almost half of the accidents happen during the approach or landing phases. Automatic guidance and automatic landings have proven to bring significant added safety value for this challenging landing phase. This is why Airbus and ScioTeq have decided to work together to explore the capability of image-based landing solutions as additional landing aids, to further expand the possibility of performing automatic approach and landing on runways where the current guidance systems are either not fitted or not optimal. Current systems for automated landing often depend on radio signals provided by ground infrastructure at the airport or on satellite coverage. In addition, these radio signals may not always be available with the integrity and performance required for safe automatic landing. Being independent from these radio signals would widen the operational possibilities and increase the number of automated landings. Airbus and ScioTeq are joining their expertise in the field of computer vision in the European programme Clean Sky 2 Large Passenger Aircraft, in which they are leading the IMBALS (IMage BAsed Landing Solutions) project. The ultimate goal of this project is to demonstrate, develop, validate and verify a certifiable automatic landing system guiding an airplane during the approach and landing phases based on an onboard camera system capturing images, enabling automatic landing independent from radio signals and without precision landing instruments. In the frame of this project, ScioTeq is responsible for the development of the Image Processing Platform (IPP), while Airbus is responsible for defining the functional and system requirements as well as for the testing and integration of the developed equipment in a Large Passenger Aircraft representative environment. The aim of this paper is to describe the system as well as the associated methods and tools developed for validation and verification.

Keywords: aircraft landing system, aircraft safety, autoland, avionic system, computer vision, image processing

Procedia PDF Downloads 62
444 A Reading Attempt of the Urban Memory of Jordan University of Science and Technology Campus by Cognitive Mapping

Authors: Bsma Adel Bany Mohammad

Abstract:

University campuses are small cities containing basic city functions such as educational spaces, accommodation, services and transportation. They are spaces of functional and social life with different activities and different occupants. A campus is designed and transformed like a city, so both are experienced and memorised in the same way. Campus memory is the ability of individuals to maintain and reveal the spatial components of designed physical spaces, which together form the understanding, experiences and sensations of the environment as a whole. 'Cognitive mapping' is used to decode the physical interaction and emotional relationship between individuals and the city; cognitive maps are created graphically, using geometric and verbal elements on paper, by remembering the images of the urban environment. In this study, to determine the emotional urban identity belonging to the Jordan University of Science and Technology campus, architecture students were asked to identify the areas they interact with on the campus by drawing a cognitive map. 'Campus memory items' were identified by analysing the cognitive maps of the campus, and the spatial identity was then derived from these data. The analysis is based on the five basic elements of Lynch: paths, districts, edges, nodes, and landmarks. As a result of this analysis, it was found that spatial identity is constructed by the shared elements of the maps. Most students' maps listed the gate structures, large distinctive structures located at the main entrances of the campus, as major landmarks, then the square spaces as nodes, in addition to both stairs and corridors as paths. Finally, the districts and edges of educational buildings and service spaces are listed correspondingly in the cognitive maps. The findings suggest that the spatial identity of the campus design is related mainly to the gate structures, squares and stairs.

Keywords: cognitive maps, university campus, urban memory, identity

Procedia PDF Downloads 120
443 Structural Properties of Surface Modified PVA: Zn97Pr3O Polymer Nanocomposite Free Standing Films

Authors: Pandiyarajan Thangaraj, Mangalaraja Ramalinga Viswanathan, Karthikeyan Balasubramanian, Héctor D. Mansilla, José Ruiz

Abstract:

Rare-earth-ion-doped semiconductor nanostructures have gained much attention due to their novel physical and chemical properties, which lead to potential applications in laser technology as inexpensive luminescent materials. Doping rare earth ions into the ZnO semiconductor alters its electronic structure and emission properties. Surface modification (polymer covering) is one of the simplest techniques to modify the emission characteristics of host materials. The present work reports the synthesis and structural properties of PVA:Zn97Pr3O polymer nanocomposite free-standing films. To prepare Pr3+ doped ZnO nanostructures and PVA:Zn97Pr3O polymer nanocomposite free-standing films, the colloidal chemical and solution casting techniques were adopted, respectively. The formation of the PVA:Zn97Pr3O films was confirmed through X-ray diffraction (XRD), absorption and Fourier transform infrared (FTIR) spectroscopy analyses. XRD measurements confirm that the prepared materials are crystalline with a hexagonal wurtzite structure. The polymer composite film exhibits the diffraction peaks of both the PVA and ZnO structures. TEM images reveal that the pure and Pr3+ doped ZnO nanostructures exhibit sheet-like morphology. Optical absorption spectra show the free excitonic absorption band of ZnO at 370 nm, while the PVA:Zn97Pr3O polymer film shows absorption bands at ~282 and 368 nm; these arise, respectively, from carbonyl-containing structures connected to the PVA polymeric chains, mainly at the chain ends, and from the free excitonic absorption of the ZnO nanostructures. The transmission spectrum of the as-prepared film shows 57 to 69% transparency in the visible and near-IR region. FTIR spectral studies confirm the presence of the A1 (TO) and E1 (TO) modes of Zn-O bond vibration and the formation of the polymer composite material.

Keywords: rare earth doped ZnO, polymer composites, structural characterization, surface modification

Procedia PDF Downloads 339
442 The Impact of Sign Language on Generating and Maintaining a Mental Image

Authors: Yi-Shiuan Chiu

Abstract:

Deaf signers have been found to have better mental image performance than hearing non-signers. The goal of this study was to investigate the ability of deaf signers of Taiwanese Sign Language (TSL) to generate mental images, to maintain them, and to manipulate them. In the visual image task, participants first memorized digits formed within a 4 × 5 grid of cells. After the presentation of a cue, a Chinese digit character shown above a blank grid, participants had to form an image of the corresponding digit. When shown a probe, a grid containing a red circle, participants had to decide as quickly as possible whether the probe would have been covered by the mental image of the digit. The ISI (interstimulus interval) between cue and probe was manipulated. In experiment 1, 24 deaf signers and 24 hearing non-signers were asked to perform image generation tasks (ISI: 200, 400 ms) and image maintenance tasks (ISI: 800, 2000 ms). The results showed that deaf signers had an enhanced ability to generate and maintain a mental image. To explore the process of mental imagery, in experiment 2, 30 deaf signers and 30 hearing non-signers were asked to perform visual search while maintaining a mental image. Between a digit image cue and a red circle probe, participants performed a visual search task in which they judged whether the apex of a target triangle pointed to the right or left. When there was only one triangle in the search task, the results showed that both deaf signers and hearing non-signers had similar visual search performance, in which search targets at the mental image locations were facilitated. However, deaf signers maintained better and faster mental image performance than non-signers. In experiment 3, we increased the number of triangles to 4 to raise the difficulty of the visual search task. The results showed that deaf participants performed more accurately in the visual search and image maintenance tasks. The results suggest that people may use eye movements as a mnemonic strategy to maintain a mental image, and that deaf signers have an enhanced ability to resist the interference of eye movements when there are fewer distractors. In sum, these findings suggest that deaf signers have enhanced mental image processing.

Keywords: deaf signers, image maintenance, mental image, visual search

Procedia PDF Downloads 128
441 Rainfall Estimation over Northern Tunisia by Combining Meteosat Second Generation Cloud Top Temperature and Tropical Rainfall Measuring Mission Microwave Imager Rain Rates

Authors: Saoussen Dhib, Chris M. Mannaerts, Zoubeida Bargaoui, Ben H. P. Maathuis, Petra Budde

Abstract:

In this study, a new method to delineate rain areas in northern Tunisia is presented. The proposed approach is based on blending the geostationary Meteosat Second Generation (MSG) infrared (IR) channel with the low-Earth-orbiting passive Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). Blending these two products involves two main steps. Firstly, the rainy pixels have to be identified. This step is achieved with a classification using the MSG IR 10.8 channel and the water vapour channel WV 6.2, applying a threshold on the temperature difference of less than 11 Kelvin, which is an approximation of the clouds that have a high likelihood of precipitation. The second step consists of fitting the relation between IR cloud top temperature and the TMI rain rates. The correlation coefficient of these two variables has a negative tendency, meaning that with decreasing temperature there is an increase in rainfall intensity. The fitted equation is then applied to the whole day of MSG images at 15-minute intervals, which are summed. To validate this combined product, daily extreme rainfall events that occurred during the period 2007-2009 were selected, using a threshold criterion for large rainfall depth (> 50 mm/day) occurring at at least one rainfall station. An inverse distance interpolation method was applied to generate rainfall maps for the drier summer season (from May to October) and the wet winter season (from November to April). The evaluation results of the estimated rainfall combining MSG and TMI were very encouraging: all the events were detected as rainy, and the correlation coefficients were much better than those of previously evaluated products over the study area, such as the MSG-MPE and PERSIANN products. The combined product showed better performance during the wet season. We also notice an overestimation of the maximum estimated rain for many events.
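
The two blending steps described above can be sketched as follows. This is an illustrative outline with assumed array names and a simple linear fit, not the authors' code; it flags likely-precipitating pixels from the WV 6.2 / IR 10.8 brightness temperature difference and then maps IR cloud-top temperature to rain rate using collocated TMI samples:

```python
# Hedged sketch of the two blending steps (assumed variable names).
import numpy as np

def rain_mask(t_wv62, t_ir108, threshold_k=11.0):
    """Boolean mask of pixels with a high likelihood of precipitation."""
    return (t_ir108 - t_wv62) < threshold_k

def fit_ir_to_rainrate(t_ir108_samples, tmi_rainrate_samples):
    """Least-squares linear fit R = a*T + b (the slope is expected to be negative)."""
    a, b = np.polyfit(t_ir108_samples, tmi_rainrate_samples, deg=1)
    return a, b

def estimate_rainrate(t_ir108, mask, a, b):
    """Apply the fitted relation to the flagged pixels of one MSG scene."""
    rate = np.where(mask, a * t_ir108 + b, 0.0)
    return np.clip(rate, 0.0, None)   # no negative rain rates
```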

Keywords: combination, extreme, rainfall, TMI-MSG, Tunisia

Procedia PDF Downloads 145
440 Kernel-Based Double Nearest Proportion Feature Extraction for Hyperspectral Image Classification

Authors: Hung-Sheng Lin, Cheng-Hsuan Li

Abstract:

Over the past few years, kernel-based algorithms have been widely used to extend linear feature extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), and nonparametric weighted feature extraction (NWFE) to their nonlinear versions, kernel principal component analysis (KPCA), generalized discriminant analysis (GDA), and kernel nonparametric weighted feature extraction (KNWFE), respectively. These nonlinear feature extraction methods can detect the nonlinear directions with the largest nonlinear variance or the largest class separability based on the given kernel function. Moreover, they have been applied to improve target detection and image classification for hyperspectral images. Double nearest proportion feature extraction (DNP) can effectively reduce the overlap effect and has good performance in hyperspectral image classification. The DNP structure is an extension of the k-nearest neighbor technique. For each sample, there are two corresponding nearest proportions of samples, the self-class nearest proportion and the other-class nearest proportion. The term 'nearest proportion' used here considers both the local information and other, more global information. With these settings, the effect of the overlap between the sample distributions can be reduced. Usually, the maximum likelihood estimator and the related unbiased estimator are not ideal estimators in high-dimensional inference problems, particularly in small data-size situations. Hence, an improved estimator obtained by shrinkage estimation (regularization) is proposed. Based on the DNP structure, LDA is included as a special case. In this paper, the kernel method is applied to extend DNP to kernel-based DNP (KDNP). In addition to the advantages of DNP, KDNP surpasses DNP in the experimental results. According to the experiments on real hyperspectral image data sets, the classification performance of KDNP is better than that of PCA, LDA, NWFE, and their kernel versions, KPCA, GDA, and KNWFE.
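
The self-class / other-class nearest-proportion idea can be illustrated with a simplified linear sketch. This follows the general construction described above (local means from same-class and other-class nearest neighbours feeding within- and between-class scatter matrices, with shrinkage regularisation) but is not the authors' exact DNP or KDNP formulation:

```python
# Simplified nearest-proportion-style linear feature extractor (illustrative only).
import numpy as np

def nearest_proportion_features(X, y, k=5, n_components=2):
    n, d = X.shape
    Sb = np.zeros((d, d))   # scatter built from other-class nearest proportions
    Sw = np.zeros((d, d))   # scatter built from self-class nearest proportions
    for i in range(n):
        dist = np.linalg.norm(X - X[i], axis=1)
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        other = np.where(y != y[i])[0]
        m_same = X[same[np.argsort(dist[same])[:k]]].mean(axis=0)
        m_other = X[other[np.argsort(dist[other])[:k]]].mean(axis=0)
        dw = (X[i] - m_same)[:, None]
        db = (X[i] - m_other)[:, None]
        Sw += dw @ dw.T
        Sb += db @ db.T
    # Shrinkage regularisation, as mentioned in the abstract, then solve Sw^-1 Sb.
    Sw += 1e-3 * np.trace(Sw) / d * np.eye(d)
    eigval, eigvec = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigval.real)[::-1][:n_components]
    return X @ eigvec[:, order].real
```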

Keywords: feature extraction, kernel method, double nearest proportion feature extraction, kernel double nearest feature extraction

Procedia PDF Downloads 298
439 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups

Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc

Abstract:

Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority's (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives with the purpose of re-evaluating food additives that were previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported through regular public calls for data by the food industry and European countries. These are combined, at the individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys from 19 European countries and considers six different population groups (infants, toddlers, children, adolescents, adults and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with the corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed based on two different sets of data: (a) maximum permitted levels (MPLs) of use set down in the EU legislation (defined as the regulatory maximum level exposure assessment scenario) and (b) usage levels and/or analytical occurrence data (defined as the refined exposure assessment scenario). The refined exposure assessment scenario is subdivided into the brand-loyal consumer scenario and the non-brand-loyal consumer scenario. For the brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the highest reported usage/analytical level for one food group, and at the mean level for the remaining food groups. For the non-brand-loyal consumer scenario, the consumer is considered to be exposed on a long-term basis to the mean reported usage/analytical level for all food groups. Additional exposure from sources other than the direct addition of food additives (i.e. natural presence, contaminants, and carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers resulted in exposures of 8.1 and 20.7 mg/kg body weight/day, respectively. For the regulatory maximum level exposure assessment scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers, at up to 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and food consumption data enables the assessment of chronic dietary exposure to food additives at more realistic levels.
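
The exposure calculation described in the methodology can be sketched with a small pandas example. Column names and table layouts are assumptions made for illustration, not EFSA's production code:

```python
# Combine per-food-group occurrence levels with each individual's consumption per kg
# body weight, average per individual per day, then take the mean and 95th percentile
# per survey and population group.
import pandas as pd

def chronic_exposure(consumption: pd.DataFrame, occurrence: pd.DataFrame) -> pd.DataFrame:
    """
    consumption: one row per individual, day and food group, with columns
                 [survey, population_group, individual_id, day, food_group, g_per_kg_bw]
    occurrence:  columns [food_group, mg_per_g]  (mean or highest level per scenario)
    """
    merged = consumption.merge(occurrence, on="food_group", how="inner")
    merged["mg_per_kg_bw"] = merged["g_per_kg_bw"] * merged["mg_per_g"]
    per_day = (merged.groupby(["survey", "population_group", "individual_id", "day"])
                     ["mg_per_kg_bw"].sum())
    per_individual = per_day.groupby(["survey", "population_group", "individual_id"]).mean()
    return per_individual.groupby(["survey", "population_group"]).agg(
        mean_exposure="mean",
        p95_exposure=lambda s: s.quantile(0.95),
    )
```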

Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data

Procedia PDF Downloads 288
438 Improving the Utility of Social Media in Pharmacovigilance: A Mixed Methods Study

Authors: Amber Dhoot, Tarush Gupta, Andrea Gurr, William Jenkins, Sandro Pietrunti, Alexis Tang

Abstract:

Background: The COVID-19 pandemic has driven pharmacovigilance towards a new paradigm. Nowadays, more people than ever before are recognising and reporting adverse reactions from medications, treatments, and vaccines. In the modern era, with over 3.8 billion users, social media has become the most accessible medium for people to voice their opinions, and so it provides an opportunity to engage with more patient-centric and accessible pharmacovigilance. However, the pharmaceutical industry has been slow to incorporate social media into its modern pharmacovigilance strategy. This project aims to make social media a more effective tool in pharmacovigilance, and so reduce drug costs, improve drug safety and improve patient outcomes. This will be achieved by firstly uncovering and categorising the barriers facing the widespread adoption of social media in pharmacovigilance. Following this, the potential opportunities of social media will be explored. We will then propose realistic, practical recommendations to make social media a more effective tool for pharmacovigilance. Methodology: A comprehensive systematic literature review was conducted to produce a categorised summary of these barriers. This was followed by 11 semi-structured interviews with pharmacovigilance experts to confirm the literature review findings whilst also exploring the unpublished and real-life challenges faced by those in the pharmaceutical industry. Finally, a survey of the general public (n = 112) ascertained public knowledge, perception, and opinion regarding the use of their social media data for pharmacovigilance purposes. This project stands out by offering perspectives from the public and the pharmaceutical industry that fill the research gaps identified in the literature review. Results: Our results gave rise to several key analysis points. Firstly, inadequacies of current Natural Language Processing algorithms hinder effective pharmacovigilance data extraction from social media, and where data extraction is possible, there are significant questions over its quality. Social media also contains a variety of biases towards common drugs, mild adverse drug reactions, and the younger generation. Additionally, outdated regulations for social media pharmacovigilance do not align with the newer General Data Protection Regulation (GDPR), creating ethical ambiguity about data privacy and level of access. This leads to an underlying mindset of avoidance within the pharmaceutical industry, as firms are disincentivised by the legal, financial, and reputational risks associated with breaking ambiguous regulations. Conclusion: Our project uncovered several barriers that prevent effective pharmacovigilance on social media. As such, social media should be used to complement traditional sources of pharmacovigilance rather than as a sole source of pharmacovigilance data. However, this project adds further value by proposing five practical recommendations that improve the effectiveness of social media pharmacovigilance. These include: prioritising health-orientated social media; improving technical capabilities through investment and strategic partnerships; setting clear regulatory guidelines using multi-stakeholder processes; creating an adverse drug reaction reporting interface built into social media platforms; and, finally, developing educational campaigns to raise awareness of the use of social media in pharmacovigilance. Implementation of these recommendations would speed up the efficient, ethical, and systematic adoption of social media in pharmacovigilance.

Keywords: adverse drug reaction, drug safety, pharmacovigilance, social media

Procedia PDF Downloads 47
437 Biomechanics of Atlantoaxial Complex for Various Posterior Fixation Techniques

Authors: Arun C. O., Shrijith M. B., Thakur Rajesh Singh

Abstract:

The study aims to analyze and understand the biomechanical stability of the atlantoaxial complex under different posterior fixation techniques using the finite element method in the Indian context. Conventional cadaveric studies show heterogeneity in biomechanical properties. The finite element method, being a versatile numerical tool, is widely used for biomechanical analysis of the atlantoaxial complex. However, the biomechanics of posterior fixation techniques for Indian subjects is missing in the literature. It is essential to study this context, as bone density and vertebral geometry vary from region to region, thereby requiring different screw lengths, which can affect the range of motion (ROM) and the stresses generated. The current study uses CT images to develop a 3D finite element model of the C1-C2 geometry without ligaments. Instrumentation is added to this geometry to develop four models for four fixation techniques, namely C1-C2 TA, C1LM-C2PS, C1LM-C2Pars, and C1LM-C2TL. To simulate flexion, extension, lateral bending and axial rotation, a 1.5 Nm moment is applied to C1 while the bottom nodes of C2 are fixed. The range of motion (ROM) is then compared with that of the unstable model (without ligaments). All the fixation techniques showed more than a 97 percent reduction in the range of motion. The von Mises stresses developed in the screw constructs were obtained. From the studies, it is observed that the transarticular (TA) technique is the most stable in lateral bending, while C1LM-C2 Translaminar is found to be the most stable in flexion/extension. The von Mises stresses developed are lowest for the transarticular technique in lateral bending and axial rotation, whereas the stresses developed in the C2 pars construct are lowest in flexion/extension. On average, the TA technique is stable in all motions, and the stresses in its constructs are also low. The transarticular technique is therefore found to be the best fixation technique for Indian subjects among the four methods.

Keywords: biomechanics, cervical spine, finite element model, posterior fixation

Procedia PDF Downloads 116
436 A Remote Sensing Approach to Estimate the Paleo-Discharge of the Lost Saraswati River of North-West India

Authors: Zafar Beg, Kumar Gaurav

Abstract:

The lost Saraswati is described as a large perennial river which was 'lost' in the desert towards the end of the Indus-Saraswati civilisation. It has been proposed earlier that the lost Saraswati flowed in the Sutlej-Yamuna interfluve, parallel to the present-day Indus River. It is believed that one of the earliest known ancient civilizations, the 'Indus-Saraswati civilization', prospered along the course of the Saraswati River. The demise of the Indus civilization is considered to be due to the desiccation of the river. Today, in the Sutlej-Yamuna interfluve, we observe an ephemeral river known as the Ghaggar. It is believed that, along with the Ghaggar River, two other Himalayan rivers, the Sutlej and the Yamuna, were tributaries of the lost Saraswati and made a significant contribution to its discharge. The presence of a large number of archaeological sites and the occurrence of thick fluvial sand bodies in the subsurface of the Sutlej-Yamuna interfluve have been used to suggest that the Saraswati River was a large perennial river. Further, the wide course of about 4-7 km recognized from satellite imagery of the Ghaggar-Hakra belt between Suratgarh and Anupgarh strengthens this hypothesis. Here we develop a methodology to estimate the paleo discharge and paleo width of the lost Saraswati River. In doing so, we rely on the hypothesis that the ancient Saraswati River used to carry the combined flow, or some part of it, of the Yamuna, Sutlej and Ghaggar catchments. We first established regime relationships between drainage area and channel width and between catchment area and discharge for 29 different rivers presently flowing on the Himalayan Foreland, from the Indus in the west to the Brahmaputra in the east. We found that the width and discharge of all the Himalayan rivers scale in a similar way when plotted against their corresponding catchment areas. Using these regime curves, we calculate the width and discharge of the paleochannels originating from the Sutlej, Yamuna and Ghaggar rivers by measuring their corresponding catchment areas from satellite images. Finally, we add the discharge and width obtained from each of the individual catchments to estimate the paleo width and paleo discharge, respectively, of the Saraswati River. Our regime curves provide a first-order estimate of the paleo discharge of the lost Saraswati.
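
The regime-curve procedure described above can be sketched as follows, using placeholder numbers rather than the authors' data: power laws are fitted to present-day rivers in log-log space and then applied to the palaeochannel catchment areas, whose contributions are summed:

```python
# Hedged sketch of the regime-curve approach: fit W = a*A^b and Q = c*A^d to
# present-day Himalayan rivers, then apply the fits to the palaeochannel catchments.
import numpy as np

def fit_power_law(area_km2, value):
    """Least-squares fit of value = k * area**m in log-log space."""
    m, log_k = np.polyfit(np.log(area_km2), np.log(value), deg=1)
    return np.exp(log_k), m

# Placeholder regime data for present-day rivers (catchment area, width, discharge).
area = np.array([5e3, 2e4, 6e4, 1.5e5])             # km^2, illustrative only
width = np.array([150.0, 400.0, 900.0, 1800.0])     # m, illustrative only
discharge = np.array([300.0, 1200.0, 4000.0, 9000.0])  # m^3/s, illustrative only

kw, mw = fit_power_law(area, width)
kq, mq = fit_power_law(area, discharge)

# Catchment areas (km^2) of the palaeochannels draining the Sutlej, Yamuna and
# Ghaggar catchments; placeholders, not the measured values.
palaeo_areas = np.array([6.0e4, 3.5e4, 4.0e4])

palaeo_width = np.sum(kw * palaeo_areas**mw)
palaeo_discharge = np.sum(kq * palaeo_areas**mq)
print(f"estimated palaeo width ~{palaeo_width:.0f} m, discharge ~{palaeo_discharge:.0f} m^3/s")
```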

Keywords: Indus civilization, palaeochannel, regime curve, Saraswati River

Procedia PDF Downloads 155
435 Off-Line Text-Independent Arabic Writer Identification Using Optimum Codebooks

Authors: Ahmed Abdullah Ahmed

Abstract:

The task of recognizing the writer of a handwritten text has been an attractive research problem in the document analysis and recognition community, with applications in handwriting forensics, paleography, document examination and handwriting recognition. This research presents an automatic method for writer recognition from digitized images of unconstrained writings. Although a great effort has been made in previous studies to come up with various methods, their performances, especially in terms of accuracy, have fallen short, and there is still wide room for improvement. The proposed technique employs optimal codebook-based writer characterization, where each writing sample is represented by a set of features computed from two codebooks, beginning and ending. Unlike most of the classical codebook-based approaches, which segment the writing into graphemes, this study is based on fragmenting particular areas of writing, namely the beginning and ending strokes. The proposed method starts with contour detection to extract significant information from the handwriting; curve fragmentation is then employed to divide the beginning and ending zones of the handwriting into small fragments. Similar fragments of beginning strokes are grouped together to create the beginning cluster, and similarly, the ending strokes are grouped to create the ending cluster. These two clusters lead to the development of two codebooks (beginning and ending) by choosing the center of every group of similar fragments. The writings under study are then represented by computing the probability of occurrence of the codebook patterns. The probability distribution is used to characterize each writer. Two writings are then compared by computing distances between their respective probability distributions. The evaluations were carried out on the ICFHR standard dataset of 206 writers using the beginning and ending codebooks separately. The ending codebook achieved the highest identification rate of 98.23%, which is the best result so far on the ICFHR dataset.
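
A hedged illustration of the codebook representation and comparison described above is given below. It uses generic fragment descriptors and k-means clustering as a stand-in; the paper's own fragmentation and clustering details are not reproduced:

```python
# Codebook-based writer characterisation, simplified: build a codebook by clustering
# fragment descriptors, represent a writing as the probability of occurrence of each
# codeword, and compare two writings by a distance between their distributions.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(fragment_descriptors, n_codewords=64, seed=0):
    """fragment_descriptors: (n_fragments, n_features) array pooled from many writers."""
    return KMeans(n_clusters=n_codewords, n_init=10, random_state=seed).fit(fragment_descriptors)

def writer_signature(codebook, fragment_descriptors):
    """Probability of occurrence of each codebook pattern in one writing sample."""
    labels = codebook.predict(fragment_descriptors)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

def chi2_distance(p, q, eps=1e-10):
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Usage sketch with random stand-in descriptors:
rng = np.random.default_rng(1)
codebook = build_codebook(rng.random((2000, 16)))
d = chi2_distance(writer_signature(codebook, rng.random((120, 16))),
                  writer_signature(codebook, rng.random((150, 16))))
print(f"distance between the two writings: {d:.3f}")
```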

Keywords: off-line text-independent writer identification, feature extraction, codebook, fragments

Procedia PDF Downloads 483
434 Synthesis and Characterization of the Carbon Spheres Built Up from Reduced Graphene Oxide

Authors: Takahiro Saida, Takahiro Kogiso, Takahiro Maruyama

Abstract:

Ordered structural carbon (OSC) materials are expected to find application in secondary-battery electrodes, catalyst supports, and biomaterials because their uniform pore size results in low diffusion resistance. In general, OSC materials are synthesized using a template material. Changing the size and shape of this template tunes the pore size of the OSC material to the intended purpose. Depositing oxide nanosheets on a polymer sphere template by the layer-by-layer (LbL) method has been reported as one of the preparation methods for OSC materials. The LbL method allows the thickness of the structural wall to be controlled without surface modification. When uniform carbon spheres composed of a graphene oxide wall and a poly(methyl methacrylate) (PMMA) core are prepared by the LbL method, the reduction treatment becomes an important consideration. Since graphene oxide has poor electron conductivity owing to the many functional groups formed on its surface, it is difficult to apply to the electrodes of secondary batteries and the catalyst supports of fuel cells. In this study, the graphene oxide wall of the carbon spheres was reduced by thermal treatment under vacuum conditions, and its crystalline structure and electronic state were characterized. Scanning electron microscope images of the carbon spheres after heat treatment at 300ºC showed that the spherical shape was maintained, but the shape collapsed with increasing heating temperature. At the same time, the dissolution rate of the PMMA core and the reduction rate of the graphene oxide increased in proportion to the heating temperature. In contrast, extending the heating time was conducive to the conservation of the sphere shape. X-ray photoelectron spectroscopy analysis indicated that the electronic state of the surface was mainly sp² carbon. From the above results, we succeeded in the synthesis of a sphere structure composed of reduced graphene oxide.

Keywords: carbon sphere, graphene oxide, reduction, layer by layer

Procedia PDF Downloads 118
433 Modification of Hexagonal Boron Nitride Induced by Focused Laser Beam

Authors: I. Wlasny, Z. Klusek, A. Wysmolek

Abstract:

Hexagonal boron nitride is a representative of the widely popular class of two-dimensional van der Waals materials. It finds its uses, among others, in the construction of complex layered heterostructures. Hexagonal boron nitride attracts great interest because of its properties, characteristic of wide-gap semiconductors, as well as its ultra-flat surface. Van der Waals heterostructures composed of two-dimensional layered materials, such as transition metal dichalcogenides or graphene, give hope for the miniaturization of various electronic and optoelectronic elements. In our presentation, we will show the results of our investigation of a previously unreported modification of hexagonal boron nitride layers with a focused laser beam. The electrostatic force microscopy (EFM) images reveal that the irradiation leads to changes in the local electric fields for a wide range of laser wavelengths (from 442 to 785 nm). These changes are also accompanied by alterations of the crystallographic structure of the material, as reflected in the Raman spectra. They exhibit high stability and remain visible after at least five months. This behavior can be explained in terms of the photoionization of defect centers in h-BN, which influences the non-uniform screening of electrostatic fields by the photo-excited charge carriers. The analyzed changes influence the local defect structure, and thus the interatomic distances within the lattice. These effects can be amplified by the piezoelectric character of hexagonal boron nitride, similar to that found in other nitrides (e.g., GaN, AlN). Our results shed new light on the optical properties of hexagonal boron nitride, in particular those associated with electron-phonon coupling. Our study also opens new possibilities for h-BN applications in layered heterostructures, where electrostatic fields can be used for tailoring the local properties of the structures for use in micro- and nanoelectronics or field-controlled memory storage. This work is supported by a National Science Centre project granted on the basis of decision number DEC-2015/16/S/ST3/00451.

Keywords: atomic force microscopy, hexagonal boron nitride, optical properties, Raman spectroscopy

Procedia PDF Downloads 147
432 Turkey in Minds: Cognitive and Social Representation of "East" and "West"

Authors: Feyzan Tuzkaya, Nihan S. Soylu, Caglar Solak, Mehmet Peker, Hilal Peker, Kemal Ozeralp, Ceren Mete, Ezgi Mehmetoglu, Mehmet Karasu, Cihan Elci, Ece Akca, Melek Goregenli

Abstract:

Perception, evaluation and representation of the environment have been the subject of many disciplines, including psychology, geography and architecture. In the environmental and social psychology literature, there is considerable evidence suggesting that cognitive representations of a place consist not only of geographic items but also of social and cultural ones. Mental representations of a residential area or a country are influenced and determined by socio-demographics and the physical and social context. Thus, all mental representations of a given place are also social representations. Cognitive maps are the main and most common instruments used to identify spatial images and the difference between physical and subjective environments. The aim of the current study is to investigate the mental and social representations of Turkey in university students' minds. Data were collected from 249 university students from different departments (i.e. the psychology, geography, history, and tourism departments) of Ege University. Participants were asked to reflect the Turkey in their minds onto paper by drawing sketch maps. According to the results, the cognitive maps showed geographic aspects of Turkey as well as the symbolic, cultural and political reality of the country. That is to say, these maps contained many symbolic and verbal items related to criticism of social and cultural problems, ongoing ethnic and political conflicts, and the current political agenda of Turkey. Additionally, one of the main differentiations in these representations appeared between the East and the West of Turkey, and the representations of the East and the West varied according to participants' cultural background, their ethnic values, and where they were born. The results of the study are discussed from an environmental and social psychological perspective, considering the cultural and social values of Turkey and the current political circumstances of the country.

Keywords: cognitive maps, East, West, politics, social representations, Turkey

Procedia PDF Downloads 373
431 Nuclear Energy: The Reorientations of the French Public Perception

Authors: Aurélia Jandot

Abstract:

With the oil and economic crises which began in the 1970s, it progressively appeared necessary to convince the French 'general public' that the use of new energy sources was essential. In this field, nuclear energy represented the future and concentrated many hopes. However, the discourse about nuclear energy has progressively seen negative arguments grow in the French media. The gradual changes in the perception of nuclear energy are studied here through the arguments given in the main French weekly newsmagazines, which had a great impact on their readers, and thus on the 'general public', from the 1970s to the end of the 1980s. Indeed, to better understand these changes, the major international events, the reorientations of French domestic policy, and the evolution of nuclear technology are taken into account. As this represents a considerable number of copies, and thus of information, the articles selected here are the main ones which emphasize the 'mental images' aiming to direct the thinking of the readers, and which have led public awareness and acceptance to evolve. From the 1970s to the end of the 1980s, two dichotomous trends were in confrontation: one promoting the perception of nuclear energy, the other discrediting it. Moreover, these two trends are organized along two axes. The first axis concerns engineering developments as the main French media represented them, with their approximations, their exaggerations, and sometimes their fictions. To this is added the will to make accessible to the 'general public' some concepts which are quite difficult for most people to understand. The second axis rests on the way the major accidents of the period were approached, including those of Three Mile Island and Chernobyl. Following these accidents, and because of evolutions in international relations, the ecologist movements and their impact progressively grew, with evident consequences for the public perception of nuclear energy and for the way successive governments could implement new power plants in France. In both cases, over the period considered, the language changed, as did the perceptible objectives of the communication, allowing the deeper intentions of the newsmagazines' editing to be discerned. It is all these changes that are emphasized here, over a period in which nuclear energy technology, until then a field for specialists, bearing mystery and secrecy, became a social issue seemingly open to all.

Keywords: social issues, public acceptance, mediatization, discourse changes

Procedia PDF Downloads 261
430 Assessment of Natural Flood Management Potential of Sheffield Lakeland to Flood Risks Using GIS: A Case Study of Selected Farms on the Upper Don Catchment

Authors: Samuel Olajide Babawale, Jonathan Bridge

Abstract:

Natural Flood Management (NFM) is promoted as part of sustainable flood management (SFM) in response to climate change adaptation. Stakeholder engagement is central to this approach, and current trends are progressively moving towards a collaborative learning approach where stakeholder participation is perceived as one of the indicators of sustainable development. Within this methodology, participation embraces a diversity of knowledge and values, underpinned by a philosophy of empowerment, equity, trust, and learning. To identify barriers to NFM uptake, there is a need for a new understanding of how stakeholder participation could be enhanced to benefit individual and community resilience within SFM. This is crucial in light of climate change threats and concerns over scientific reliability. In contributing to this new understanding, this research evaluated the proposed interventions at six (6) UK NFM sites in a catchment known as the Sheffield Lakeland Partnership Area, with reference to the Environment Agency Working with Natural Processes (WWNP) Potentials/Opportunities. Three of the opportunities, namely Run-off Attenuation Potential of 1%, Run-off Attenuation Potential of 3.3% and Riparian Woodland Potential, were modelled. In all the models, the interventions, though they have been proposed or are already in place, are not in agreement with the data presented by the EA WWNP. Findings show some institutional weaknesses, which are seen to inhibit the development of adequate flood management solutions locally, with damaging implications for vulnerable communities. The gap in communication from practitioners poses a challenge to the implementation of real flood-mitigating measures that align with the lead agency's nationally accepted measures, which are identified as not feasible by the farm management officers within this context. Findings highlight a dominant top-down approach to management with very minimal indication of local interactions. Current WWNP opportunities have been termed unrealistic by the people directly involved in the daily management of the farms, with less emphasis on prevention and mitigation. The targeted approach suggested by the EA WWNP is set against adaptive flood management and community development. The study explores dimensions of participation using the self-reliance and self-help approach to develop a methodology that facilitates reflection on currently institutionalized practices and the need to reshape spaces of interaction to enable empowered and meaningful participation. Stakeholder engagement and resilience planning underpin this research. The findings of the study suggest that different agencies have different perspectives on 'community participation'. They also show that communities in the case study area appear to be the least influential, denied a real chance of discussing their situations and influencing decisions. This is against the background that these communities are in the most productive regions, contributing massively to national food supplies. The results are discussed with regard to practical implications for addressing interagency partnerships and conducting grassroots collaborations that empower local communities and seek solutions to sustainable development challenges. This study takes a critical look at the challenges and progress made locally in sustainable flood risk management and adaptation to climate change by the United Kingdom towards achieving the global 2030 agenda for sustainable development.

Keywords: natural flood management, sustainable flood management, sustainable development, working with natural processes, environment agency, run-off attenuation potential, climate change

Procedia PDF Downloads 48
429 Improving Paper Mechanical Properties and Printing Quality by Using Carboxymethyl Cellulose as a Strength Agent

Authors: G. N. Simonian, R. F. Basalah, F. T. Abd El Halim, F. F. Abd El Latif, A. M. Adel, A. M. El Shafey

Abstract:

Carboxymethyl cellulose (CMC) is an anionic, water-soluble polymer that has been introduced in paper coating as a strength agent. One of the main objectives of this research is to investigate the influence of CMC concentration on the strength properties of paper fiber. In this work, Xerox paper sheets were coated with carboxymethyl cellulose solutions of different concentrations (0.1, 0.5, 1, 1.5, 2, and 3% w/v). The mechanical properties, breaking length and tearing resistance (tear factor), were measured for the treated and untreated paper specimens. The retained polymer in the coated paper samples was also calculated. As the concentration of the coating material (CMC) increases, the mechanical properties, breaking length and tear factor, increase as well. It can be concluded that CMC improves the mechanical properties of the paper sheets, resulting in increased paper stability. A further aim of the present research was to study the effects on the vessel element structure and the vessel picking tendency of the coated paper sheets. In addition to the improved strength properties of the treated sheets, a significant decrease in vessel picking tendency was expected: whereas refining of the original (untreated) paper sheets improved mainly the bonding ability of fibers, CMC effectively enhanced the bonding of vessels as well. Moreover, film structures were formed in the fibrillated areas of the coated paper specimens and are concluded to reinforce the bonding within the sheet. Fragmentation of vessel elements through CMC modification was also found to be important, resulting in a decreased picking tendency, which is reflected in good printability. Scanning electron microscope (SEM) images are presented to explain the improved bonding ability of vessels and fibers after CMC modification. Finally, CMC modification enhances the mechanical properties and print quality of paper.
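
The abstract notes that the retained polymer in the coated sheets was calculated; the sketch below shows one common way such a pick-up figure can be expressed, as a percentage of the base sheet weight, from conditioned sheet weights before and after coating. The weight values and the specific formula are illustrative assumptions, not the authors' reported data or procedure.

```python
# Minimal sketch: estimating retained CMC (coating pick-up) from sheet weights.
# The weights below are illustrative placeholders, not measured data.

def retained_polymer_percent(weight_before_g: float, weight_after_g: float) -> float:
    """Pick-up expressed as a percentage of the uncoated (base) sheet weight."""
    return 100.0 * (weight_after_g - weight_before_g) / weight_before_g

# Hypothetical conditioned weights of one sheet before and after coating and drying
base_weight = 4.99    # g, uncoated Xerox sheet (assumed)
coated_weight = 5.11  # g, after CMC coating and drying (assumed)

print(f"Retained polymer: {retained_polymer_percent(base_weight, coated_weight):.1f} %")
```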

Keywords: carboxymethyl cellulose (CMC), breaking length, tear factor, vessel picking, printing, concentration

Procedia PDF Downloads 391
428 Geospatial Assessments on Impacts of Land Use Changes and Climate Change in Nigeria Forest Ecosystems

Authors: Samuel O. Akande

Abstract:

Human-induced climate change is likely to have severe consequences for forest ecosystems in Nigeria. Recent discussions of, and emphasis on, environmental issues justify the need for this research, which examined deforestation monitoring in Oban Forest, Nigeria, using remote sensing techniques. Landsat images from the TM (1986), ETM+ (2001), and OLI (2015) sensors were obtained from the Landsat online archive and processed using Erdas Imagine 2014 and ArcGIS 10.3 to obtain land use/land cover and Normalized Difference Vegetation Index (NDVI) values. Ground control points of deforested areas were collected for validation. It was observed that the forest cover decreased in area by about 689.14 km² between 1986 and 2015. The NDVI was used to determine the vegetation health of the forest and its implications for agricultural sustainability. The result showed that the total percentage of healthy forest cover was reduced to about 45.9% from 1986 to 2015. The results obtained from the analysed questionnaires showed a positive correlation between the causes and effects of deforestation in the study area. The coefficient of determination (R² ≥ 0.7) was calculated to ascertain the extent to which anthropogenic activities, such as fuelwood harvesting, intensive farming, logging, urbanization, and engineering construction, were responsible for deforestation in the study area. Similarly, temperature and rainfall data for the period 1986 to 2015 were obtained from the Nigerian Meteorological Agency (NIMET) for the study area. A significant increase in temperature was observed, while rainfall decreased over the study area. Responses from the administered questionnaires also showed that destruction of the forest ecosystem in Oban Forest could be reduced to the barest minimum if fuelwood harvesting were disallowed. Thus, the projected impacts of climate change on Nigeria’s forest ecosystems and environmental stability are better imagined than experienced.
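
As an illustration of the image-processing step described above, the following sketch computes NDVI from the red and near-infrared bands of a Landsat scene and estimates the area classified as healthy vegetation. It assumes the bands have already been read into NumPy arrays (for example with rasterio) and a 30 m pixel size; the 0.4 NDVI threshold and the synthetic reflectance values are illustrative assumptions, not the thresholds or data used in the study.

```python
import numpy as np

# Assumes `red` and `nir` are float arrays of surface reflectance for one scene,
# e.g. read with rasterio from Landsat TM/ETM+/OLI bands (band numbering differs by sensor).

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index, NaN where both bands are zero."""
    denom = nir + red
    return np.where(denom != 0, (nir - red) / denom, np.nan)

def vegetated_area_km2(ndvi_img: np.ndarray, threshold: float = 0.4,
                       pixel_size_m: float = 30.0) -> float:
    """Area of pixels above an NDVI threshold (threshold value is an assumption)."""
    n_pixels = np.nansum(ndvi_img > threshold)
    return n_pixels * (pixel_size_m ** 2) / 1e6

# Example with synthetic reflectance values in [0, 1]
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.25, size=(500, 500))
nir = rng.uniform(0.10, 0.60, size=(500, 500))
print(f"Healthy vegetation area: {vegetated_area_km2(ndvi(nir, red)):.1f} km²")
```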

Keywords: deforestation, ecosystems, normalized difference vegetation index, sustainability

Procedia PDF Downloads 164
427 A Methodological Approach to the Betterment of the Retail Store's Interior Design: The Example of Dereboyu Street, Nicosia

Authors: Nazanin Reza Nejad, Kamil Guley

Abstract:

Shopping is one of the most entertaining activities of daily life. In parallel to this, a successful store setting impresses customers and makes the store more appealing to its users. The design of the atmosphere is the language of the interior space, and this design directly affects users’ emotions and perceptions. One of the goals of interior design is to increase the quality of the designed space. A well-designed venue satisfies the user and ensures happiness and safety, turning customers into frequent users of the store. Spaces without the right design negatively influence the user. Appropriate interior design of stores therefore becomes crucial. This study aims to act as a guideline for the betterment of the interior design of a newly designed or already existing clothing store located on the shopping streets of a city. In light of the relevant literature review, the most important point in interior store design is the set of design and ambiance factors and how these factors are used in the interior space of the store. Within the scope of this study, 27 clothing stores located on Dereboyu, the largest shopping street in Nicosia, the capital of North Cyprus, were examined. The examined stores were grouped as brand stores and non-brand stores selling products from different production sites. Observations regarding the interiors of the selected stores were analyzed through qualitative and quantitative research methods. The arrangements of the sub-functions in the stores were analyzed through various reading methods applied to plan schemes and recorded images. The sub-functions of all examined stores were compared against the ambiance and design factors in the literature, and the results were interpreted accordingly. At the end of the study, the differences between stores that belong to a brand with an established identity and stores that have not yet established an identity were identified and compared. The results of the comparisons were used to offer implications for the betterment of the interior design of a future or already existing store on the street. Thus, the study is intended as a guideline for people interested in interior store design.

Keywords: atmosphere, ambiance factors, clothing store, identity, interior design

Procedia PDF Downloads 166
426 Traumatic Spinal Cord Injury: Incidence, Prognosis and the Time-Course of Clinical Outcomes – A 12-Year Review from a Tertiary Hospital in Korea

Authors: Jeounghee Kim

Abstract:

Objective: To describe the incidence of complications according to the stage of traumatic spinal cord injury (TSCI) treated at Asan Medical Center (AMC), Korea, with a view to developing a nursing management protocol for traumatic SCI. Methods: Hospital records of patients with traumatic spinal cord injury admitted to AMC between January 2005 and December 2016 were retrospectively reviewed and analyzed (n=97). AMC is a single institution of 2,700 beds where patients with trauma, including severe trauma, can be treated. Included patients were admitted to the emergency room due to spinal cord injury and subsequently passed through the intensive care unit, general ward, and rehabilitation ward. To identify long-term complications, patients transferred to other hospitals after surgery were excluded. Complications of the respiratory (pneumonia, atelectasis, pulmonary embolism, and others), cardiovascular (hypotension), urinary (autonomic dysreflexia, urinary tract infection (UTI), neurogenic bladder, and others), and skin systems (pressure ulcers) from the time of admission were examined through medical records and images. Results: SCI was graded according to the ASIA scale. At admission, the initial grades were grade A 55 (56.7%), grade B 14 (14.4%), grade C 11 (11.3%), grade D 15 (15.5%), and grade E 2 (2.1%). At discharge after treatment, the grades were grade A 43 (44.3%), grade B 15 (15.5%), grade C 12 (12.4%), grade D 21 (21.6%), and grade E 6 (6.2%). The most common complications after SCI were UTI, 24 cases (mean onset 36.5 days), pressure sores, 24 cases (40.5 days), and pneumonia, 23 cases (after 10 days on average). Other complications after SCI were neuropathic pain (19 cases) and surgical site infection (4 cases). 53.6% of patients with SCI were educated about intermittent catheterization at discharge from hospital. The mean hospital stay of all SCI patients was 61 days. Conclusion: Complications after traumatic SCI develop at various stages, from the acute to the chronic phase. Nurses need to fully understand the time course of complications in traumatic SCI to provide evidence-based practice.
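
The incidence figures above (number of cases and mean day of onset per complication type) lend themselves to a simple tabulation. The sketch below shows one way such a summary could be produced from patient-level records with pandas; the column names and example rows are hypothetical and do not reflect the hospital's actual data structure.

```python
import pandas as pd

# Hypothetical patient-level complication records (not the actual AMC dataset):
# one row per complication event, with day of onset counted from admission.
records = pd.DataFrame({
    "patient_id":   [1, 1, 2, 3, 3, 4],
    "complication": ["UTI", "pressure sore", "pneumonia", "UTI", "pneumonia", "neuropathic pain"],
    "onset_day":    [30, 45, 9, 41, 12, 60],
})

# Incidence (number of cases) and mean time to onset per complication type
summary = (records.groupby("complication")["onset_day"]
           .agg(cases="count", mean_onset_day="mean")
           .sort_values("cases", ascending=False))
print(summary)
```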

Keywords: spinal cord injury, complication, nursing, rehabilitation

Procedia PDF Downloads 189
425 An Empirical Study of the Moderation Effects of Commitment, Trust, and Relationship Value in the Relation of Goods and Services Related to Business to Business Brand Images on Customer Loyalty

Authors: Jorge Luis Morales Romero, Enrique Murillo Othón

Abstract:

Business-to-business (B2B) relationships generally go beyond a purely profit-based result, with firms seeking to maintain a relationship for many years because a breakup or switching to a new supplier can be very costly. Therefore, identifying the factors that determine a successful long-term relationship is of great interest to companies. A firm’s reputation and the brand image that customers have of it are among the main factors that can sustain a successful relationship, because of the positive effect they have on customer loyalty. Additionally, the perception a customer has of a brand differs depending on whether it is related to goods or to services; customers form their own brand image based on the past experiences they have had, and a positive relationship is thus established between goods-related brand image, service-related brand image, and customer loyalty. The present investigation examines the boundary conditions of this relationship by testing the moderating effects of trust, commitment, and relationship value in a B2B environment. Each variable was tested independently as a moderator for the service-related brand image/loyalty relationship and for the goods-related brand image/loyalty relationship, as these are assumed to be separate variables. Survey data were collected through interviews with customers that have both a product-buying relationship and a service relationship with a global B2B brand of healthcare equipment operating in the Mexican healthcare market. Interviewed respondents were the user, the purchasing manager, and/or the person responsible for equipment maintenance in the customer organization; hence, they were appropriate informants regarding the B2B relationship with this healthcare brand. The moderation models were estimated using the PROCESS macro for the Statistical Package for the Social Sciences (SPSS). Results show statistical evidence that both relationship value and trust are significant moderators of the service-related brand image/loyalty relationship but not of the goods-related brand image/loyalty relationship. Commitment, on the other hand, is a significant moderator of the goods-related brand image/loyalty relationship but not of the service-related brand image/loyalty relationship.
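
The moderation models were estimated with the PROCESS macro in SPSS; as a rough, non-authoritative equivalent, a simple moderation (interaction) model of the kind PROCESS Model 1 estimates can be sketched in Python with statsmodels, regressing loyalty on a brand-image measure, a moderator, and their product term with mean-centered predictors. The variable names and the simulated data below are illustrative assumptions only, not the study's measures or dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey data with hypothetical variable names (not the study's dataset)
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "service_brand_image": rng.normal(5, 1, n),  # e.g. a 1-7 Likert composite
    "trust":               rng.normal(5, 1, n),
})
df["loyalty"] = (0.4 * df["service_brand_image"] + 0.3 * df["trust"]
                 + 0.2 * df["service_brand_image"] * df["trust"]
                 + rng.normal(0, 1, n))

# Mean-center the predictors, then estimate the interaction (moderation) model
for col in ["service_brand_image", "trust"]:
    df[col + "_c"] = df[col] - df[col].mean()

model = smf.ols("loyalty ~ service_brand_image_c * trust_c", data=df).fit()
print(model.summary().tables[1])  # the interaction term tests moderation
```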

Keywords: commitment, trust, relationship value, loyalty, B2B, moderator

Procedia PDF Downloads 58
424 Numerical Analysis of Mandible Fracture Stabilization System

Authors: Piotr Wadolowski, Grzegorz Krzesinski, Piotr Gutowski

Abstract:

The aim of the presented work is to determine the impact of the mini-plate application approach on the stress and displacement within the stabilization devices and the surrounding bones. The mini-plate osteosynthesis technique is widely used by craniofacial surgeons as an improved replacement for the wire connection approach. Many different types of metal plates and screws are used for the physical connection of fractured bones. The investigation below is based on the clinical observation of a patient hospitalized with a mini-plate stabilization system. The analysis was conducted on a solid mandible geometry, modeled on the basis of a computed tomography scan of the hospitalized patient. In order to achieve the most realistic behavior of the connected system, cortical and cancellous bone layers were assumed. The temporomandibular joint was simplified to an elastic element to allow physiological movement of the loaded bone. The muscles of the mastication system were reduced to three pairs, modeled as shell structures. The finite element mesh was created in the ANSYS software, using hexahedral and tetrahedral variants of the SOLID185 element. A set of nonlinear contact conditions was applied to the common surfaces of the connecting devices and bone. The properties of a particular contact pair depend on the screw–mini-plate connection type and on possible gaps between the fractured bone surfaces around the osteosynthesis region. Some of the investigated cases contain a prestress introduced into the mini-plate during application, which corresponds to the initial bending of the connecting device to fit the retromolar fossa region. The assumed bone fracture occurs within the mandible angle zone. Due to the significant deformation of the connecting plate in some of the assembly cases, an elastic-plastic model of the titanium alloy was assumed. The bone tissues were described by an orthotropic material model. The loading was a gauge force of 100 N applied at three different locations. The conducted analysis shows a significant impact of the mini-plate application methodology on the stress distribution within the mini-plate. The prestress introduces additional loading, which locally exceeds the titanium alloy yield limit. Stress in the surrounding bone increases rapidly around the screw application region, exceeding the assumed bone yield limit, which indicates local bone destruction. The approach with a doubled mini-plate shows increased stress within the connector due to an overly rigid connection, in which the main load path passes through the mini-plates instead of being shared between the plates and the connected bones. Clinical observations confirm more frequent plate failure in stiffer connections; some of these failures could be an effect of decreased low-cycle fatigue capability caused by overloading. The executed analysis proves that the mini-plate system provides sufficient support for mandible fracture treatment; however, many applicable solutions push the entire system towards the allowable material limits. The results show that connector application with initial loading needs to be carefully established due to the small material capability tolerances. Comparison with the clinical observations allows the entire connection to be optimized to prevent future incidents.
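
The abstract repeatedly compares computed stresses against the titanium alloy yield limit; as a small illustration of such a check outside ANSYS, the sketch below evaluates the von Mises equivalent stress from a stress tensor and compares it with an assumed yield strength. The stress components and the yield value (typical of a Ti-6Al-4V alloy) are illustrative assumptions, not results from the study.

```python
import numpy as np

def von_mises(s: np.ndarray) -> float:
    """Von Mises equivalent stress from a symmetric 3x3 Cauchy stress tensor [MPa]."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviatoric part of the tensor
    return float(np.sqrt(1.5 * np.sum(dev * dev)))

# Hypothetical stress state near a screw hole in the mini-plate (MPa), not a computed result
sigma = np.array([[620.0, 150.0,  40.0],
                  [150.0, 310.0,  25.0],
                  [ 40.0,  25.0,  90.0]])

yield_limit = 880.0  # MPa, assumed yield strength of a typical Ti-6Al-4V alloy
sv = von_mises(sigma)
print(f"von Mises stress: {sv:.0f} MPa -> "
      f"{'exceeds' if sv > yield_limit else 'below'} the assumed yield limit")
```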

Keywords: mandible fracture, mini-plate connection, numerical analysis, osteosynthesis

Procedia PDF Downloads 246
423 The Layout Analysis of Handwriting Characters and the Fusion of Multi-style Ancient Books’ Background

Authors: Yaolin Tian, Shanxiong Chen, Fujia Zhao, Xiaoyu Lin, Hailing Xiong

Abstract:

Ancient books are significant carriers of culture, and their background textures convey potential historical information. However, multi-style texture recovery for ancient books has received little attention. Restricted by the scarcity of ancient textures and the complexity of the handling process, the generation of ancient textures faces new challenges. For instance, training without sufficient data usually brings about overfitting or mode collapse, so some of the outputs tend to look fake. Recently, image generation and style transfer based on deep learning have been widely applied in computer vision, and breakthroughs within the field make it possible to conduct research on multi-style texture recovery for ancient books. Under these circumstances, we propose a layout analysis and image fusion system. First, we trained models using Deep Convolutional Generative Adversarial Networks (DCGAN) to synthesize multi-style ancient textures; then, we analyzed layouts based on the Position Rearrangement (PR) algorithm that we propose to adjust the layout structure of the foreground content; finally, we achieved our goal by fusing the rearranged foreground texts with the generated background. In the experiments, diversified samples such as ancient Yi, Jurchen, and seal script were selected as training sets. The performance of different fine-tuned models was then gradually improved by adjusting the parameters and structure of the DCGAN model. In order to evaluate the results scientifically, the cross-entropy loss function and the Fréchet Inception Distance (FID) were selected as assessment criteria. Eventually, we obtained model M8 with the lowest FID score. Compared with the DCGAN model proposed by Radford et al., the FID score of M8 improved by 19.26%, profoundly enhancing the quality of the synthetic images.
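
To make the texture-synthesis step more concrete, below is a minimal DCGAN-style generator in the spirit of Radford et al., sketched in PyTorch. The layer widths, latent dimension, and 64×64 output resolution are common defaults assumed here for illustration; they are not the configuration of the authors' model M8.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Minimal DCGAN generator: latent vector -> 3x64x64 texture image."""
    def __init__(self, latent_dim: int = 100, feat: int = 64, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 -> (feat*8) x 4 x 4, then upsample to 64 x 64
            nn.ConvTranspose2d(latent_dim, feat * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feat * 8), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 4), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat * 2), nn.ReLU(True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feat), nn.ReLU(True),
            nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1], as in the original DCGAN setup
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Sample a batch of synthetic background textures from random latent vectors
gen = DCGANGenerator()
z = torch.randn(8, 100, 1, 1)
fake_textures = gen(z)  # shape: (8, 3, 64, 64)
print(fake_textures.shape)
```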

Keywords: deep learning, image fusion, image generation, layout analysis

Procedia PDF Downloads 111
422 Electroencephalography Correlates of Memorability While Viewing Advertising Content

Authors: Victor N. Anisimov, Igor E. Serov, Ksenia M. Kolkova, Natalia V. Galkina

Abstract:

The problem of the memorability of advertising content is closely connected with key issues in neuromarketing: the memorability of advertising content contributes to the marketing effectiveness of the promoted product. Significant directions for studying the phenomenon of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through the memorization of dynamic audiovisual advertising content, i.e., commercials). The aim of this work is to reveal the predictors of memorization of static and dynamic audiovisual stimuli (logos and commercials). An important direction of the research was to reveal differences in the psychophysiological correlates of memorability between static and dynamic audiovisual stimuli. We assumed that static and dynamic images are perceived in different ways and may differ in the memorization process. Objective methods of recording psychophysiological parameters while watching static and dynamic audiovisual materials are well suited to achieving this aim. The electroencephalography (EEG) method was used to identify correlates of the memorability of the various stimuli in the electrical activity of the cerebral cortex. All stimuli (in the static and dynamic groups separately) were divided into two groups, remembered and not remembered, based on the results of a questionnaire. The questionnaires were filled out by the participants not immediately after viewing the stimuli but after a time interval (to detect stimuli retained through long-term memorization). Using statistical methods, we developed a classifier (statistical model) that predicts which group (remembered or not remembered) a stimulus falls into, based on the psychophysiology of its perception. The output of the statistical model was compared with the results of the questionnaire. Conclusions: Predictors of the memorability of static and dynamic stimuli have been identified, which makes it possible to predict which stimuli will have a higher probability of being remembered. A further development of this study will be the creation of a stimulus memory model able to recognize a stimulus as previously seen or new. Thus, in the process of remembering a stimulus, it is planned to take into account the stimulus recognition factor, which is one of the most important tasks for neuromarketing.
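
As a hedged illustration of how such a remembered/not-remembered classifier might be built from EEG-derived features, the sketch below uses scikit-learn with cross-validated logistic regression. The feature layout (e.g., band power per electrode) and the simulated data are assumptions for illustration; the authors' actual features and statistical model are not described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated dataset: one row per stimulus presentation; columns are hypothetical
# EEG features (e.g., alpha/theta band power per electrode); label 1 = remembered.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 32
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)

# Standardize features, then fit a regularized logistic-regression classifier
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated ROC AUC: {scores.mean():.2f} ± {scores.std():.2f}")
```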

Keywords: memory, commercials, neuromarketing, EEG, branding

Procedia PDF Downloads 227
421 Gold-Bearing Alteration Zones in South Eastern Desert of Egypt: Geology and Remote Sensing Analysis

Authors: Mohamed F. Sadek, Safaa M. Hassan, Safwat S. Gabr

Abstract:

Several alteration zones hosting gold mineralization are widespread in the South Eastern Desert of Egypt, where gold has been mined from many localities since the time of the Pharaohs. Sukkari is currently the only mine producing gold in the Eastern Desert of Egypt. It is therefore necessary to conduct more detailed studies of these locations using modern exploratory methods. Remote sensing plays an important role in lithological mapping and in the detection of associated hydrothermal mineralization, particularly in the exploration for gold. This study focuses on three localities in the South Eastern Desert of Egypt, namely Beida, Defiet, and Hoteib-Eiqat, aiming to detect the gold-bearing hydrothermal alteration zones using integrated remote sensing data, field study, and mineralogical investigation. These areas are generally dominated by Precambrian basement rocks, including metamorphic and magmatic assemblages. They comprise ophiolitic serpentinite-talc carbonate and island-arc metavolcanics, which were intruded by syn- to late-orogenic mafic and felsic intrusions, mainly gabbro, granodiorite, and monzogranite. Processed Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Landsat-8 images are used in the present study to map the gold-bearing hydrothermal alteration zones. Band ratioing and principal component analysis techniques are used to discriminate the different lithologic units exposed in the three studied areas. Field study and mineralogical investigation were used to verify the remote sensing data. This study concludes that integrating remote sensing data with geological, field, and mineralogical investigations is very effective for lithological discrimination, detailed geological mapping, and detection of the gold-bearing hydrothermal alteration zones. More detailed exploration for gold mineralization with the help of remote sensing techniques is recommended to evaluate the potential of the study areas.
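
Band ratioing and principal component analysis are standard image-processing operations; a minimal sketch of both applied to a multiband reflectance array is given below. The specific band combinations used for alteration mapping vary by sensor and target mineralogy, so the ratio shown (ASTER band 4 / band 6, often associated with Al-OH alteration minerals), the band positions in the stack, and the synthetic data are illustrative assumptions rather than the exact ratios used in this study.

```python
import numpy as np

# Synthetic stack of SWIR reflectance bands (rows, cols, bands) for illustration;
# in practice these would be read from a georeferenced image (e.g., with rasterio).
rng = np.random.default_rng(0)
rows, cols, n_bands = 200, 200, 6
cube = rng.uniform(0.05, 0.5, size=(rows, cols, n_bands))

# Band ratioing: e.g., ASTER band 4 / band 6, commonly linked to Al-OH alteration minerals;
# the band positions within this synthetic stack are assumed.
band4, band6 = cube[..., 0], cube[..., 2]
ratio_46 = np.where(band6 != 0, band4 / band6, np.nan)

# Principal component analysis over all bands (pixels as observations)
pixels = cube.reshape(-1, n_bands)
pixels_centered = pixels - pixels.mean(axis=0)
cov = np.cov(pixels_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
pcs = pixels_centered @ eigvecs[:, ::-1]      # principal component scores, PC1 first
pc_images = pcs.reshape(rows, cols, n_bands)
print(f"Variance explained by PC1: {eigvals[-1] / eigvals.sum():.1%}")
```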

Keywords: pan-african, Egypt, landsat-8, ASTER, gold, alteration zones

Procedia PDF Downloads 96