Search results for: link data
21236 The Hallmarks of War Propaganda: The Case of Russia-Ukraine Conflict
Authors: Veronika Solopova, Oana-Iuliana Popescu, Tim Landgraf, Christoph Benzmüller
Abstract:
Beginning in 2014, slowly building geopolitical tensions in Eastern Europe led to a full-blown conflict between the Russian Federation and Ukraine that generated an unprecedented amount of news articles and social media data, reflecting the opposing ideologies and narratives as both the background and the essence of the ongoing war. These polarized informational campaigns have led to countless mutual accusations of misinformation and fake news, shaping an atmosphere of confusion and mistrust for many readers all over the world. In this study, we analyzed scraped news articles from Ukrainian, Russian, Romanian and English-speaking news outlets on the eve of the 24th of February 2022, compared to day five of the conflict (the 28th of February), to see how the media influenced and mirrored the changes in public opinion. We also contrast the sources opposing and supporting the stance of the Russian government in the Ukrainian, Russian and Romanian media spaces. In a data-driven way, we describe how the narratives spread throughout Eastern and Central Europe. We present predictive linguistic features surrounding war propaganda. Our results indicate that there are strong similarities in the rhetorical strategies of the pro-Kremlin media in both Ukraine and Russia, which, while being relatively neutral in surface structure, use aggressive vocabulary. This suggests that automatic propaganda identification systems have to be tailored for each new case, as they have to rely on situationally specific words. Both Ukrainian and Russian outlets lean towards strongly opinionated news, pointing towards the use of war propaganda to achieve strategic goals.
Keywords: linguistic, news, propaganda, Russia, Ukraine
21235 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. A constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts were reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the iMSD sensitivity to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as ‘blocked’ (i.e., trajectories corresponding to immobile granules), ‘confined/diffusive’ (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or ‘drifted’ (i.e., trajectories corresponding to fast-moving granules). In cell-culturing control conditions, the results show the following average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7% while immobile granules increase to 56.0 ± 10.7% (with the remaining 40.4 ± 10.2% confined/diffusive). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
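As an illustration of the SPT classification step described in this abstract, the sketch below labels a trajectory from the slope (anomalous exponent α) of its time-averaged MSD on a log-log scale. This is a minimal sketch only: the α thresholds and the frame interval dt are illustrative assumptions, not the criteria implemented in the authors' MATLAB script.

```python
import numpy as np

def msd(traj):
    """Time-averaged mean square displacement of an (N, 2) trajectory."""
    n = len(traj)
    lags = np.arange(1, n // 2)
    return lags, np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1)) for lag in lags
    ])

def classify(traj, dt=0.1):
    """Label a trajectory by the exponent alpha of MSD(tau) ~ tau^alpha."""
    lags, m = msd(traj)
    alpha = np.polyfit(np.log(lags * dt), np.log(m + 1e-12), 1)[0]
    if alpha < 0.3:           # essentially immobile
        return "blocked"
    if alpha < 1.4:           # sub- to normally diffusive
        return "confined/diffusive"
    return "drifted"          # superdiffusive, i.e. directed transport
```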
Procedia PDF Downloads 11321234 Theoretical Framework for Value Creation in Project Oriented Companies
Authors: Mariusz Hofman
Abstract:
The paper ‘Theoretical framework for value creation in Project-Oriented Companies’ is designed to determine how organisations create value and whether this allows them to achieve market success. An assumption has been made that there are two routes to achieving this value. The first is to create intangible assets (i.e. the resources of human, structural and relational capital), while the other is to create added value (understood as the surplus of revenue over costs). It has also been assumed that the combination of the achieved added value and unique intangible assets translates into the success of a project-oriented company. The purpose of the paper is to present a hypothetico-deductive model describing the modus operandi of such companies and an approach to its operationalisation. All the latent variables included in the model are theoretical constructs with observational indicators (measures). The existence of the latent variables (constructs) and of the submodels will be confirmed based on a covariance matrix, which in turn is based on empirical data, being a set of observational indicators (measures). This will be achieved with a confirmatory factor analysis (CFA). Through this statistical procedure, it will be verified whether the matrix arising from the adopted theoretical model differs statistically from the empirical covariance matrix arising from the system of equations. How well the theoretical model fits the empirical data will be assessed through a number of indicators: χ², RMSEA and the CFI (Comparative Fit Index). If the theoretical conjectures are confirmed, an interesting development path can be defined for project-oriented companies. This will let such organisations perform efficiently in the face of growing competition and pressure to innovate.
Keywords: value creation, project-oriented company, structural equation modelling
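For the fit evaluation described here, RMSEA and CFI can be computed directly from the model and baseline chi-square statistics using their standard formulas. A minimal sketch with hypothetical values follows; a dedicated SEM package (e.g. lavaan or semopy) would be used for the full CFA itself.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max((chi2 - df) / (df * (n - 1)), 0.0))

def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index relative to the baseline (null) model."""
    d_model = max(chi2 - df, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0

# Hypothetical values: chi2 = 110.5 on 98 df, n = 250 observations
print(rmsea(110.5, 98, 250))        # ~0.023, below the common 0.05 cut-off
print(cfi(110.5, 98, 980.0, 120))   # ~0.985, above the common 0.95 cut-off
```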
21233 Fashion Consumption for Fashion Innovators: A Study of Fashion Consumption Behavior of Innovators and Non-Innovators
Authors: Vaishali P. Joshi, Pallav Joshi
Abstract:
The objective of this study is to examine the differences between fashion innovators and non-fashion innovators in their fashion consumption behavior in terms of their pre-purchase behavior, purchase behavior and post-purchase behavior. A questionnaire was distributed to female college students to collect data for the first part of the study. Questions related to fashion innovativeness and fashion consumption behavior were asked. The sample comprised 81 college females aged 18 through 30 who were pursuing a Business Management degree. A series of attitude questions was used to categorize respondents on the Innovativeness Scale. Thirty-two respondents with a score of 21 and above were designated as fashion innovators and the remainder (49) as non-fashion innovators. Findings showed that significant differences exist between innovators and non-innovators in their fashion consumption behavior. Data were analyzed through frequency distribution tables. Many differences were found in the behavior of innovators and non-innovators in terms of their pre-purchase, actual purchase, and post-purchase behavior.
Keywords: fashion, innovativeness, consumption behavior, purchase
21232 Looking for a Connection between Oceanic Regions with Trends in Evaporation with Continental Ones with Trends in Precipitation through a Lagrangian Approach
Authors: Raquel Nieto, Marta Vázquez, Anita Drumond, Luis Gimeno
Abstract:
One of the hot spots of climate change is the increase in ocean evaporation. The best estimate of evaporation, the OAFlux data, shows strong increasing trends in evaporation from the oceans since 1978, with peaks during the hemispheric winter and strongest along the paths of the global western boundary currents and in inner seas. The transport of moisture from oceanic sources to the continents is the connection between evaporation from the ocean and precipitation over the continents. A key question is to relate the evaporative source regions over the oceans where trends have occurred in the last decades with their sinks over the continents, to check whether there have also been trends in the precipitation amount or its characteristics. A Lagrangian approach based on FLEXPART and ERA-Interim data is used to establish this connection. The analyzed period was 1980 to 2012. Results show that there is no general pattern, but a significant agreement was found in important areas of climate interest.
Keywords: ocean evaporation, Lagrangian approaches, continental precipitation, Europe
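One common way to make this source-sink connection operational, following the budget method typically used with FLEXPART output (Stohl and James), is to evaluate e − p = m·dq/dt for every parcel and aggregate the result on a grid. The sketch below assumes parcel positions and specific humidity are already available as arrays; the array layout and the 1° grid are assumptions for illustration.

```python
import numpy as np

def ep_budget(lon, lat, q, mass, dt, nx=360, ny=180):
    """
    Grid the freshwater flux E - P from parcel humidity changes:
    e - p = m * dq/dt for each parcel (Stohl & James style diagnostic).
    lon, lat, q: (n_parcels, n_times) arrays; mass: (n_parcels,) in kg.
    Returns an (ny, nx) field in kg/s per grid cell.
    """
    dqdt = np.diff(q, axis=1) / dt            # per-parcel moisture change rate
    ep = mass[:, None] * dqdt                 # e - p along each trajectory
    # Midpoint positions between consecutive time steps
    mlon = 0.5 * (lon[:, 1:] + lon[:, :-1])
    mlat = 0.5 * (lat[:, 1:] + lat[:, :-1])
    ix = (mlon.ravel() % 360).astype(int) % nx
    iy = np.clip((mlat.ravel() + 90).astype(int), 0, ny - 1)
    field = np.zeros((ny, nx))
    np.add.at(field, (iy, ix), ep.ravel())    # accumulate parcel contributions
    return field
```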
21231 Tools for Analysis and Optimization of Standalone Green Microgrids
Authors: William Anderson, Kyle Kobold, Oleg Yakimenko
Abstract:
Green microgrids, using mostly renewable energy (RE) for generation, are complex systems with inherent nonlinear dynamics. Among a variety of optimization tools, there are only a few that adequately consider this complexity. This paper evaluates the applicability of two somewhat similar optimization tools tailored for standalone RE microgrids and also assesses a machine learning tool for performance prediction that can enhance the reliability of any chosen optimization tool. It shows that one of these microgrid optimization tools has certain advantages over the other and presents a detailed routine for preparing input data to simulate RE microgrid behavior. The paper also shows how neural-network-based predictive modeling can be used to validate and forecast solar power generation based on weather time series data, which improves the overall quality of standalone RE microgrid analysis.
Keywords: microgrid, renewable energy, complex systems, optimization, predictive modeling, neural networks
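As an illustration of the neural-network-based predictive modeling mentioned here, the sketch below fits a small multilayer perceptron to weather features to forecast solar power output. The feature set and the synthetic data are assumptions for demonstration; the abstract does not specify the actual tool or training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical weather features: irradiance, temperature, cloud cover, hour
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = 5.0 * X[:, 0] * (1 - 0.5 * X[:, 2]) + 0.1 * rng.standard_normal(1000)  # kW

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)
print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))
```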
21230 A Wall Law for Two-Phase Turbulent Boundary Layers
Authors: Dhahri Maher, Aouinet Hana
Abstract:
The presence of bubbles in the boundary layer introduces corrections into the log law, which must be taken into account. In this work, a logarithmic wall law is presented for bubbly two-phase flows. The wall law is based on the postulation of an additional turbulent viscosity associated with bubble wakes in the boundary layer. It contains an empirical constant accounting both for shear-induced turbulence interaction and for bubble-induced non-linearity. This constant was deduced from experimental data. The wall friction prediction achieved with the wall law was compared to experimental data for a turbulent boundary layer developing on a vertical flat plate in the presence of millimetric bubbles. Very good agreement between the experimental data and the numerical wall friction prediction was observed. The agreement was especially noticeable for low void fractions, when bubble-induced turbulence plays a significant role.
Keywords: bubbly flows, log law, boundary layer, CFD
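For reference, the single-phase logarithmic wall law is u⁺ = (1/κ) ln y⁺ + B, to which the paper adds a bubble-wake correction. The minimal sketch below solves the unmodified log law for the friction velocity by fixed-point iteration; κ = 0.41, B = 5.0 and the flow values are standard/illustrative choices, not the paper's two-phase constants.

```python
import math

KAPPA, B = 0.41, 5.0  # single-phase constants; the paper adds a bubble term

def u_tau(U, y, nu, n_iter=50):
    """Friction velocity from U/u_tau = (1/kappa) ln(y u_tau / nu) + B,
    solved by fixed-point iteration (valid in the log region, y+ > ~30)."""
    ut = 0.05 * U  # initial guess
    for _ in range(n_iter):
        ut = U / ((1.0 / KAPPA) * math.log(y * ut / nu) + B)
    return ut

# Hypothetical water boundary layer: U = 1 m/s at y = 0.01 m, nu = 1e-6 m^2/s
ut = u_tau(1.0, 0.01, 1e-6)
print("u_tau =", ut, "m/s; wall shear stress =", 1000.0 * ut**2, "Pa")
```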
21229 Improving Working Memory in School Children through Chess Training
Authors: Veena Easvaradoss, Ebenezer Joseph, Sumathi Chandrasekaran, Sweta Jain, Aparna Anna Mathai, Senta Christy
Abstract:
Working memory refers to a cognitive processing space where information is received, managed, transformed, and briefly stored. It is an operational process of transforming information for the execution of cognitive tasks in different and new ways. Many classroom activities require children to remember information and mentally manipulate it. While the impact of chess training on intelligence and academic performance has been unequivocally established, its impact on working memory needs to be studied. This study, funded by the Cognitive Science Research Initiative, Department of Science & Technology, Government of India, analyzed the effect of one year of chess training on the working memory of children. A pretest-posttest with control group design was used, with 52 children in the experimental group and 50 children in the control group. The sample was selected from children studying in school (grades 3 to 9) and included both genders. The experimental group underwent weekly chess training for one year, while the control group was involved in extracurricular activities. Working memory was measured by two subtests of WISC-IV INDIA. The Digit Span subtest involves recalling a list of numbers of increasing length presented orally in forward and in reverse order, and the Letter-Number Sequencing subtest involves rearranging jumbled letters and numbers presented orally following a given rule. Both tasks require the child to receive and briefly store information, manipulate it, and present it in a changed format. The children were trained using the Winning Moves curriculum and audio-visual learning methods, with hands-on chess training; they recorded their games using score sheets and analyzed their mistakes, thereby increasing their meta-analytical abilities. They were also trained in opening theory, checkmating techniques, end-game theory and tactical principles. Pre-equivalence of means was established. Analysis revealed that the experimental group had significant gains in working memory compared to the control group. The present study clearly establishes a link between chess training and working memory. The transfer of chess training to the improvement of working memory could be attributed to the fact that while playing chess, children evaluate positions, visualize new positions in their mind, analyze the pros and cons of each move, and choose moves based on the information stored in their mind. If working memory's capacity could be expanded or made to function more efficiently, it could result in the improvement of executive functions as well as the scholastic performance of the child.
Keywords: chess training, cognitive development, executive functions, school children, working memory
21228 Secure Intelligent Information Management by Using a Framework of Virtual Phones-On Cloud Computation
Authors: Mohammad Hadi Khorashadi Zadeh
Abstract:
Many new applications and internet services have emerged since the innovation of mobile networks and devices. However, these applications have problems of security, management, and performance in business environments. Cloud systems provide information transfer, management facilities, and security for virtual environments. Therefore, an innovative internet service and a business model are proposed in the present study for creating a secure and consolidated environment for managing the mobile information of organizations based on cloud virtual phone (CVP) infrastructures. Using this method, users can run Android and web applications in the cloud, which enhances performance by connecting to other CVP users and increases privacy. It is possible to combine the CVP with distributed protocols and central control, which mimics the behavior of human societies. This mix helps in dealing with sensitive data in mobile devices and facilitates data management with less application overhead.
Keywords: BYOD, mobile cloud computing, mobile security, information management
21227 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data are well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC acts as an ad-hoc provider of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation are incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. Here, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
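A minimal sketch of the kind of recurrent classifier compared in this paper, here an LSTM over integer-encoded clinical notes using Keras. The vocabulary size, sequence length and layer sizes are assumptions; GRU or CNN+RNN variants would swap in at the marked layer.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN = 20000, 200  # assumed vocabulary size and note length

# Binary classifier: frequent-attender (outlier) vs. non-frequent-attender
model = tf.keras.Sequential([
    layers.Embedding(VOCAB, 64),
    layers.LSTM(64),                      # swap for GRU, or stack a Conv1D before it
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# x_train: (n_samples, MAXLEN) integer-encoded notes; y_train: 0/1 outlier labels
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```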
21226 Improving Road Infrastructure Safety Management Through Statistical Analysis of Road Accident Data. Case Study: Streets in Bucharest
Authors: Dimitriu Corneliu-Ioan, Gheorghe Frațilă
Abstract:
Romania has one of the highest rates of road deaths among European Union Member States, and there is a concern that the country will not meet its goal of "zero deaths" by 2050. The European Union also aims to halve the number of people seriously injured in road accidents by 2030. Therefore, there is a need to improve road infrastructure safety management in Romania. The aim of this study is to analyze road accident data through statistical methods to assess the current state of road infrastructure safety in Bucharest. The study also aims to identify trends and make forecasts regarding serious road accidents and their consequences. The objective is to provide insights that can help prioritize measures to increase road safety, particularly in urban areas. The research utilizes statistical analysis methods, including exploratory analysis and descriptive statistics. Databases from the Traffic Police and the Romanian Road Authority are analyzed using Excel. Road risks are compared with the main causes of road accidents to identify correlations. The study emphasizes the need for better quality and more diverse collection of road accident data for effective analysis in the field of road infrastructure engineering. The research findings highlight the importance of prioritizing measures to improve road safety in urban areas, where serious accidents and their consequences are more frequent. There is a correlation between the measures ordered by road safety auditors and the main causes of serious accidents in Bucharest. The study also reveals the significant social costs of road accidents, amounting to approximately 3% of GDP, emphasizing the need for collaboration between local and central administrations in allocating resources for road safety, including joint financial efforts. This research contributes to a clearer understanding of the current road infrastructure safety situation in Romania. The data processing, through exploratory analysis and descriptive statistics in Excel, allows for a better understanding of the factors contributing to the current road safety situation and helps inform managerial decisions to eliminate or reduce road risks. The research highlights the need for statistical data processing methods to substantiate managerial decisions in road infrastructure management and emphasizes the importance of improving the quality and diversity of road accident data collection. The findings provide a critical perspective on the current road safety situation in Romania and offer insights for identifying appropriate solutions to reduce the number of serious road accidents in the future, aiding decision-makers in allocating resources efficiently and cooperating institutionally to achieve sustainable road safety.
Keywords: road death rate, strategic objective, serious road accidents, road safety, statistical analysis
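As a sketch of the descriptive analysis described here, translated from Excel into pandas; the file name and column layout are hypothetical stand-ins for the Traffic Police records.

```python
import pandas as pd

# Hypothetical layout of the accident records
df = pd.read_csv("bucharest_accidents.csv")  # columns: year, severity, cause, deaths

summary = df.groupby("cause").agg(
    accidents=("cause", "size"),
    deaths=("deaths", "sum"),
)
summary["deaths_per_accident"] = summary["deaths"] / summary["accidents"]
print(summary.sort_values("deaths", ascending=False).head(10))

# Simple trend: serious accidents per year, the input to forecasting
print(df[df["severity"] == "serious"].groupby("year").size())
```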
21225 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen
Authors: Halina Veloso e Zarate, Manuela Triggianese
Abstract:
The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers. Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development.
Keywords: Rotterdam Zuid, transport oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design
21224 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be realized through counting algorithms, positioning, and recognition of objects that can be measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them for moving pieces from one location to another in their production. These devices must be programmed beforehand for good performance and must have a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast completion of the different stages and processes in the chain of production of any product or service being offered. The principal goal of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each one with a different color, and to link it with a group of conveyor systems to sort the mentioned figures into cubicles, which also differ from one another by having different colors. This project is based on artificial vision, therefore the methodology needed to develop it must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with the OpenCV libraries. Together, these tools are used to write the program that identifies colors and forms directly from the camera to the computer. 1.2 Image acquisition: To start using the OpenCV libraries it is necessary to acquire images, which can be captured by a computer's web camera or a different specialized camera. 1.3 The recognition of RGB colors is realized by code, traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms it is necessary to segment the images, so the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which means having the figure of the image in a white tone with a black background. Finally, we find the contours of the figure in the image and count the edges to identify which figure it is. 1.5 After the color and figure have been identified, the program communicates with the conveyor systems, which through the actuators classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera obtains external characteristics and any process can then be realized. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
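A minimal sketch of the vision part of steps 1.3-1.5 (color and shape identification) using the Python OpenCV bindings; the thresholds and area filter are illustrative assumptions, and the original project used Qt Creator with OpenCV rather than this exact code.

```python
import cv2
import numpy as np

def classify_frame(frame):
    """Detect triangles, squares and circles and report their dominant color."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)  # binarize
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < 500:           # ignore noise
            continue
        # Count edges of the polygonal approximation of the contour
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        shape = {3: "triangle", 4: "square"}.get(len(approx), "circle")
        # Dominant primary color inside the contour
        mask = np.zeros(gray.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        b, g, r, _ = cv2.mean(frame, mask=mask)
        color = ("red", "green", "blue")[int(np.argmax([r, g, b]))]
        results.append((shape, color))
    return results

# cap = cv2.VideoCapture(0); ok, frame = cap.read(); print(classify_frame(frame))
```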
21223 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools
Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson
Abstract:
Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when using different software for modeling and simulations. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices were the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation have been identified as key aspects for the success of information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.
Keywords: BIM, data exchange, interoperability issues, lighting simulations
21222 Climate Change and Sustainable Development among Agricultural Communities in Tanzania; An Analysis of Southern Highland Rural Communities
Authors: Paschal Arsein Mugabe
Abstract:
This paper examines sustainable development planning in the context of environmental concerns in rural areas of Tanzania. It challenges mainstream approaches to development, focusing instead upon transformative action for environmental justice. The goal is to help shape future sustainable development agendas in local government, international agencies and civil society organisations. Research methods: The approach of the study is geographical, but it also involves various transdisciplinary elements, particularly from development studies, sociology and anthropology, management, geography, agriculture and environmental science. The research methods included thematic and questionnaire interviews, and participatory tools such as focus group discussions, participatory rural appraisal and expert interviews for primary data. Secondary data were gathered through the analysis of land use/cover data and official documents on climate, agriculture, marketing and health. Several earlier studies made in the area also provided an important reference base. Findings: The findings show that agricultural sustainability in Tanzania appears likely to deteriorate as a consequence of climate change. Noteworthy differences in impacts across households are also present, both by district and by income category. Also, food security cannot be explained by climate as the only influencing factor; a combination of the economic, political and socio-cultural contexts of the community is crucial. In conclusion, it is worth noting that people understand the relationship between climate change and their livelihoods.
Keywords: agriculture, climate change, environment, sustainable development
21221 Gravity and Magnetic Survey, Modeling and Interpretation in the Blötberget Iron-Oxide Mining Area of Central Sweden
Authors: Ezra Yehuwalashet, Alireza Malehmir
Abstract:
The Blötberget mining area in central Sweden, part of the Bergslagen mineral district, has been well known for its various types of mineralization, particularly iron-oxide deposits, since the 1600s. To shed light on the host rock structures, the depth extent and tonnage of the mineral deposits, and to support the deep mineral exploration potential of the study area, new ground gravity data and existing aeromagnetic data (from the Geological Survey of Sweden) were used for interpretation and modelling. A major boundary separating a gravity low from a gravity high in the southern part of the study area is noticeable and likely represents a fault boundary separating two different lithological units. The gravity data and modeling offer a possible new target area in the southeast of the known mineralization, while suggesting an excess high-density region down to 800 m depth.
Keywords: gravity, magnetics, ore deposit, geophysics
21220 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)
Authors: Silvia Arrate, Waldo Salud, Eloy París
Abstract:
The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops and keeping an optimum stock of spare parts during the course of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to apply it to the key and most critical parameters of the machinery, with the purpose of knowing how the cutting head is performing against the excavated ground. Using Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using specific energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation, in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of the wear of cutting head elements, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
Keywords: cutting tools, data science, prediction, TBM, wear
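One standard way to combine these machine parameters into specific energy is Teale's relation, SE = F/A + 2πNT/(A·ROP), with thrust F, excavated face area A, rotation speed N, torque T and advance rate ROP. A sketch with hypothetical readings follows; these are illustrative values, not machine data from the Guadalajara drive.

```python
import math

def specific_energy(thrust_n, torque_nm, rpm, advance_m_per_min, diameter_m):
    """Specific energy of excavation (Teale-style): thrust term plus rotary
    term, returned in MJ per cubic metre of excavated ground."""
    area = math.pi * diameter_m**2 / 4.0
    advance_m_per_s = advance_m_per_min / 60.0
    rev_per_s = rpm / 60.0
    se_pa = thrust_n / area + (2.0 * math.pi * rev_per_s * torque_nm) / (area * advance_m_per_s)
    return se_pa / 1e6  # Pa is J/m^3, so divide to get MJ/m^3

# Hypothetical EPB readings, for illustration only
print(specific_energy(thrust_n=12e6, torque_nm=4e6, rpm=1.5,
                      advance_m_per_min=0.04, diameter_m=6.5))  # ~29 MJ/m^3
```

A rising specific energy at otherwise steady operating conditions is the kind of signal that can flag progressing tool wear before a planned inspection.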
21219 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are the major geo-environmental problem of the Himalaya, because of high ridges, steep slopes, deep valleys, and a complex system of streams. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, which is situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage patterns, lineaments, and structural features were extracted using digital image processing techniques. Colour, tone, topography, and the stream drainage pattern from the imageries were used to analyse geological features. The slope map, aspect map, and relative relief were created using Cartosat DEM data. The DEM data were also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the causative factors is obtained from experience. In this method, after multiplying the influence factor by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area was carried out and is presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
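A minimal sketch of the weighted overlay computation itself, LHI = Σᵢ wᵢ·rᵢ over co-registered, reclassified rating rasters. The weights, random ratings and class breaks below are illustrative assumptions, not the values calibrated in the study, and only eight of the fourteen factors are shown.

```python
import numpy as np

# Reclassified rating rasters (1 = low hazard ... 5 = high), all co-registered.
rng = np.random.default_rng(0)
layers = {
    "slope":     (0.20, rng.integers(1, 6, (100, 100))),
    "rainfall":  (0.15, rng.integers(1, 6, (100, 100))),
    "lithology": (0.15, rng.integers(1, 6, (100, 100))),
    "pha":       (0.10, rng.integers(1, 6, (100, 100))),
    "lulc":      (0.10, rng.integers(1, 6, (100, 100))),
    "twi":       (0.10, rng.integers(1, 6, (100, 100))),
    "drainage":  (0.10, rng.integers(1, 6, (100, 100))),
    "relief":    (0.10, rng.integers(1, 6, (100, 100))),
}
assert abs(sum(w for w, _ in layers.values()) - 1.0) < 1e-9  # weights sum to 1

lhi = sum(w * r for w, r in layers.values())       # landslide hazard index
zones = np.digitize(lhi, [2.0, 3.0, 4.0])          # low / moderate / high / very high
```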
21218 The History and Pattern of Migration from Punjab to West: Colonial to Global Punjab
Authors: Malkit Singh
Abstract:
This paper presents an in-depth analysis of migration from Punjab to the West, analyzing the history and patterns of generations of Punjabi migration. Special emphasis is given to linking the present socio-economic and political crisis with the historical pattern of Punjabi migration to the West, from colonial India to independent Bharat, along with the stories of the successes and failures of Western-aspiring youth from Punjab. The roots of migration from Punjab to the West are traced from the British invasion of Punjab, which resulted in the socio-economic and political dismantling of Punjabi society and in the migration of Punjabis to other colonies of the British Empire. The grim position at home, despite all the efforts and hard work of the majority of Punjabis, particularly from the farming community, and the shining lifestyle of some families of a village or vicinity who have relatives in the West, have encouraged a large number of Punjabis to seek to change their fortune by working in the West. However, the visa and work permit regime has closed the doors of the West to those who are unskilled or semi-skilled and do not qualify under the visa and work permit norms, and their aspiration to change their fortune by working abroad at any cost has fuelled the growth of large-scale immigration fraud by agents and firms in Punjab, which has resulted in the loss of thousands of lives, imprisonment abroad, and the selling of Punjabis' properties. The greed for greener pastures in the West, the plight of the deserted wives of NRIs, and the illegal routes adopted by Punjabi youth due to the non-availability of visas and work permits are dealt with comprehensively. The rise and fall of Punjab as the breadbasket of Bharat and the marginalization of farmers with middle and small holdings due to capital-intensive techniques are linked with the forced migration of Punjabis. The failure of the government to address rampant corruption, agricultural failure, and the resulting problems of law and order before and after the troubled period of militancy in Punjab, and the resulting migration to the West, are comprehensively covered, as is the new trend of student visas and studying abroad, particularly in Canada, Australia, and New Zealand, despite the availability of quality education at very low cost in India. The early success of some students in getting study visas for Australia, Canada, New Zealand, etc., and in obtaining permanent immigration to these countries, encouraged the majority of Punjabi youth to leave their motherland for better opportunities in prosperous lands, which has again failed as these countries have become flooded with Punjabi students. Moreover, the total failure of Punjab's political leadership to address the basic needs of society, such as law and order, and to stop the drug menace in post-militancy Punjab, is also examined to understand the problem.
Keywords: Punjab, migration, West, agriculture
21217 The Influence of Guided and Independent Training Toward Teachers’ Competence to Plan Early Childhood Education Learning Program
Authors: Sofia Hartati
Abstract:
This research is aimed at describing training in early childhood education programs empirically, describing teachers' ability to plan lessons empirically, and acquiring empirical data as well as analyzing the influence of guided and independent training on teachers' competence in planning early childhood learning programs. The method used is an experiment. Data were collected from a population of 76 early childhood educators in the Tunjung Teja sub-district through a random sampling technique, grouped into two: 38 people in an experimental class and 38 people in a control class. The technique used for data collection was a test. The result of the research shows that guided training has a significant influence on teachers' ability to plan early childhood learning programs. Guided training has been proven to improve the ability to understand the planning of a learning program. The ability to plan a learning program held by early childhood teachers comprises 1) determining the characteristics and competence of students prior to learning; 2) formulating the objectives of the learning; 3) selecting materials and their sequence; 4) selecting teaching methods; 5) determining the means or learning media; and 6) selecting an evaluation strategy, as part of the teachers' pedagogic competence. The results show a difference in competence level: teachers who joined the guided training scored relatively higher than teachers who joined the independent training. Guided training is an effective way to improve the knowledge and competence of early childhood educators.
Keywords: competence, planning, teachers, training
21216 Targeting Glucocorticoid Receptor Eliminate Dormant Chemoresistant Cancer Stem Cells in Glioblastoma
Authors: Aoxue Yang, Weili Tian, Haikun Liu
Abstract:
Brain tumor stem cells (BTSCs) are resistant to therapy and give rise to recurrent tumors. These rare and elusive cells are likely to disseminate during cancer progression, and some may enter dormancy, remaining viable but not proliferating. The identification of dormant BTSCs is thus necessary to design effective therapies for glioblastoma (GBM) patients. Glucocorticoids (GCs) are used to treat GBM-associated edema. However, glucocorticoids participate in the physiological response to psychosocial stress, which is linked to poor cancer prognosis. This raises the concern that glucocorticoids affect the tumor and BTSCs. Identifying markers specifically expressed by BTSCs may enable specific therapies that spare their normal tissue-resident counterparts. By ribosome profiling analysis, we identified that glycerol-3-phosphate dehydrogenase 1 (GPD1) is expressed by dormant BTSCs but not by neural stem cells (NSCs). Through different stress-induction experiments in vitro, we found that only dexamethasone (DEXA) can significantly increase the expression of GPD1 in NSCs. Conversely, mifepristone (MIFE), a glucocorticoid receptor (GR) antagonist, decreased GPD1 protein levels and weakened proliferation and stemness in BTSCs. Furthermore, DEXA induces GPD1 expression in tumor-bearing mouse brains and shortens animal survival, whereas MIFE has the opposite effect, prolonging the mice's lifespan. Knocking out GR in NSCs blocks the upregulation of GPD1 induced by DEXA, and ChIP-Seq identified the specific sequences on the GPD1 promoter bound by GR, which enhance the efficiency of GPD1 transcription. Moreover, GR and GPD1 are highly co-stained in GBM sections obtained from patients and mice. All these findings confirm that GR regulates GPD1, and loss of GPD1 impairs multiple pathways important for BTSC maintenance. GPD1 is also a critical enzyme regulating glycolysis and lipid synthesis. We observed that DEXA and MIFE can change the metabolic profiles of BTSCs by regulating GPD1, shifting the transition into and out of cell dormancy. Our transcriptome and lipidomics analyses demonstrated that cell cycle signaling and phosphoglyceride synthesis pathways contributed substantially to the GPD1 inhibition caused by MIFE. In conclusion, our findings raise the concern that treatment of GBM with GCs may compromise the efficacy of chemotherapy and contribute to BTSC dormancy. Inhibition of GR can dramatically reduce GPD1 and extend the survival of GBM-bearing mice. The molecular link between GPD1 and GR may provide an attractive therapeutic target for glioblastoma.
Keywords: cancer stem cell, dormancy, glioblastoma, glycerol-3-phosphate dehydrogenase 1, glucocorticoid receptor, dexamethasone, RNA-sequencing, phosphoglycerides
21215 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"
Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo
Abstract:
DWDM (Dense Wavelength Division Multiplexing) is in constant growth around the world, driven by consumer demand. From its inception arises the need for a system that enables the communication of an entire nation to expand, improving the computing trends of its society according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which represents, in the Republic of Honduras C.A., a viable option for the consumer in terms of purchase value and ease of acquisition; but it lacks efficiency in terms of technological advancement, and this represents an obstacle that limits long-term socio-economic development in comparison with other countries in the region, as well as the establishment of competition between the telecommunications companies engaged in this field. For that reason, we propose to introduce a technological trend already implemented in Europe and apply it in our country, one that allows data transfer in broadband, namely DWDM. In this way, we will have a stable, high-quality service that will allow us to compete in this globalized world; the current system must be replaced by one that provides a better service and is at the forefront. Once implemented, DWDM builds upon the existing resources, such as the equipment in use, and gives life to a new stage, providing a business image for the Republic of Honduras C.A. as a nation and ensuring meaningful data transport and broadband internet. This benefits, in the first instance, existing customers and all the public and private institutions in need of such services.
Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH
21214 A Study on Automotive Attack Database and Data Flow Diagram for Concretization of HEAVENS: A Car Security Model
Authors: Se-Han Lee, Kwang-Woo Go, Gwang-Hyun Ahn, Hee-Sung Park, Cheol-Kyu Han, Jun-Bo Shim, Geun-Chul Kang, Hyun-Jung Lee
Abstract:
In recent years, with the advent of smart cars and the expansion of the market, the presentation of 'Adventures in Automotive Networks and Control Units' at the DEFCON21 conference in 2013 revealed that cars are not safe from hacking. As a result, the HEAVENS model, which considers not only the functional safety of the vehicle but also its security, has been suggested. However, the HEAVENS model only presents a simple process; there are no detailed procedures and activities for each process step, making it difficult to apply to actual vehicle security vulnerability checks. In this paper, we propose an automotive attack database that systematically summarizes attack vectors, attack types, and vulnerable vehicle models to prepare for various car hacking attacks, together with data flow diagrams that can detect various vulnerabilities, and we suggest a way to concretize the HEAVENS model.
Keywords: automotive security, HEAVENS, car hacking, security model, information security
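As an illustration of what one record of such an automotive attack database could hold, a sketch of a possible schema follows. The field set is an assumption (HEAVENS itself uses STRIDE for threat classification, which motivates that field), and the demo entry summarizes the well-known 2015 remote Jeep Cherokee attack.

```python
from dataclasses import dataclass, field

@dataclass
class AttackRecord:
    """One entry of an automotive attack database (illustrative schema)."""
    attack_vector: str            # e.g. "OBD-II port", "cellular", "Bluetooth"
    attack_type: str              # e.g. "CAN message injection", "firmware flash"
    vulnerable_models: list[str] = field(default_factory=list)
    affected_asset: str = ""      # ECU / domain touched by the attack
    stride_category: str = ""     # HEAVENS applies STRIDE threat classification
    source: str = ""              # publication or advisory describing the attack

demo = AttackRecord(
    attack_vector="cellular",
    attack_type="remote CAN message injection",
    vulnerable_models=["2014 Jeep Cherokee"],
    affected_asset="telematics unit / CAN bus",
    stride_category="Tampering",
    source="Miller & Valasek, 2015",
)
```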
21213 Relationship between Growth of Non-Performing Assets and Credit Risk Management Practices in Indian Banks
Authors: Sirus Sharifi, Arunima Haldar, S. V. D. Nageswara Rao
Abstract:
The study attempts to analyze the impact of the credit risk management practices of Indian scheduled commercial banks on their non-performing assets (NPAs). The data on credit risk practices were collected by administering a questionnaire to risk managers/executives at different banks. The data on NPAs (from 2012 to 2016) are sourced from Prowess, a database compiled by the Centre for Monitoring Indian Economy (CMIE). The model was estimated using a cross-sectional regression method. As expected, the findings suggest that there is a negative relationship between credit risk management and NPA growth in Indian banks. The study has implications for Indian banks, given the high level of losses and the implementation of the Basel III norms by the central bank, i.e. the Reserve Bank of India (RBI). It provides evidence on credit risk management in Indian banks and its relationship with the non-performing assets held by them.
Keywords: credit risk, identification, Indian banks, NPAs, ownership
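A minimal sketch of a cross-sectional estimation of this kind with statsmodels; the file name, column names and control variables are hypothetical stand-ins for the questionnaire scores and Prowess data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical layout: one row per bank, a CRM score from the questionnaire,
# NPA growth computed from Prowess (2012-2016), ownership coded numerically.
df = pd.read_csv("banks.csv")  # columns: npa_growth, crm_score, ownership, size

X = sm.add_constant(df[["crm_score", "ownership", "size"]])
model = sm.OLS(df["npa_growth"], X).fit()
print(model.summary())  # a negative crm_score coefficient matches the finding
```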
21212 Optimization of Reaction Parameters' Influences on Production of Bio-Oil from Fast Pyrolysis of Oil Palm Empty Fruit Bunch Biomass in a Fluidized Bed Reactor
Authors: Chayanoot Sangwichien, Taweesak Reungpeerakul, Kyaw Thu
Abstract:
Oil palm mills in Southern Thailand produce a large amount of solid biomass waste. Lignocellulosic biomass is the main feedstock for the production of biofuel, which can be blended with or used as an alternative to fossil fuels. Biomass is composed of three main constituents: cellulose, hemicellulose, and lignin. Thermochemical conversion processes are applied to produce biofuel from biomass, and pyrolysis is the most effective thermochemical route for converting biomass into pyrolytic products (bio-oil, gas, and char). Operating parameters play an important role in optimizing the product yields from the fast pyrolysis of biomass. The present work concerns the modeling of the reaction kinetics of the fast pyrolysis of empty fruit bunch (EFB) in a fluidized bed reactor. A global kinetic model is used to predict the product yields. The product yields of EFB pyrolysis are mainly affected by the reaction temperature and the vapor residence time; their effects are considered for reaction temperatures in the range of 450-500˚C and vapor residence times of 1-2 s. The optimum simulated bio-oil yield of 53 wt.% was obtained at a reaction temperature of 450˚C with a vapor residence time of 2 s, and at 500˚C with 1 s. The simulated data are in good agreement with the reported experimental data and can be applied to the design of experimental work on the fast pyrolysis of biomass.
Keywords: kinetics, empty fruit bunch, fast pyrolysis, modeling
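A global kinetic model of this kind typically tracks competing primary reactions, biomass → gas / tar (bio-oil) / char, each with an Arrhenius rate constant k = A·exp(−E/RT). The sketch below integrates such a scheme; the A and E values are illustrative, of the order of literature values for wood pyrolysis, not the parameters fitted for EFB in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)
# Illustrative Arrhenius parameters (A in 1/s, E in J/mol), not fitted values
A = {"gas": 1.3e8, "tar": 2.0e8, "char": 1.1e7}
E = {"gas": 140e3, "tar": 133e3, "char": 121e3}

def rhs(t, y, T):
    """Competing first-order decomposition of biomass into gas, tar, char."""
    biomass = y[0]
    k = {s: A[s] * np.exp(-E[s] / (R * T)) for s in A}
    dB = -(k["gas"] + k["tar"] + k["char"]) * biomass
    return [dB, k["gas"] * biomass, k["tar"] * biomass, k["char"] * biomass]

T = 500 + 273.15  # reaction temperature, K
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 0.0, 0.0], args=(T,))
biomass, gas, tar, char = sol.y[:, -1]
print(f"bio-oil (tar) mass fraction after 1 s at 500 C: {tar:.2%}")
```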
21211 Nutritional Status of Morbidly Obese Patients Prior to Bariatric Surgery
Authors: Azadeh Mottaghi, Reyhaneh Yousefi, Saeed Safari
Abstract:
Background: Bariatric surgery is widely proposed as the most effective approach to mitigating the growing pace of morbid obesity. As bariatric surgery candidates suffer from pre-existing nutritional deficiencies, it is of great importance to assess the nutritional status of candidates before surgery in order to establish appropriate nutritional interventions. Objectives: The present study assessed and presents baseline data on the nutritional status of candidates for bariatric surgery. Methods: A cross-sectional analysis of pre-surgery data was performed on 170 morbidly obese patients undergoing bariatric surgery between October 2017 and February 2018. Dietary intake data (evaluated through a 147-item food frequency questionnaire), anthropometric measures and biochemical parameters were assessed. Results: Participants included 145 females and 25 males with an average age of 37.3 ± 10.2 years, a BMI of 45.7 ± 6.4 kg/m², and a reported total of 72.3 ± 22.2 kg excess body weight. The most common nutritional deficiencies concerned iron, ferritin, transferrin, albumin, vitamin B12, and vitamin D, whose prevalences in the study population were as follows: 6.5, 6.5, 3, 2, 17.6 and 66%, respectively. Mean energy, protein, fat, and carbohydrate intakes were 3887.3 ± 1748.32 kcal/day and 121.6 ± 57.1, 144.1 ± 83.05, and 552.4 ± 240.5 g/day, respectively. The study population consumed lower levels of iron, calcium, folic acid, and vitamin B12 compared to the Dietary Reference Intake (DRI) recommendations (2, 26, 2.5, and 13%, respectively). Conclusion: Given the poor dietary quality of bariatric surgery candidates, which leads to nutritional deficiencies pre-operatively, close monitoring and tailored supplementation pre- and post-bariatric surgery are required.
Keywords: bariatric surgery, food frequency questionnaire, obesity, nutritional status
21210 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service
Authors: Lai Wenfang
Abstract:
This study describes how artificial intelligence (AI) technology can be used to build a user-oriented platform for integrated archival services. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information and communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence, and blockchain, the NAA will use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform will not only automatically inform the sending agencies' staff which record catalogues are against the transfer or destruction rules, but will also use the model to find details hidden in the catalogues and suggest to the NAA's staff whether the records should be kept or not, shortening the auditing time. The platform keeps all the users' browsing trails, so that it can predict what kinds of archives a user could be interested in, recommend search terms through visualization, and moreover inform users of newly arrived archives. In addition, according to the Archives Act, the NAA's staff must spend a lot of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such content automatically; the staff only need to correct the errors and upload the corrected version, and as the platform learns, the accuracy will get higher. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government.
Keywords: artificial intelligence, natural language processing, machine learning, visualization
Procedia PDF Downloads 181
21209 Object Recognition System Operating from Different Type Vehicles Using Raspberry and OpenCV
Authors: Maria Pavlova
Abstract:
Nowadays, it is possible to mount a camera on different vehicles such as a quadcopter, a train, or an airplane. The camera can also serve as the input sensor of many different systems, which makes object recognition, as an inseparable part of monitoring and control, a key component of most intelligent systems. The aim of this paper is to focus on the object recognition process during vehicle movement. While the vehicle moves, the camera takes pictures of the environment without storing them in a database. When the camera detects an object of interest (for example, a human or an animal), the system saves the picture and sends it to the workstation in real time. This functionality is very useful in emergency or security situations where it is necessary to find a specific object. In another application, the camera can be mounted at a crossroad with few pedestrians; when one or more persons approach the road, the traffic light turns green so that they can cross. This paper presents a system that solves the aforementioned problems. The architecture of the object recognition system is presented, comprising the camera, a Raspberry Pi platform, a GPS module, a neural network, software, and a database. The camera in the system takes the pictures, and object recognition is performed in real time using the OpenCV library on the Raspberry Pi board. An additional feature of the system is the ability to record the GPS coordinates of each captured object's position. The results of this processing are sent to a remote station, so the location of the specific object is known. Using a neural network, the module can learn to solve problems from incoming data and become part of a larger intelligent system. The present paper focuses on the design and integration of image recognition as a part of smart systems.
Keywords: camera, object recognition, OpenCV, Raspberry
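A minimal sketch of the detect-and-save loop is given below, using OpenCV's built-in HOG pedestrian detector as a stand-in for the paper's neural-network model; the camera index, file naming, and omission of the network transmission step are assumptions.

```python
import cv2
import time

# A minimal sketch of the detect-then-save loop described above. OpenCV's
# built-in HOG pedestrian detector stands in for the paper's neural network.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assumed: default camera attached to the Raspberry Pi

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the current frame; frames without detections
    # are discarded, matching the "no database storage" behaviour above.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        # A person was detected: save the picture for transmission to the
        # workstation (the actual network send is omitted in this sketch).
        filename = f"detection_{int(time.time())}.jpg"
        cv2.imwrite(filename, frame)

cap.release()
```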
Procedia PDF Downloads 222
21208 A Study on an Evacuation Test to Measure Delay Time in Using an Evacuation Elevator
Authors: Kyungsuk Cho, Seungun Chae, Jihun Choi
Abstract:
Elevators are being examined as one of the evacuation methods for super-tall buildings. However, data on the use of elevators for evacuation during a fire are extremely scarce. Therefore, a test to measure delay time in using an evacuation elevator was conducted. In the test, the time taken to get on and off an elevator was measured, and cases in which people gave up boarding because the elevator's capacity was exceeded were also taken into consideration. 170 men and women participated in the test, 130 of whom were young people (20-50 years old) and 40 of whom were senior citizens (over 60 years old). The capacity of the elevator was 25 people, and it travelled between the 2nd and 4th floors. A video recording device was used to analyze the test. An elevator in an ordinary building, not a super-tall building, was used to measure the delay time in getting on and off. To minimize interference from other elements, the elevator platforms on the 2nd and 4th floors were partitioned off. The elevator travelled between the 2nd and 4th floors, where people got on and off. If fewer than 20 people boarded an empty elevator, the data were excluded; likewise, if the elevator arrived carrying 10 passengers and fewer than 10 new passengers boarded, the data were excluded. Boarding of an empty elevator was observed 49 times: the average number of passengers was 23.7, boarding took 14.98 seconds, and the load factor was 1.67 N/s. It took the same passengers (23.7 on average) 10.84 seconds to get off, and the unload factor was 2.33 N/s. When an elevator's capacity is exceeded, the excess passengers must get off; the time this takes and the probability of the case were measured in the test. Capacity was exceeded in 37% of boardings. As the number of people who gave up boarding increased, the load factor of the ride decreased. When 1 person gave up boarding, the load factor was 1.55 N/s; this case was observed 10 times, 12.7% of the total. When 2 people gave up boarding, the load factor was 1.15 N/s; this case was observed 7 times, 8.9% of the total. When 3 people gave up boarding, the load factor was 1.26 N/s; this case was observed 4 times, 5.1% of the total. When 4 people gave up boarding, the load factor was 1.03 N/s; this case was observed 5 times, 6.3% of the total. Getting-on and getting-off time data for people who can walk freely were obtained from the test, and quantitative results were obtained on the relation between the number of people giving up boarding and the time taken to board. This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-16-02-KICT).
Keywords: evacuation elevator, super tall buildings, evacuees, delay time
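The per-case percentages above are mutually consistent if the test comprised 79 boarding events in total, a figure the abstract does not state explicitly; the short check below reproduces the reported shares under that assumption.

```python
# A small worked check of the give-up percentages quoted above, assuming a
# total of 79 boarding events (the value that reproduces all four reported
# shares; the abstract does not state the total explicitly).
give_up_counts = {1: 10, 2: 7, 3: 4, 4: 5}  # people giving up -> observations
TOTAL_BOARDINGS = 79  # assumed, inferred from the reported percentages

for n_gave_up, count in give_up_counts.items():
    share = 100 * count / TOTAL_BOARDINGS
    print(f"{n_gave_up} gave up: {count} times = {share:.1f}% of boardings")
# 1 gave up: 10 times = 12.7% of boardings
# 2 gave up: 7 times = 8.9% of boardings
# 3 gave up: 4 times = 5.1% of boardings
# 4 gave up: 5 times = 6.3% of boardings
```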
Procedia PDF Downloads 178
21207 Investigating the Effect of Brand Equity on Competitive Advantage in the Banking Industry
Authors: Rohollah Asadian Kohestani, Nazanin Sedghi
Abstract:
As the number of banks and financial institutions operating in Iran has increased significantly, attracting and retaining customers and encouraging them to use modern banking services continually have become vital issues. Therefore, in the current economic conditions of Iran, serious competition is inevitable without a deep understanding of consumers and a good fit between banking services and their needs. It should be noted that although a concept such as 'brand equity' is defined from the consumers' point of view, it also concerns shareholders, competitors, and other stakeholders of a firm, in addition to the bank and its consumers. This study examines the impact of brand equity on competitive advantage in the banking industry, as intense competition between the brands of different banks draws ever more attention to branding. The research is based on Aaker's model and examines the impact of the four dimensions of brand equity on the competitive advantage of private banks in Behshahr city. It is an applied study, and data analysis was carried out using a descriptive method. Data were collected through a literature review and a questionnaire. Simple random sampling was used to select bank staff, while consumers were reached by distributing the questionnaire among the staff and consumers of five private banks: Tejarat, Mellat, Refah K., Ghavamin, and Tose'e Ta'avon. Results show a significant relationship between brand equity and competitive advantage. SPSS 16 and LISREL 8.5, together with descriptive and inferential statistical methods, were employed to analyze the data and test the hypotheses.
Keywords: brand awareness, brand loyalty, brand equity, competitive advantage
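As a rough illustration of the kind of significance test behind this result, the sketch below runs a Pearson correlation on synthetic Likert-style scores; the sample size, score distributions, and effect size are all invented for demonstration and are not the study's data.

```python
import numpy as np
from scipy import stats

# A minimal sketch of a significance test of the relationship reported
# above, using synthetic composite scores in place of the study's
# questionnaire data (all values here are illustrative).
rng = np.random.default_rng(0)
n = 200  # hypothetical number of respondents

brand_equity = rng.normal(3.5, 0.6, n)  # composite of Aaker's four dimensions
competitive_advantage = 0.5 * brand_equity + rng.normal(1.5, 0.5, n)

r, p = stats.pearsonr(brand_equity, competitive_advantage)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # p < 0.05 -> significant relationship
```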
Procedia PDF Downloads 147