Search results for: cost benefit analysis
26174 Exploring Twitter Data on Human Rights Activism on Olympics Stage through Social Network Analysis and Mining
Authors: Teklu Urgessa, Joong Seek Lee
Abstract:
Social media is becoming the primary choice of activists to make their voices heard. Two main reasons drive this. The first is the emergence of Web 2.0, which gave users the opportunity to become content creators rather than passive recipients. The second is the control of mainstream mass media outlets by governments and by individuals with political and economic interests. This paper explores the Twitter network of actors discussing the Rio 2016 marathon silver medalist, who showed solidarity with the Oromo protesters in Ethiopia at the finish line of the race. The aim is to discover important insights using social network analysis and mining. The hashtag #FeyisaLelisa was used for the Twitter network search. The actors’ network was visualized and analyzed. It showed that the central influencers during the first ten days of August were international media outlets, whereas by September they had shifted to individual activists. The degree distribution of the network is scale-free: the frequency of degrees decays by a power law. Text mining was also used to arrive at meaningful themes from the tweet corpus about the selected event. The semantic network indicated 15 important clusters of concepts that provided different insights into the why, who, where, and how of the situation surrounding the event. The sentiments of the words in the tweets were also analyzed, indicating that 95% of the opinions in the tweets were either positive or neutral. Overall, the findings showed that the marathoner's Olympic-stage protest brought the issue of the Oromo protest to the global stage. Finally, a new framework for event-based social network analysis and mining is proposed, based on the practical procedures followed in this research for event-based social media sense-making.
Keywords: human rights, Olympics, social media, network analysis, social network mining
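The degree-centrality and scale-free claims can be illustrated with a toy sketch; all actor names and edges below are hypothetical stand-ins, not the paper's real #FeyisaLelisa network:

```python
from collections import Counter, defaultdict

# Hypothetical toy retweet/mention edge list (pairs of Twitter actors).
edges = [
    ("media1", "activist"), ("media2", "activist"), ("media3", "activist"),
    ("user1", "media1"), ("user2", "media1"), ("user3", "activist"),
    ("user4", "activist"), ("user5", "user1"),
]

# Undirected degree of every actor in the network.
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The most-connected actor plays the role of the "central influencer".
central = max(degree, key=degree.get)

# Frequency of each degree value; in a scale-free network this decays
# roughly as a power law, frequency(k) ~ k^(-gamma).
freq = Counter(degree.values())
print(central, dict(freq))
```

On real data one would fit the tail of `freq` on a log-log scale to estimate the power-law exponent.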
Procedia PDF Downloads 257
26173 Radar Cross Section Modelling of Lossy Dielectrics
Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit
Abstract:
The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, and coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep.
This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as with the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency- and angle-dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented and their validation discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown. Thus, the importance of accurate dielectric material properties for validation purposes is discussed.
Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation
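As a rough illustration of this tolerance sensitivity, a simplified normal-incidence sketch can scale the physical-optics flat-plate RCS by the reflection coefficient of a dielectric half-space. This is not the validated MoM/PO solvers used in the study; the plate size, permittivity, and tolerance below are assumed values:

```python
import math

c = 3e8          # speed of light (m/s)
f = 10e9         # 10 GHz, within the 2-18 GHz measured band
lam = c / f
A = 0.1 * 0.1    # 10 cm x 10 cm flat plate area (illustrative)

def rcs_plate(eps_r):
    """Normal-incidence RCS sketch for a flat dielectric plate.

    The PEC physical-optics result 4*pi*A^2/lam^2 is scaled by the power
    reflection coefficient |Gamma|^2 of an infinite dielectric half-space.
    """
    gamma = (1 - math.sqrt(eps_r)) / (1 + math.sqrt(eps_r))
    return 4 * math.pi * A**2 / lam**2 * gamma**2

nominal = rcs_plate(4.0)
# +/-5% tolerance on the relative permittivity, as in the sensitivity study
lo, hi = rcs_plate(4.0 * 0.95), rcs_plate(4.0 * 1.05)
spread_db = 10 * math.log10(hi / lo)
print(nominal, spread_db)
```

Even this crude model shows a measurable dB-level spread in RCS from a small permittivity tolerance, which is the effect the study quantifies rigorously.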
Procedia PDF Downloads 240
26172 Challenges and Prospects of Preservation of Tangible Cultural Heritage Management: A Case Study in Lake Tana Islands, Ethiopia
Authors: Ayele Tamene
Abstract:
Cultural heritage is the legacy of physical artifacts and intangible attributes of a group or society that are inherited from past generations, maintained in the present, and bestowed for the benefit of future generations. Tangible heritage includes buildings and historic places, monuments, artifacts, etc., which are considered worthy of preservation for the future. These include objects significant to the archaeology, architecture, science, or technology of a specific culture. The research addressed the challenges and prospects of preservation of tangible cultural heritage management in the Lake Tana islands, Amhara Regional State. Specifically, the research inquired into the major factors affecting tangible cultural heritage management, investigated how communities were successfully involved in tangible cultural heritage management, and described the contribution of cultural management to tourism development. It employed qualitative research approaches to grasp the existing conditions in the study area. Major data-gathering techniques such as in-depth interviews, observation/photographing, and Focus Group Discussions (FGDs) were used. Related documents collected through secondary sources were examined and analyzed. In the Lake Tana islands, precious heritages are found, such as ancient religious manuscripts (written since the 9th century), sacral wall paintings, gold and silver crosses, and the crowns and prestigious clothes of the various kings of the medieval period and the 19th century. The study indicated that heritages on the Tana islands were affected by both natural and man-made problems. In the Lake Tana islands, movable heritages were looted several times by foreign aggressors, tourists, and local people who serve there. Some heritages were damaged by visitors through camera flash light and hand touch. Most heritages on the Tana islands lacked community ownership and were preserved non-professionally, which highly affected their originality and authenticity.
Therefore, the local community and the regional government should work together in the preservation of these heritage sites and enhance their role in socio-economic development as centers of research and tourist destinations.
Keywords: cultural heritages, heritage preservation, Lake Tana heritages, non-professional preservation, tangible heritages
Procedia PDF Downloads 329
26171 Reliability Analysis of Variable Stiffness Composite Laminate Structures
Authors: A. Sohouli, A. Suleman
Abstract:
This study focuses on reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed which consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after using the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance: the change of orientation between adjacent patches cannot be too large, and the maximum number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure for different standard deviations, compared to straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of material properties, than the conventional composite laminate structures.
The reason is that the variable stiffness composite laminates allow tailoring the stiffness and provide the possibility of adjusting the stress and strain distribution favorably in the structures.
Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures
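A minimal sketch of the MCS step, assuming a Gaussian strength with a given coefficient of variation and hypothetical strength margins (not the paper's actual SRSM surrogate or laminate model):

```python
import random

random.seed(0)

def monte_carlo_pf(mean_strength, cov, load, n=100_000):
    """Failure probability by Monte Carlo simulation.

    Material strength is sampled with a coefficient of variation (cov)
    and a failure is counted whenever sampled strength < applied load.
    """
    failures = 0
    for _ in range(n):
        strength = random.gauss(mean_strength, cov * mean_strength)
        if strength < load:
            failures += 1
    return failures / n

# Hypothetical margins: the variable-stiffness laminate is modelled with a
# larger mean strength reserve than the straight-fiber design.
pf_variable = monte_carlo_pf(mean_strength=1.5, cov=0.10, load=1.0)
pf_straight = monte_carlo_pf(mean_strength=1.2, cov=0.10, load=1.0)
print(pf_variable, pf_straight)
```

The same loop, fed by an SRSM surrogate instead of a closed-form limit state, is the essence of the reliability step described above.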
Procedia PDF Downloads 520
26170 Flipped Learning in the Delivery of Structural Analysis
Authors: Ali Amin
Abstract:
This paper describes a flipped learning initiative which was trialed in the delivery of the course Structural Analysis and Modelling. A short series of interactive videos was developed to introduce the key concepts of each topic. The purpose of the videos was to introduce concepts and give the students more time to develop their thoughts prior to the lecture. This allowed more time for face-to-face engagement during the lecture. As part of the initial study, videos were developed for half the topics covered. The videos included a short summary of the key concepts (< 10 mins each) as well as fully worked-out examples (~30 mins each). Qualitative feedback was obtained from the students. On a scale from strongly disagree to strongly agree, students were asked to rate statements such as 'The pre-class videos assisted your learning experience' and 'I felt I could appreciate the content of the lecture more by watching the videos prior to class'. As a result of the pre-class engagement, the students formed more specific and targeted questions during class, and this generated greater comprehension of the material. The students also scored, on average, higher marks in questions pertaining to topics which had videos assigned to them.
Keywords: flipped learning, structural analysis, pre-class videos, engineering education
Procedia PDF Downloads 91
26169 Financial Investment of a Wine Cave in Greece
Authors: Stamataki Erofili Nellie, Benardos Andreas
Abstract:
Winemaking and aging in Greece have so far been performed in special facilities, designed either as above-ground or shallow underground buildings. The latter are well known in Santorini as “canaves,” dating back to the 1700s. Canaves were mainly used for wine storage and aging, although occasionally they included a winepress so that the whole wine production could be completed there. Wine caves, on the other hand, are subterranean caves with the same use as canaves in the wine manufacturing industry, but they are excavated at a much greater depth of more than 53 meters (175 feet), whereas canaves and typical wine cellars are around 10 feet (almost 3 meters) deep. This paper discusses the advantages and disadvantages of creating a wine cave for the vinification of a winery in Greece and the financial investment or risk that has to be taken. The data presented and analysed come from wineries in Greece, especially those located on the island of Santorini. The estimated cost of excavating the model selected as a wine cave will be compared with the financial budget of the existing above-ground premises and facilities of Greek wineries, in order to show whether it is viable for a Greek winery to invest in a wine cave.
Keywords: underground space use, subterranean winery, wine cave, underground winery, Greece
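A back-of-the-envelope comparison of the two investments could use net present value; every figure below (discount rate, costs, cash flows, horizon) is a hypothetical placeholder, not data from the Greek wineries studied:

```python
def npv(rate, initial_cost, annual_net_cashflow, years):
    """Net present value of an investment (all figures hypothetical)."""
    return -initial_cost + sum(
        annual_net_cashflow / (1 + rate) ** t for t in range(1, years + 1)
    )

rate = 0.06  # assumed discount rate
# Hypothetical trade-off: the cave costs more to excavate but saves on
# cooling and energy, a common argument for subterranean wineries.
npv_cave = npv(rate, initial_cost=900_000, annual_net_cashflow=95_000, years=25)
npv_surface = npv(rate, initial_cost=600_000, annual_net_cashflow=65_000, years=25)
print(round(npv_cave), round(npv_surface))
```

The viability question the paper poses reduces to whether the cave's NPV exceeds that of the above-ground alternative once real excavation and operating figures are substituted.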
Procedia PDF Downloads 180
26168 An Authentication Protocol for Quantum Enabled Mobile Devices
Authors: Natarajan Venkatachalam, Subrahmanya V. R. K. Rao, Vijay Karthikeyan Dhandapani, Swaminathan Saravanavel
Abstract:
Quantum communication technology is an evolving field which connects multiple quantum enabled devices to the internet for secret communication or sensitive information exchange. In the future, the number of these compact quantum enabled devices will increase immensely, making them an integral part of present communication systems. Therefore, the safety and security of such devices is also a major concern for us. To ensure that customer-sensitive information is not eavesdropped on or deciphered, we need strong authentication and encryption mechanisms. In this paper, we propose a mutual authentication scheme between these smart quantum devices and a server, based on the secure exchange of information through the quantum channel, which gives better solutions for symmetric key exchange issues. An important part of this work is to propose a secure mutual authentication protocol over the quantum channel. We show that our approach offers a robust authentication protocol and, further, that our solution is lightweight, scalable, and cost-effective with optimized computational processing overheads.
Keywords: quantum cryptography, quantum key distribution, wireless quantum communication, authentication protocol, quantum enabled device, trusted third party
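Once a symmetric key has been agreed over the quantum channel (e.g. by QKD), the classical mutual-authentication step could, for instance, be a keyed challenge-response exchange. This is an illustrative sketch of that pattern, not the protocol proposed in the paper:

```python
import hmac, hashlib, secrets

# Assumption: K is the symmetric key both parties obtained over the
# quantum channel; proving knowledge of K authenticates each side.
K = secrets.token_bytes(32)

def respond(key, challenge):
    """Keyed response to a fresh random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server challenges the device.
c_server = secrets.token_bytes(16)
device_tag = respond(K, c_server)
server_ok = hmac.compare_digest(device_tag, respond(K, c_server))

# Device challenges the server (mutual authentication).
c_device = secrets.token_bytes(16)
server_tag = respond(K, c_device)
device_ok = hmac.compare_digest(server_tag, respond(K, c_device))

print(server_ok and device_ok)
```

Fresh random challenges prevent replay, and `hmac.compare_digest` avoids timing side channels when checking the tags.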
Procedia PDF Downloads 174
26167 Fault Analysis of Induction Machine Using Finite Element Method (FEM)
Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi
Abstract:
The paper presents a finite element (FE) based efficient analysis procedure for the induction machine (IM). Two FE formulation approaches are proposed to achieve this goal: the magnetostatic and the non-linear transient time-stepped formulations. Studies based on finite element models offer much more information on the phenomena characterizing the operation of electrical machines than classical analytical models. This explains the increasing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. In this work, a simple dynamic model of an IM with an inter-turn winding fault and a broken bar fault is presented. This fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity. The comparison of the results obtained from the simulation tests allowed the precision of the proposed FEM model to be verified. This paper presents a technique based on Fast Fourier Transform (FFT) analysis of the stator current and the electromagnetic torque to detect broken rotor bar faults. The technique used and the results obtained clearly show the possibility of extracting signatures to detect and locate faults.
Keywords: finite element method (FEM), induction motor (IM), short-circuit fault, broken rotor bar, Fast Fourier Transform (FFT) analysis
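The FFT-based signature extraction can be sketched as follows: a broken rotor bar injects sideband components at (1 ± 2s)f into the stator current, where s is the slip and f the supply frequency. The signal below is synthetic, with assumed slip and sideband amplitudes:

```python
import numpy as np

fs, f0, s = 10_000, 50.0, 0.04        # sampling rate, supply freq, slip
t = np.arange(0, 2.0, 1 / fs)

# Healthy 50 Hz stator current plus the (1 - 2s)f and (1 + 2s)f sideband
# components that a broken rotor bar injects (amplitudes assumed).
i_stator = (np.sin(2 * np.pi * f0 * t)
            + 0.05 * np.sin(2 * np.pi * (1 - 2 * s) * f0 * t)
            + 0.05 * np.sin(2 * np.pi * (1 + 2 * s) * f0 * t))

spec = np.abs(np.fft.rfft(i_stator))
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def peak_near(f, tol=0.5):
    """Largest spectral magnitude within tol Hz of frequency f."""
    band = (freqs > f - tol) & (freqs < f + tol)
    return spec[band].max()

# Fault signature: sidebands at (1 +/- 2s)*f0 = 46 Hz and 54 Hz.
lower, upper = peak_near((1 - 2 * s) * f0), peak_near((1 + 2 * s) * f0)
print(lower > 0, upper > 0)
```

Monitoring the sideband-to-fundamental ratio over time is a common way to track fault severity from the same spectrum.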
Procedia PDF Downloads 301
26166 Seismic Considerations in Case Study of Kindergartens Building Design: Ensuring Safety and Structural Integrity
Authors: Al-Naqdi Ibtehal Abdulmonem
Abstract:
Kindergarten buildings are essential for early childhood education, providing a secure environment for children's development. However, they are susceptible to seismic forces, which can endanger occupants during earthquakes. This article emphasizes the importance of conducting thorough seismic analysis and implementing proper structural design to protect the well-being of children, staff, and visitors. By prioritizing structural integrity and considering functional requirements, engineers can mitigate risks associated with seismic events. The use of specialized software like ETABS is crucial for designing earthquake-resistant kindergartens. An analysis using ETABS software compared the structural performance of two single-story kindergartens of Iraq's Ministry of Education, designed with and without seismic considerations. The analysis aimed to assess the impact of seismic design on structural integrity and safety. One kindergarten was designed with seismic considerations, including moment frames. In contrast, the same kindergarten analyzed without seismic effects revealed a lack of structural elements to resist lateral forces, rendering it vulnerable to structural failure during an earthquake. In the design with seismic considerations, the maximum major shear induced by lateral loads and seismic forces increased more than 4-fold, and the bending moment more than 5-fold. This component of shear force is vital for designing elements to resist lateral loads and ensure structural stability.
Keywords: seismic analysis, structural design, lateral loads, earthquake resistance, major shear, ETABS
Procedia PDF Downloads 69
26165 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to substantial cost and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach targeting the prediction of open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, the local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering.
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As expected, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and human gene TSS regions obtained from refTSS, which showed proper accordance of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, which has some restrictions, such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the consideration of multiple features. In contrast, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
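A minimal stand-in for the depth-based pipeline is sketched below: a toy coverage track, low-pass filtering via the DFT, and a two-way OCR+/OCR- labelling. The paper's actual method additionally performs count normalization, graph construction, and linear-programming graph cuts:

```python
import numpy as np

# Toy coverage track: cfDNA sequencing depth along a genomic window, with
# two artificial "dips" standing in for nucleosome-depleted open regions.
rng = np.random.default_rng(1)
depth = rng.poisson(30, 1000).astype(float)
depth[200:300] -= 15
depth[600:700] -= 15

# Low-pass filtering via the DFT: keep only the slowest components, a
# stand-in for the Discrete Fourier Transform conversion step.
coeffs = np.fft.rfft(depth - depth.mean())
coeffs[20:] = 0
smooth = np.fft.irfft(coeffs, n=len(depth)) + depth.mean()

# Two-way labelling (OCR+ / OCR-): positions whose smoothed depth drops
# well below the track mean are called open chromatin candidates.
ocr = smooth < depth.mean() - 5
print(int(ocr.sum()))
```

On real data the thresholding would be replaced by the correlation-clustering step, but the depth-dip intuition is the same.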
Procedia PDF Downloads 150
26164 The Impact of University League Tables on the Development of Non-Elite Universities. A Case Study of England
Authors: Lois Cheung
Abstract:
This article examines the impact of league tables on non-elite universities in the English higher education system. The purpose of this study is to explore the use of rankings in strategic planning by low-ranked universities in this highly competitive higher education market. A sample of non-elite universities was selected for a content analysis based on the measures used by The Guardian rankings. Interestingly, these universities care about their rankings within a single national system. Content analysis appears to be an effective approach to investigating the presence of such influences. It is particularly noteworthy that all sampled universities use these measure terminologies in their strategic plans, missions, and news coverage on their institutional web pages. This analysis may exemplify the key challenges that many low-ranking universities in England are facing in the highly competitive and diversified higher education market. These universities use rankings to communicate with their stakeholders, mainly students, in order to fill places and thereby secure their major source of funding. The study concludes with comments on the likely effects of the rankings paradigm in undermining the contributions of non-elite universities.
Keywords: league tables, measures, post-1992 universities, ranking, strategy
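The terminology-counting side of such a content analysis can be sketched as below; the measure list only approximates The Guardian's indicators, and the strategic-plan excerpt is invented:

```python
from collections import Counter
import re

# Approximate ranking measures to search for (assumed list, not the
# paper's exact coding scheme).
measures = ["student satisfaction", "teaching", "spend per student",
            "staff-student ratio", "career", "value added", "entry tariff"]

# Invented strategic-plan excerpt standing in for scraped web-page text.
plan_text = """Our strategy prioritises student satisfaction and teaching
quality, improving the staff-student ratio and graduate career outcomes.
Teaching excellence and student satisfaction remain central."""

# Count how often each measure term appears in the plan.
counts = Counter()
for m in measures:
    counts[m] = len(re.findall(m, plan_text.lower()))
print(dict(counts))
```

Aggregating such counts across a sample of institutional documents is one simple way to quantify how far ranking terminology has penetrated strategic planning.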
Procedia PDF Downloads 183
26163 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In football analytics, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study presents a detailed examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset of player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis involves examining feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint attributes pertinent to predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), are explored to develop predictive models. Each model's performance is evaluated using metrics such as Mean Squared Error (MSE) and R-squared to gauge its efficacy in predicting player performance. Furthermore, the investigation encompasses a top-player analysis to recognize the top-performing players based on the anticipated overall performance scores. A nationality analysis scrutinizes the player distribution by nationality and investigates potential correlations between nationality and player performance. A positional analysis concentrates on the player distribution across positions and assesses the average performance of players in each position. An age analysis evaluates the influence of age on player performance and identifies any discernible trends or patterns associated with player age groups.
The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the understanding of player success on the field. By combining data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to analyze and forecast player performance effectively. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
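As a minimal stand-in for the modelling step, the sketch below fits an ordinary-least-squares linear regression on synthetic player attributes and reports MSE and R-squared; the study itself also evaluates Random Forest, SVR, and ANN models on real data:

```python
import numpy as np

# Synthetic toy data: player attributes (e.g. pace, shooting, passing, age)
# and an overall performance score; invented stand-in for the real dataset.
rng = np.random.default_rng(0)
X = rng.uniform(50, 95, size=(200, 4))
w_true = np.array([0.3, 0.4, 0.25, -0.1])
y = X @ w_true + rng.normal(0, 1.0, 200)

# Train/test split and ordinary least squares with an intercept column.
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]
A = np.c_[Xtr, np.ones(len(Xtr))]
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.c_[Xte, np.ones(len(Xte))] @ coef

# The two evaluation metrics named in the abstract.
mse = float(np.mean((yte - pred) ** 2))
r2 = 1 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)
print(round(mse, 2), round(float(r2), 3))
```

Swapping the OLS fit for a tree ensemble or neural network leaves the split/fit/score structure unchanged, which is why MSE and R-squared make the models directly comparable.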
Procedia PDF Downloads 38
26162 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of Municipality of Caraguatatuba SP-Brazil
Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira
Abstract:
Environmental vulnerability studies are essential for prioritizing actions for disaster risk reduction. The aim of this study is to analyze socio-environmental vulnerability obtained through a Census survey, followed by both a statistical analysis (PCA, SPSS/IBM) and a spatial analysis by GIS (ArcGIS/ESRI), taking as a case study the Municipality of Caraguatatuba-SP, Brazil. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU), and the Environmental Protection Zone (ZPA). For the mapping of the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) facing disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). Considering criticality, the variables of greatest influence were related to literate persons responsible for the household, literate persons aged 5 or older, persons aged 60 or older, and the income of the person responsible for the household. In the support capacity analysis, the predominant influences were good household infrastructure in districts with low population density and, conversely, the presence of neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes coincide with the ZEIS and ZPA zones, whose zoning includes areas occupied by a low-income population, the presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects the inadequate use of urban land in relation to the spatial distribution of the population and the territorial infrastructure, which favors an increase in disaster risk.
It can be concluded that the study made it possible to observe the convergence between the vulnerability analysis and the areas classified in the urban zoning. The occupation of areas unsuitable for housing due to their risk characteristics was confirmed, leading to the conclusion that the methodologies applied are agile instruments to support actions for disaster risk reduction.
Keywords: socio-environmental vulnerability, urban zoning, disaster risk reduction, methodologies
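The PCA step can be sketched with synthetic census-like indicators; the variables, loadings, and numbers below are invented stand-ins for the IBGE data, and the eigendecomposition replaces the SPSS run:

```python
import numpy as np

# Invented per-tract indicators sharing one latent "vulnerability" factor.
rng = np.random.default_rng(42)
base = rng.normal(0, 1, (100, 1))
data = np.hstack([
    2.0 * base + rng.normal(0, 0.3, (100, 1)),   # literacy of household head
    1.5 * base + rng.normal(0, 0.3, (100, 1)),   # income of household head
    rng.normal(0, 1, (100, 1)),                  # residents aged 60+
    -1.0 * base + rng.normal(0, 0.3, (100, 1)),  # inadequate housing
])

# PCA by eigendecomposition of the correlation matrix of standardized data.
z = (data - data.mean(0)) / data.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(z, rowvar=False))
explained = eigval[::-1] / eigval.sum()   # variance ratio, largest first
scores = z @ eigvec[:, ::-1]              # component scores per tract
print(round(float(explained[0]), 2))
```

The leading component scores, once mapped per census tract in a GIS, are what produce criticality-style vulnerability surfaces like those in the study.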
Procedia PDF Downloads 298
26161 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform
Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung
Abstract:
Functional near infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuro-imaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer the status of neuronal activity inside the brain indirectly. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications. Its portability, low cost, and acceptable temporal resolution give fNIRS an advantageous position among neuro-imaging modalities. In this study, an optimization model for the impulse response function has been used to estimate/predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that lasts around 200-300 milliseconds and better localizes neural activity.
Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing
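The modified Beer-Lambert law step can be sketched as a 2x2 solve: optical-density changes at two wavelengths are mapped to oxy- and deoxy-hemoglobin concentration changes. The extinction coefficients, distances, and ΔOD values below are illustrative, not calibrated numbers:

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) for ~760 nm and ~850 nm;
# rows are wavelengths, columns are [eps_HbO, eps_HbR].
E = np.array([[1.4866, 3.8437],    # 760 nm
              [2.5264, 1.7986]])   # 850 nm
d, dpf = 3.0, 6.0  # source-detector distance (cm), differential path factor

def mbll(d_od_760, d_od_850):
    """Solve dOD = E @ [dHbO, dHbR] * d * dpf for the concentration changes."""
    rhs = np.array([d_od_760, d_od_850]) / (d * dpf)
    d_hbo, d_hbr = np.linalg.solve(E, rhs)
    return d_hbo, d_hbr

# An "initial dip" shows up as a brief rise in HbR while HbO is flat or
# falling; the input dOD values here were picked to produce that pattern.
d_hbo, d_hbr = mbll(0.010, 0.004)
print(round(float(d_hbo), 6), round(float(d_hbr), 6))
```

Running this conversion sample-by-sample on the two-wavelength fNIRS stream yields the HbO/HbR time courses in which the 200-300 ms initial dip is sought.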
Procedia PDF Downloads 226
26160 Chemical Kinetics and Computational Fluid-Dynamics Analysis of H2/CO/CO2/CH4 Syngas Combustion and NOx Formation in a Micro-Pilot-Ignited Supercharged Dual Fuel Engine
Authors: Ulugbek Azimov, Nearchos Stylianidis, Nobuyuki Kawahara, Eiji Tomita
Abstract:
A chemical kinetics and computational fluid dynamics (CFD) analysis was performed to evaluate the combustion of syngas derived from biomass and coke-oven solid feedstock in a micro-pilot-ignited supercharged dual-fuel engine under lean conditions. For this analysis, a new reduced syngas chemical kinetics mechanism was constructed and validated by comparing the ignition delay and laminar flame speed data with those obtained from experiments and other detailed chemical kinetics mechanisms available in the literature. A reaction sensitivity analysis was conducted for ignition delay at elevated pressures in order to identify the important chemical reactions that govern the combustion process. The chemical kinetics of NOx formation was analyzed for H2/CO/CO2/CH4 syngas mixtures by using counterflow burner and premixed laminar flame speed reactor models. The new mechanism showed very good agreement with experimental measurements and accurately reproduced the effect of pressure, temperature, and equivalence ratio on NOx formation. In order to identify the species important for NOx formation, a sensitivity analysis was conducted for pressures of 4 bar, 10 bar, and 16 bar and a preheat temperature of 300 K. The results show that NOx formation is driven mostly by hydrogen-based species, while other species, such as N2, CO2, and CH4, also have important effects on combustion. Finally, the new mechanism was used in a multidimensional CFD simulation to predict the combustion of syngas in a micro-pilot-ignited supercharged dual-fuel engine, and the results were compared with experiments. The mechanism showed the closest prediction of the in-cylinder pressure and the rate of heat release (ROHR).
Keywords: syngas, chemical kinetics mechanism, internal combustion engine, NOx formation
Procedia PDF Downloads 41026159 A Nuclear Negotiation Qualitative Case Study with Force Field Analysis
Authors: Onur Yuksel
Abstract:
In today’s complex foreign relations between countries, nuclear enrichment and nuclear weapons have become a threat to all states in the world. There are a couple of isolated states, such as Iran and North Korea, that have the capacity to produce nuclear weapons. In this article, the Iran nuclear negotiation was analyzed in terms of Iran’s relations, especially with the United States, in order to find the important factors that affect the course of the ongoing nuclear negotiation. In this sense, Force Field Analysis was used, determining and setting forth the driving and restraining forces of the nuclear negotiations, in order to see the big picture and to develop strategies that may improve the long-term ongoing Iran nuclear negotiations. It was found that the Iran nuclear negotiation heavily depends on breaking down the idea of Iran’s support for terrorist organizations and on being more transparent about nuclear and uranium enrichment. It was also found that Iran has to rebuild its relations with Western countries, especially with the United States. In addition, the countries that contribute to the Iran nuclear negotiations will need to work on the dynamics and drivers of Israel-Iran relations in order to peacefully transform the conflict between the two states.Keywords: driving force, Iran nuclear negotiation, restraining force, the force field analysis
Procedia PDF Downloads 15826158 Seismic Safety Evaluation of Weir Structures Using the Finite and Infinite Element Method
Authors: Ho Young Son, Bu Seog Ju, Woo Young Jung
Abstract:
This study presents the seismic safety evaluation of a weir structure, a flood defense structure in civil engineering, subjected to strong earthquake ground motions. The seismic safety analysis procedure was illustrated through the development of Finite Element (FE) and Infinite Element (IFE) models on the ABAQUS platform. The IFE model was generated with CINPS4, a 4-node linear one-way infinite element, as a solid continuum infinite element in the foundation area of the weir structure, and a nonlinear FE model using a friction model for soil-structure interaction was then applied. In order to understand the complex behavior of weir structures, nonlinear time history analysis was carried out. Consequently, it was interesting to note that compressive stress made the weir structure more vulnerable than tensile stress during an earthquake. The stress concentration of the weir structure occurred at the connection between the weir body and the stilling basin. Both tensile and compressive stresses were lower in the IFE model than in the FE model of the weir structure.Keywords: seismic, numerical analysis, FEM, weir, boundary condition
Procedia PDF Downloads 45226157 Changing New York Financial Clusters in the 2000s: Modeling the Impact and Policy Implication of the Global Financial Crisis
Authors: Silvia Lorenzo, Hongmian Gong
Abstract:
With the influx of research assessing the economic impact of the global financial crisis of 2007-8, a spatial analysis based on empirical data is needed to better understand the spatial significance of the financial crisis in New York, a key international financial center also considered the origin of the crisis. Using spatial statistics, financial clusters specializing in credit and securities throughout the New York metropolitan area are identified for 2000 and 2010, the periods before and after the height of the global financial crisis. Geographically Weighted Regressions are then used to examine the processes underlying the formation and movement of financial geographies across the states, counties and ZIP codes of the New York metropolitan area throughout the 2000s, with specific attention to tax regimes, employment, household income, technology, and transportation hubs. This analysis provides useful inputs for financial risk management and public policy initiatives aimed at addressing regional economic sustainability across state boundaries, while also developing the groundwork for further research on a spatial analysis of the global financial crisis.Keywords: financial clusters, New York, global financial crisis, geographically weighted regression
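The core idea behind a geographically weighted regression, as used in the abstract above, can be sketched as a separate weighted least-squares fit at each location, with weights decaying with distance from that location. The following minimal numpy illustration uses entirely synthetic data and an assumed Gaussian kernel bandwidth, not the paper's dataset or software:

```python
# Toy GWR sketch: local slope estimates vary across space when the true
# relationship drifts geographically. Data and bandwidth are assumed.
import numpy as np

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 10, (n, 2))        # e.g. ZIP-code centroids
x = rng.normal(size=n)                     # a predictor (e.g. a tax measure)
beta_true = 1.0 + 0.3 * coords[:, 0]       # coefficient drifts west to east
y = beta_true * x + rng.normal(scale=0.1, size=n)

def gwr_slope(site, bandwidth=2.0):
    """Local slope at `site` via Gaussian-kernel weighted least squares."""
    d2 = ((coords - site) ** 2).sum(1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    X = np.column_stack([np.ones(n), x])
    XtW = X.T * w                          # weight each observation
    return np.linalg.solve(XtW @ X, XtW @ y)[1]

west = gwr_slope(np.array([1.0, 5.0]))
east = gwr_slope(np.array([9.0, 5.0]))
print(west < east)                         # slope differs across space
```

A global OLS fit would return a single averaged slope; the local fits recover the spatial variation, which is what makes the method suitable for mapping how crisis effects differed across the metropolitan area.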
Procedia PDF Downloads 30926156 In-Vitro and Antibacterial Studies for Silicate-Phosphate Glasses Formed with Biosynthesized Silica
Authors: Damandeep Kaur, O.P. Pandey, M.S. Reddy
Abstract:
In the present research, biosynthesis of silica particles has been carried out successfully. For this purpose, the agricultural waste rice husk (RH) has been utilized. Among the several types of agricultural waste, RH is considered to be cost-effective and easily accessible. In the present investigation, a chemical approach has been followed to extract silica nanoparticles. X-Ray Diffraction (XRD) patterns indicated the amorphous nature of the silica in the lower temperature range. Silica and other mineral contents have been determined using energy dispersive spectroscopy (EDS). Morphological and structural studies have been carried out using Field Emission Scanning Electron Microscopy (FE-SEM) and Fourier Transform Infrared (FTIR) spectroscopy. Further, the silica extracted from RH has been used for the preparation of glasses. The appearance of broad humps in the XRD patterns confirmed the amorphous nature of the prepared glasses. These glasses exhibited an enhanced antibacterial effect against both Gram-positive and Gram-negative bacteria. The as-synthesized glass samples can be further used in physical and structural studies for drug loading applications.Keywords: rice husk, biosynthesized silica, bioactive glasses, antibacterial studies
Procedia PDF Downloads 11426155 Numerical Study for Structural Design of Composite Rotor with Crack Initiation
Authors: A. Chellil, A. Nour, S. Lecheb, H.Mechakra, A. Bouderba, H. Kebir
Abstract:
In this paper, a numerical study of the instability of a composite rotor under dynamic loading is presented, using harmonic analysis. The stresses acting on the rotor are analyzed. The different energies and the virtual work of the aerodynamic loads on the rotor are calculated. The use of composite material for the rotor offers good stability. Numerical calculations on the three-dimensional model show that damage has a negative effect on the stability of the rotor. A transient study of the composite rotor allowed the vibratory responses due to various excitations to be determined.Keywords: rotor, composite, damage, finite element, numerical
Procedia PDF Downloads 48826154 An Analysis of Conditions for Efficiency Gains in Large ICEs Using Cycling
Authors: Bauer Peter, Murillo Jenny
Abstract:
This paper investigates the bounds of achievable fuel efficiency improvements in engines due to cycling between two operating points, assuming a series hybrid configuration. It is shown that for linear bsfc dependencies (as a function of power), cycling is only beneficial if the average power demand is smaller than the power at the optimal bsfc value. Exact expressions for the fuel efficiency gains relative to the constant output power case are derived. This asymptotic analysis is then extended to the case where transient losses due to a change in the operating point are also considered. The boundary bsfc trajectory, where constant power application and cycling yield the same fuel consumption, is investigated. It is shown that the boundary bsfc locus of the second, non-optimal operating point is hyperbolic. The analysis of the boundary case allows one to evaluate whether cycling can be beneficial for a particular engine. The introduced concepts are illustrated through a number of real-world examples, i.e., large production diesel engines in series hybrid configurations.Keywords: cycling, efficiency, bsfc, series hybrid, diesel, operating point
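The comparison at the heart of the abstract above can be illustrated numerically: with a linear bsfc curve, the fuel rate bsfc(P)·P is quadratic in P, so time-averaging fuel over two operating points can beat running at the constant average power. The coefficients below are assumed for illustration only (they are not from the paper), and transient switching losses are neglected, as in the paper's baseline analysis:

```python
# Illustrative fuel-rate comparison: constant power vs. cycling between
# two operating points at the same average power. All numbers assumed.

def bsfc(p):                      # g/kWh, assumed linear in power p (kW)
    return 260.0 - 0.5 * p

def fuel_rate(p):                 # fuel mass flow, g/h
    return bsfc(p) * p            # quadratic (here concave) in p

def cycling_fuel(p1, p2, p_avg):
    """Average fuel rate when cycling between p1 and p2 such that the
    time-averaged power equals p_avg (transient losses neglected)."""
    d = (p_avg - p2) / (p1 - p2)  # fraction of time spent at p1
    return d * fuel_rate(p1) + (1.0 - d) * fuel_rate(p2)

p_avg = 60.0
const = fuel_rate(p_avg)                 # 13800.0 g/h
cyc = cycling_fuel(100.0, 20.0, p_avg)   # 13000.0 g/h
print(const, cyc)
```

With this assumed decreasing bsfc the fuel-rate curve is concave, so the cycling average lies below it; the paper derives the exact conditions (and the hyperbolic boundary locus) under which this advantage survives once transient losses are included.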
Procedia PDF Downloads 50426153 Pictorial Multimodal Analysis of Selected Paintings of Salvador Dali
Authors: Shaza Melies, Abeer Refky, Nihad Mansoor
Abstract:
Multimodality involves the communication between verbal and visual components in various discourses. A painting represents a form of communication between the artist and the viewer in terms of colors, shades, objects, and the title. This paper aims to present how multimodality can be used to decode the verbal and visual dimensions a painting holds. For that purpose, this study uses Kress and van Leeuwen’s theoretical framework of visual grammar for the analysis of the multimodal semiotic resources of selected paintings of Salvador Dali. This study investigates the visual decoding of the selected paintings and analyzes their social and political meanings using Kress and van Leeuwen’s framework of visual grammar. The paper attempts to answer the following questions: 1. How far can multimodality decode the verbal and non-verbal meanings of surrealistic art? 2. How can Kress and van Leeuwen’s theoretical framework of visual grammar be applied to analyze Dali’s paintings? 3. To what extent is Kress and van Leeuwen’s theoretical framework of visual grammar apt to deliver the political and social messages of Dali? The paper reached the following findings: the framework’s descriptive tools (representational, interactive, and compositional meanings) can be used to analyze the paintings’ titles and their visual elements. Social and political messages were delivered by the appropriate use of color, gesture, vectors, modality, and the way social actors were represented.Keywords: multimodal analysis, painting analysis, Salvador Dali, visual grammar
Procedia PDF Downloads 12226152 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most of the existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and less accurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimensionality of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix due to the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. The results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99
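The first pipeline described above (PCA to shrink the dimension, then LDA, then KNN) can be sketched directly with scikit-learn. The dataset below is a built-in stand-in rather than KDDcup99/NSL-KDD, and the component count and neighbor count are assumptions, not the paper's settings:

```python
# Minimal PCA -> LDA -> KNN pipeline, a sketch of the paper's first method.
# PCA first sidesteps the small-sample-size problem: LDA's within-class
# scatter matrix is singular when samples are fewer than dimensions.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)   # stand-in for network traffic features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=20),          # assumed component count
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```

The second method, pseudo-inverse LDA, replaces the inverse of the within-class scatter matrix with its Moore-Penrose pseudo-inverse (e.g. `numpy.linalg.pinv`) so that the projection can be computed even when the matrix is singular.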
Procedia PDF Downloads 22726151 Nonuniformity of the Piston Motion in a Radial Aircraft Engine
Authors: K. Pietrykowski, M. Bialy, M. Duk
Abstract:
One of the main disadvantages of radial engines is the non-uniformity of the operating cycles of the individual cylinders. This paper discusses the results of a kinematic analysis of piston motion in the ASz-62IR radial engine. The ASz-62IR engine is produced in Poland and mounted in the M-18 Dromader and the An-2. The results are shown as the courses of the motion of the pistons. The discrepancies in the courses for individual pistons can result in different masses of the charge filling the cylinders. In addition, the piston acceleration differs between cylinders, which triggers additional vibration in the engine.Keywords: nonuniformity, kinematic analysis, piston motion, radial engine
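The geometric source of this non-uniformity is that only one cylinder's piston is driven by the master rod; the others hang on link (slave) rods articulated to it, so their motion deviates from the ideal slider-crank course. A simplified sketch of this kinematics follows; all dimensions and the knuckle-pin placement convention are assumed for illustration and are not the ASz-62IR's actual geometry:

```python
# Simplified master/slave-rod kinematics of a 5-cylinder radial engine.
# Geometry values below are assumed, not the ASz-62IR's.
import math

R  = 0.0775   # crank radius, m (assumed)
Lm = 0.30     # master rod length, m (assumed)
rk = 0.06     # knuckle-pin circle radius on the master rod big end (assumed)
Ls = 0.28     # slave (link) rod length, m (assumed)
N  = 5        # number of cylinders

def piston_travel(theta, i):
    """Piston position along cylinder i's axis (i = 0 is the master)."""
    alpha = 2 * math.pi * i / N                        # cylinder axis angle
    cx, cy = R * math.cos(theta), R * math.sin(theta)  # crankpin position
    xp = cx + math.sqrt(Lm**2 - cy**2)                 # master slider-crank
    if i == 0:
        return xp
    phi = math.atan2(0.0 - cy, xp - cx)                # master rod angle
    # knuckle pin for cylinder i, carried on the master rod big end
    kx = cx + rk * math.cos(phi + alpha)
    ky = cy + rk * math.sin(phi + alpha)
    # resolve along / across cylinder i's axis, then close the rod triangle
    a = kx * math.cos(alpha) + ky * math.sin(alpha)
    b = -kx * math.sin(alpha) + ky * math.cos(alpha)
    return a + math.sqrt(Ls**2 - b**2)

print(round(piston_travel(0.0, 0), 4))   # master piston at its TDC: R + Lm
```

Evaluating `piston_travel` over a full crank revolution for each cylinder shows that the slave pistons' courses, and hence their accelerations and effective strokes, differ from the master's, which is the effect the paper quantifies.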
Procedia PDF Downloads 38526150 Electrical Equivalent Analysis of Micro Cantilever Beams for Sensing Applications
Authors: B. G. Sheeparamatti, J. S. Kadadevarmath
Abstract:
Microcantilevers are basic MEMS devices which can be used as sensors and actuators, and electronics can be easily built into them. The detection principle of microcantilever sensors is based on the measurement of a change in cantilever deflection or a change in its resonance frequency. The objective of this work is to explore the analogies between the mechanical and electrical equivalents of microcantilever beams. Normally, scientists and engineers working in MEMS use expensive software like CoventorWare, IntelliSuite, ANSYS/Multiphysics, etc. This paper indicates the need for developing an electrical equivalent of the MEMS structure, with which one can gain better insight into the important parameters of the MEMS structure and their interrelations. In this work, considering the mechanical model of the microcantilever, the equivalent electrical circuit is drawn and, using the force-voltage analogy, analyzed with circuit simulation software. By doing so, one gains access to a powerful set of intellectual tools that have been developed for understanding electrical circuits. The analysis is then performed using ANSYS/Multiphysics, software based on the finite element method (FEM). It is observed that the mechanical and electrical domain results for a rectangular microcantilever are in agreement with each other.Keywords: electrical equivalent circuit analogy, FEM analysis, micro cantilevers, micro sensors
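The force-voltage analogy mentioned above maps the cantilever's lumped mass-damper-spring model (m, c, k) onto a series RLC circuit: inductance L = m, resistance R = c, capacitance C = 1/k, with voltage playing the role of force and current that of velocity. A quick check of the analogy is that both systems share the same resonance frequency; the parameter values below are assumed for illustration, not taken from any particular cantilever:

```python
# Force-voltage analogy check: a mass-damper-spring system and its series
# RLC equivalent have the same resonance frequency by construction.
import math

m, c, k = 2e-11, 1e-8, 0.8        # kg, N*s/m, N/m (assumed lumped values)
L, R, C = m, c, 1.0 / k           # electrical equivalents under the analogy

f_mech = math.sqrt(k / m) / (2 * math.pi)          # mechanical resonance, Hz
f_elec = 1.0 / (2 * math.pi * math.sqrt(L * C))    # series RLC resonance, Hz

print(abs(f_mech - f_elec) < 1e-6 * f_mech)        # identical frequencies
```

This is exactly why a circuit simulator can stand in for mechanical analysis: once the lumped parameters are known, the cantilever's frequency response is read off as the impedance response of the equivalent circuit.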
Procedia PDF Downloads 39726149 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially taking into consideration that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
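The Monte Carlo microstructure step described above can be caricatured in a few lines: grow each chain event by event, where every propagation step may instead end the chain or insert a short- or long-chain branch with some probability. The event probabilities below are purely illustrative stand-ins, not rates from the paper's kinetic model:

```python
# Toy Monte Carlo chain-growth sketch: sample chain length and branch
# counts per chain. Probabilities per propagation step are assumed.
import random

random.seed(0)

def grow_chain(p_term=0.001, p_scb=0.02, p_lcb=0.002):
    """Return (chain length, short branches, long branches) for one chain."""
    length = scb = lcb = 0
    while True:
        length += 1
        r = random.random()
        if r < p_term:                     # termination / chain transfer
            return length, scb, lcb
        if r < p_term + p_scb:             # backbiting -> short-chain branch
            scb += 1
        elif r < p_term + p_scb + p_lcb:   # transfer to polymer -> LCB
            lcb += 1

chains = [grow_chain() for _ in range(5000)]
avg_len = sum(c[0] for c in chains) / len(chains)
print(round(avg_len))   # close to the expected 1 / p_term = 1000
```

An ensemble of such sampled chains yields the full distributions (molecular weight, branching per chain length) that average kinetic models cannot provide, which is the role the hybrid Monte Carlo step plays in the multi-scale approach.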
Procedia PDF Downloads 12526148 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano
Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das
Abstract:
Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for the purpose of hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the right energy window, ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e. CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). 
A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen with neutral and charged Mg nanocluster systems indicate the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction rather than the strong Mg-H bond. Notwithstanding the fact that in practical applications these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption
Procedia PDF Downloads 41526147 All Solution-Processed Organic Light Emitting Diode with Low Melting Point Alloy Encapsulation
Authors: Geon Bae, Cheol Hee Moon
Abstract:
Organic Light Emitting Diodes (OLEDs) are being developed rapidly as next-generation displays due to their self-luminous and flexible characteristics. OLEDs are highly susceptible to moisture and oxygen due to their structural properties, thus requiring a high level of encapsulation technology. Recently, encapsulation technology such as Thin Film Encapsulation (TFE) has been developed for OLEDs, but it is not sufficient to prevent moisture permeation at the sides. In this study, we propose an OLED encapsulation method using a Low Melting Point Alloy (LMPA). The LMPA line was designed in a square box shape on the outer edge of the device and was formed by the screen printing method. To determine whether the LMPA has an effect on the OLED, we fabricated solution-processed OLEDs with a square-shaped LMPA line and evaluated the I-V-L characteristics of the OLEDs. Also, the resistance characteristic of the LMPA line was observed by repeatedly bending the LMPA line. It is expected that LMPA encapsulation will have a great advantage in shortening the process time and reducing cost.Keywords: OLED, encapsulation, LMPA, solution process
Procedia PDF Downloads 24626146 The Relationships between the Feelings of Bullying, Self- Esteem, Employee Silence, Anger, Self- Blame and Shame
Authors: Şebnem Aslan, Demet Akarçay
Abstract:
The objective of this study is to investigate the feelings caused by bullying among health employees and the relationships between these feelings in the workplace. In this context, the relationships between bullying and the feelings of self-esteem, employee silence, anger, self-blame and shame are examined. This study was conducted among 512 health employees in three hospitals in Konya, using a survey method and simple random sampling. The scales of bullying, self-esteem, employee silence, anger, self-blame, and shame were administered within the study. The obtained data were analyzed with descriptive analysis, correlation, confirmatory factor analysis, structural equation modeling and path analysis. The results of the study showed that while bullying had a positive effect on self-esteem (.61), employee silence (.41) and anger (.18), a negative effect on self-blame and shame (-.26) was observed. Employee silence positively affected self-blame and shame (.83). Besides, self-esteem positively impacted self-blame and shame (.18) and employee silence (.62), and self-blame and shame negatively affected anger (-.20). Similarly, self-esteem negatively affected anger (-.13).Keywords: bullying, self-esteem, employee silence, anger, shame and guilt, healthcare employee
Procedia PDF Downloads 29726145 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis
Authors: Catalina Sau Man Ng
Abstract:
Workplace bullying is generally defined as a form of direct or indirect maltreatment at work, including harassing, offending, or socially isolating someone, or negatively affecting someone’s work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using the latent class analysis approach. It is often argued that the traditional dichotomous classification of workplace bullying into 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to demonstrate higher construct and predictive validity. In the present study, a representative sample of 2814 employees (Male: 54.7%, Female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese Version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study will also be discussed in this working paper.Keywords: latent class analysis, prevalence, survey, workplace bullying
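The mechanics of latent class analysis can be sketched with a small EM iteration on binary item responses: each respondent gets a probability of belonging to each class, and each class is characterized by its item-endorsement profile. The example below is entirely synthetic (two classes, five yes/no items) and is not the paper's data or its estimation software; the paper fits four classes to questionnaire items:

```python
# Minimal latent-class EM on synthetic binary "bullying item" responses.
# Two simulated classes (rarely vs. often bullied); all numbers assumed.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([[0.1] * 5, [0.8] * 5])     # item probabilities per class
z = rng.integers(0, 2, size=500)              # true class memberships
X = (rng.random((500, 5)) < true_p[z]).astype(float)

K = 2
pi = np.full(K, 1.0 / K)                      # class weights
p = np.array([np.full(5, 0.3), np.full(5, 0.7)])   # broken-symmetry start
for _ in range(100):                          # EM iterations
    # E-step: posterior class responsibilities per respondent
    ll = (X[:, None, :] * np.log(p) +
          (1 - X[:, None, :]) * np.log(1 - p)).sum(-1) + np.log(pi)
    r = np.exp(ll - ll.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    # M-step: update class weights and item profiles
    pi = r.mean(0)
    p = np.clip((r.T @ X) / r.sum(0)[:, None], 1e-6, 1 - 1e-6)

print(np.sort(p.mean(1)))   # recovered profiles, near the simulated 0.1 / 0.8
```

Class membership is then read off from the responsibilities `r`, which is how respondents are sorted into groups such as 'non-victims' through 'victims' without imposing a dichotomous cutoff in advance.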
Procedia PDF Downloads 330