Search results for: privacy preservation scheme
483 Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve
Authors: Robson da Cunha Santos, Caio Cezar R. Bonifacio, Diego Mureb Quesada, Gerson Gomes Cunha
Abstract:
The paper addresses the safety of oil wells and environmental preservation, subjects that demand great attention and commitment from oil companies and from the people who work with this equipment, from the drilling of the well until its abandonment, in order to safeguard the environment and prevent possible damage. The project's main objective was to compare information found in books, articles, and publications with information gathered during technical visits to Petrobras operational bases. After the visits, information on current methods of use and management, previously unavailable, was made accessible to a general audience. The comparison revealed a large flow of incorrect and out-of-date information, encompassing not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly, and use of DHSVs, several points previously accepted as correct and customary were found to be uncertain or outdated. One important example concerns the depth of valve installation: the valve was formerly installed 30 meters below the seabed (mud line), whereas the installation depth should instead vary according to the ideal depth for avoiding the zone with the greatest tendency toward hydrate formation, as determined by temperature and pressure. Regarding valves with a nitrogen chamber, the literature links their use to water depths of 700 meters or more, but in Brazilian exploratory fields they are used from water depths of 600 meters. The valves used in Brazilian fields may be of the type inserted into the production column and self-equalizing, but screwed-in, equalizing valves predominate in the production column.
Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life, and do not restrict fluid flow. Based on this confrontation of published and theoretical information with the practices actually used in the field, the present project is both important and relevant. It will serve as a source of updated, consistent information connecting the academic environment with real exploratory situations, and will enrich future research and academic work with precise, easy-to-understand information.
Keywords: down hole safety valve, security devices, installation, oil-wells
Procedia PDF Downloads 270
482 Temperature Distribution for Asphalt Concrete-Concrete Composite Pavement
Authors: Tetsya Sok, Seong Jae Hong, Young Kyu Kim, Seung Woo Lee
Abstract:
The temperature distribution in asphalt concrete (AC)-concrete composite pavement is one of the main factors affecting pavement performance life. The temperature gradient in the concrete slab underneath the AC layer produces critical curling stresses and can cause de-bonding of the AC-concrete interface. These stresses, when enhanced by repetitive axle loadings, also contribute to fatigue damage and eventual crack development within the slab. Moreover, temperature changes within the concrete slab cause the slab to contract and expand, which significantly induces reflective cracking in the AC layer. In this paper, pavement temperature was predicted numerically using a one-dimensional finite difference method (FDM) in a fully explicit scheme. The predictive model provides a fundamental and clear understanding of the heat energy balance, including the incoming, outgoing, and dissipated thermal energies in the system. Using reliable meteorological data for daily air temperature, solar radiation, and wind speed, together with variable pavement surface properties, the predicted pavement temperature profile was validated against field-measured data. Additionally, the effects of AC thickness and daily air temperature on the temperature profile in the underlying concrete were investigated. The results show that the FDM-predicted temperature of the AC-concrete composite pavement agrees well with the field measurements, and that a thicker AC layer significantly insulates the temperature distribution in the underlying concrete slab.
Keywords: asphalt concrete, finite difference method (FDM), curling effect, heat transfer, solar radiation
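A fully explicit one-dimensional FDM scheme of the kind the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's model: the layer is treated as homogeneous, the surface and bottom temperatures are held fixed, and all material properties and boundary values are hypothetical placeholders.

```python
# Explicit 1-D finite difference sketch of heat conduction through a pavement
# layer. All material properties and boundary temperatures are illustrative,
# not values from the study.

def simulate_temperature(n_nodes=21, depth=0.5, alpha=1.0e-6,
                         t_surface=40.0, t_bottom=20.0, t_init=25.0,
                         duration=3600.0):
    dz = depth / (n_nodes - 1)
    dt = 0.4 * dz ** 2 / alpha      # respect the explicit stability limit r <= 0.5
    r = alpha * dt / dz ** 2
    temp = [t_init] * n_nodes
    temp[0], temp[-1] = t_surface, t_bottom   # fixed boundary temperatures
    for _ in range(int(duration / dt)):
        new = temp[:]
        for i in range(1, n_nodes - 1):
            # forward-time, central-space update of the heat equation
            new[i] = temp[i] + r * (temp[i + 1] - 2 * temp[i] + temp[i - 1])
        temp = new
    return temp

profile = simulate_temperature()
```

The stability constraint r = alpha*dt/dz² ≤ 0.5 is the key practical limit of a fully explicit scheme; the real model would additionally impose a surface energy balance (solar radiation, convection by wind) instead of a fixed surface temperature.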
Procedia PDF Downloads 269
481 Rail Corridors between Minimal Use of Train and Unsystematic Tightening of Population: A Methodological Essay
Authors: A. Benaiche
Abstract:
In the current situation, the automobile has become the main means of locomotion. It allows traveling long distances and thereby encourages urban sprawl. To counteract this trend, the train is often proposed as an alternative to the car. At the same time, favoring urban development around public transport nodes such as railway stations is one of the main issues in coordinating urban planning with transportation, and the keystone of implementing sustainable urban development. In this context, this paper studies the dynamics of spatial structuring around the railway. Specifically, it examines the demographic dynamics in the rail corridors of Nantes, Angers, and Le Mans (Western France), based on the catchment areas of their railway stations. The methodology therefore concentrates on the demographic weight and gains of these corridors, the index of urban intensity, and mobility behavior (workers' travel, students' travel, and modal practices). The perimeter used to define the rail corridors includes the communes of the urban area that have a railway station and the communes whose access time to a railway station is less than fifteen minutes by car (the time specified by the Regional Transport Scheme of Travelers). The main tools used are statistical data from the population census, the basis of detailed tables, and databases on mobility flows. The study reveals that the population is not tightening along the rail corridors and that train use is minimal despite the presence of a nearby railway station. These results lead to guidelines for making the train a real vector of mobility across the rail corridors.
Keywords: coordination between urban planning and transportation, rail corridors, railway stations, travels
Procedia PDF Downloads 243
480 Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback
Authors: Muhammad A. Alsubaie, Mubarak K. H. Alhajri, Tarek S. Altowaim
Abstract:
A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time delay model in a positive feedback loop acting on the system output. A direct use of the small gain theorem solves the periodic disturbance problem by 1) isolating the delay model, 2) finding the overall system representation around the delay model, and 3) designing a feedback controller that ensures overall system stability and tracking error convergence. This paper establishes uncertainty conditions for the repetitive controller designed in state feedback, in either past error feedforward or current error feedback form, using singular values. The uncertainty investigation is based on the overall system representation and its associated stability condition, which, depending on the scheme used, sets an upper or lower limit on the weighting parameter. This creates a region that must not be exceeded when selecting the weighting parameter, which in turn ensures performance improvement against system uncertainty. The repetitive control problem can be described in lifted form, which allows the singular value principle to be used in setting the range for the weighting parameter selection. The simulation results show tracking error convergence under dynamic system perturbation when the weighting parameter is chosen within the obtained range. They also show the advantage of using the weighting parameter compared with omitting it.
Keywords: model mismatch, repetitive control, singular values, state feedback
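The idea of bounding a weighting parameter through singular values can be illustrated numerically. The sketch below is schematic, not the paper's actual lifted-system derivation: it assumes a toy error-propagation matrix G and a small-gain-type condition sigma_max(I - w·G) < 1, then scans a grid of w values to find the admissible range.

```python
import numpy as np

# Illustrative small-gain scan (hypothetical G, not the paper's lifted system):
# the admissible weighting parameters w are those for which the largest
# singular value of (I - w*G) stays strictly below 1.

G = np.array([[0.5, 0.1],
              [0.0, 0.5]])          # toy lifted error-propagation matrix

def largest_singular_value(M):
    # numpy returns singular values in descending order
    return np.linalg.svd(M, compute_uv=False)[0]

grid = np.linspace(0.0, 3.0, 301)
admissible = [w for w in grid
              if largest_singular_value(np.eye(2) - w * G) < 1.0]
w_low, w_high = min(admissible), max(admissible)
```

For this toy G, every strictly positive w on the grid satisfies the condition, while w = 0 sits exactly on the stability boundary (sigma_max = 1) and is excluded; in the paper the corresponding bound comes from the scheme-dependent stability condition rather than a grid scan.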
Procedia PDF Downloads 155
479 Luteolin Exhibits Anti-Diabetic Effects by Increasing Oxidative Capacity and Regulating Anti-Oxidant Metabolism
Authors: Eun-Young Kwon, Myung-Sook Choi, Su-Jung Cho, Ji-Young Choi, So Young Kim, Youngji Han
Abstract:
Overweight and obesity have been linked to a low-grade chronic inflammatory response and an increased risk of developing metabolic syndrome, including insulin resistance, type 2 diabetes mellitus, and certain types of cancer. Luteolin is a dietary flavonoid with anti-inflammatory, anti-oxidant, anti-cancer, and anti-diabetic properties. However, little is known about the detailed mechanism underlying the effect of luteolin on inflammation-related obesity and its complications. The aim of the present study was to reveal the anti-diabetic effect of luteolin in diet-induced obese mice using a transcriptomics approach. Thirty-nine male C57BL/6J mice (4 weeks old) were randomly divided into 3 groups and fed a normal diet, a high-fat diet (HFD, 20% fat), or the HFD plus 0.005% (w/w) luteolin for 16 weeks. Luteolin improved insulin resistance, as measured by HOMA-IR and glucose tolerance, and preserved pancreatic β-cells, compared to the HFD group. Luteolin significantly decreased the plasma levels of leptin and ghrelin, which play a pivotal role in energy balance, and of the macrophage low-grade inflammation marker sCD163 (soluble CD antigen 163). Activities of hepatic anti-oxidant enzymes (catalase and glutathione peroxidase) were increased, while the levels of plasma transaminases (GOT and GPT) and oxidative damage markers (hepatic mitochondrial H2O2 and TBARS) were markedly decreased by luteolin supplementation. In addition, luteolin increased oxidative capacity and fatty acid utilization, as reflected in the enzyme activities of citrate synthase, cytochrome c oxidase, and β-hydroxyacyl-CoA dehydrogenase and in UCP3 gene expression, compared to the high-fat diet. Moreover, our microarray results in muscle revealed that gene expression associated with the TCA cycle, down-regulated by the HFD, was restored to normal levels by luteolin treatment.
Taken together, our results indicate that luteolin is a bioactive component that improves insulin resistance by increasing oxidative capacity, modulating anti-oxidant metabolism, and suppressing inflammatory signaling cascades in diet-induced obese mice. These results suggest possible therapeutic targets for the prevention and treatment of diet-induced obesity and its complications.
Keywords: anti-oxidant metabolism, diabetes, luteolin, oxidative capacity
Procedia PDF Downloads 337
478 Fast Robust Switching Control Scheme for PWR-Type Nuclear Power Plants
Authors: Piyush V. Surjagade, Jiamei Deng, Paul Doney, S. R. Shimjith, A. John Arul
Abstract:
In sophisticated and complex systems such as nuclear power plants, maintaining stability in the presence of uncertainties and disturbances while obtaining a fast dynamic response is among the most challenging problems. Thus, to ensure the satisfactory and safe operation of nuclear power plants, this work proposes a new fast, robust, optimal switching control strategy for pressurized water reactor-type nuclear power plants. The proposed strategy guarantees a substantial degree of robustness, a fast dynamic response over the entire operational envelope, and optimal performance during nominal operation of the plant. To improve robustness, obtain a fast dynamic response, and make the system optimal, a bank of controllers is designed: a baseline proportional-integral-derivative controller, an optimal linear quadratic Gaussian controller, and a robust adaptive L1 controller, each performing a distinct task in a specific situation. At any instant, the most suitable controller from the bank is selected by a switching logic unit, which designates the controller by monitoring the health of the plant and its transients. The proposed switching control strategy optimizes overall performance and increases operational safety and efficiency. Simulation studies performed under various uncertainties and disturbances demonstrate the applicability and effectiveness of the proposed strategy over several conventional control techniques.
Keywords: switching control, robust control, optimal control, nuclear power control
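The controller-bank-plus-supervisor structure described above can be sketched as a small dispatch function. This is purely schematic: the three controllers are one-line stubs, and the monitored quantities and thresholds are hypothetical, not the paper's switching rules.

```python
# Schematic controller-bank switching logic (hypothetical thresholds and stub
# controllers; not the paper's actual design). The supervisor picks a
# controller based on monitored tracking error and an uncertainty estimate.

def pid_control(error):          # baseline controller stub (proportional only)
    return 1.2 * error

def lqg_control(error):          # optimal controller stub for nominal operation
    return 0.8 * error

def l1_adaptive_control(error):  # robust controller stub for large uncertainty
    return 2.0 * error

def select_controller(error, uncertainty,
                      error_threshold=0.1, uncertainty_threshold=0.2):
    """Switching logic: robust control under high uncertainty, optimal
    control near the nominal operating point, baseline PID otherwise."""
    if uncertainty > uncertainty_threshold:
        return l1_adaptive_control
    if abs(error) < error_threshold:
        return lqg_control
    return pid_control

# near-nominal operation with low uncertainty -> the LQG controller is chosen
u = select_controller(error=0.05, uncertainty=0.01)(0.05)
```

In a real plant supervisor the selection signal would come from health-monitoring and transient-detection logic, and bumpless transfer between controllers would also have to be handled.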
Procedia PDF Downloads 134
477 Spatial Analysis of Flood Vulnerability in Highly Urbanized Area: A Case Study in Taipei City
Authors: Liang Weichien
Abstract:
Without adequate information and mitigation plans for natural disasters, the risk to urban populated areas will increase in the future as populations grow, especially in Taiwan. Taiwan is recognized as one of the world's high-risk areas, with an average of 5.7 floods per year, and its cities should seek coherence and consensus in how they plan for floods and climate change. This study therefore aims at understanding the vulnerability to flooding in Taipei City, Taiwan, by creating indicators and calculating the vulnerability of each study unit. The indicators were grouped into sensitivity and adaptive capacity, based on the Intergovernmental Panel on Climate Change definition of vulnerability, and were weighted using Principal Component Analysis. Previous research, however, assumed that the composition and influence of the indicators were the same in different areas, disregarding spatial correlation and possibly producing inaccurate explanations of local vulnerability. This study used Geographically Weighted Principal Component Analysis, adding a geographic weighting matrix to capture the different dominant flood impact characteristics in different areas. The Cross-Validation Method and the Akaike Information Criterion were used to decide the bandwidth, with a Gaussian pattern as the bandwidth weighting scheme. The outcome can be used to reduce damage potential by integrating the outputs into local mitigation plans and urban planning.
Keywords: flood vulnerability, geographically weighted principal components analysis, GWPCA, highly urbanized area, spatial correlation
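The PCA-based indicator weighting step can be illustrated on toy data. The sketch below uses ordinary (global) PCA with made-up indicator values; GWPCA extends this by recomputing the decomposition at each location under a spatial (e.g. Gaussian) kernel weight matrix, an extension omitted here.

```python
import numpy as np

# Toy indicator matrix: rows = study units, columns = vulnerability indicators.
# Values are illustrative, not data from the Taipei case study.
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 3.5, 1.0],
              [0.5, 1.0, 0.2],
              [3.0, 5.0, 1.5]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
C = np.cov(Z, rowvar=False)                # covariance of standardized indicators

# eigh returns eigenvalues in ascending order; the last column of eigvecs is
# the leading principal component
eigvals, eigvecs = np.linalg.eigh(C)
weights = np.abs(eigvecs[:, -1])           # loadings of the first component
weights /= weights.sum()                   # normalize to a weighting scheme

vulnerability_index = Z @ weights          # composite score per study unit
```

In the geographically weighted version, a separate weight vector (and hence a separate index) would be obtained at each study unit, with nearby units contributing more to the local covariance matrix.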
Procedia PDF Downloads 286
476 The Grand Egyptian Museum as a Cultural Interface
Authors: Mahmoud Moawad Mohamed Osman
Abstract:
The Egyptian civilization was, and still is, an inspiration for many human civilizations and modern sciences, which is why the passion for ancient Egypt endures. Given the breadth and abundance of the output of ancient Egyptian civilization, many museums have been established to display its splendor, among them the Grand Egyptian Museum (Egypt's gift to the whole world). The idea of establishing the Grand Egyptian Museum arose in the 1990s, and in 2002 the foundation stone was laid for a museum to be built at a privileged site overlooking the eternal pyramids of Giza. The Egyptian state announced, under the auspices of the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Union of Architects, an international architectural competition for the best museum design. The winning design, submitted by Heneghan Peng Architects of Ireland, was based on the rays of the sun extending from the tops of the three pyramids and meeting to form a conical mass: the Grand Egyptian Museum. Construction began in May 2005, when the site was paved and prepared, and in 2006 the largest antiquities restoration center in the Middle East was established, dedicated to the restoration, preservation, maintenance, and rehabilitation of the antiquities scheduled for display in the museum halls; it opened in 2010. The museum building, with an area of more than 300,000 square meters, was completed during 2021 and includes a number of exhibition halls, each larger than many existing museums in Egypt and the world. The museum is considered one of the most important and greatest achievements of modern Egypt.
It was created to be an integrated global civilizational, cultural, and entertainment edifice and the first destination for everyone interested in ancient Egyptian heritage: the largest museum in the world telling the story of ancient Egyptian civilization. It contains a large number of distinctive and unique artifacts, including the treasures of the golden king Tutankhamun, displayed in their entirety for the first time since the discovery of his tomb in November 1922, in addition to the collection of Queen Hetepheres, mother of King Khufu, builder of the Great Pyramid at Giza, as well as the Museum of King Khufu's Boats and various archaeological collections from the pre-dynastic era through the Greek and Roman eras.
Keywords: grand egyptian museum, egyptian civilization, education, museology
Procedia PDF Downloads 45
475 Chaotic Electronic System with Lambda Diode
Authors: George Mahalu
Abstract:
The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers or gas- or semiconductor-based devices. Among semiconductor devices, tunnel (Esaki) diodes are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here models a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaotic behavior. The deterministic chaotic oscillator is of the autonomous type, which places it in the class of Chua-type oscillators, the only significant difference being the presence of the nonlinear device mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation, and the conclusions drawn, are useful for further research into implementing such electronic solutions in theoretical and practical applications: modern small-signal amplification structures, systems for encoding and decoding messages across modern communication channels, and new structures both for modern neural networks and for the physical implementation of requirements arising from current research toward practically usable solutions in quantum computing.
Keywords: chua, diode, memristor, chaos
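The two signatures of chaos mentioned above (a strange attractor and hypersensitivity to initial conditions) are easy to reproduce numerically. The sketch below integrates the classic dimensionless Chua equations with a hand-rolled RK4 step; note it uses the standard piecewise-linear Chua diode characteristic and the textbook double-scroll parameter set, not the JFET lambda-diode curve or circuit values from the MULTISIM schematic.

```python
# Chua-type autonomous oscillator, integrated with RK4. Parameters are the
# standard double-scroll values, not values from the paper's circuit.
ALPHA, BETA = 15.6, 28.0
M0, M1 = -1.143, -0.714

def f_nl(x):
    # piecewise-linear nonlinear resistor (Chua diode) characteristic
    return M1 * x + 0.5 * (M0 - M1) * (abs(x + 1.0) - abs(x - 1.0))

def deriv(s):
    x, y, z = s
    return (ALPHA * (y - x - f_nl(x)), x - y + z, -BETA * y)

def rk4_step(s, dt):
    def shift(a, b, c):  # componentwise a + c*b
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = deriv(s)
    k2 = deriv(shift(s, k1, dt / 2))
    k3 = deriv(shift(s, k2, dt / 2))
    k4 = deriv(shift(s, k3, dt))
    return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def trajectory(s0, steps=5000, dt=0.01):
    out = [s0]
    for _ in range(steps):
        out.append(rk4_step(out[-1], dt))
    return out

# hypersensitivity: two trajectories starting 1e-6 apart diverge visibly
t1 = trajectory((0.1, 0.0, 0.0))
t2 = trajectory((0.1 + 1e-6, 0.0, 0.0))
max_sep = max(abs(a[0] - b[0]) for a, b in zip(t1, t2))
```

Plotting x against z for t1 would show the double-scroll strange attractor; the separation max_sep grows by orders of magnitude despite the trajectories remaining bounded.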
Procedia PDF Downloads 88
474 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definition of the syllable and of music, and around the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies the interest: the debate raises questions that are at the heart of theories of language. This is an inventive, original, and innovative research thesis. It contributes to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, automatic translation, and artificial intelligence. When this theory is applied to any text of a folk song in a tone language, one can piece together not only the exact melody, rhythm, and harmonies of that song, as if known in advance, but also the exact speech of the language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With experimentation confirming the theory, the author designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, music reading and writing software is used to collect data extracted from the author's mother tongue, already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: the user types a structured song text (chorus-verse) on the computer and requests a melody in blues, jazz, world music, variety, etc. The software runs, offers a choice of harmonies, and the user then selects a melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
473 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction
Authors: Radul Shishkov, Orlin Davchev
Abstract:
The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction
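A core primitive behind identifying walls and floors in point clouds is robust plane detection. The sketch below runs a minimal RANSAC plane fit on a synthetic cloud; real point-cloud-to-BIM pipelines (including the one described above) use far more sophisticated segmentation and feature extraction, and the thresholds here are illustrative only.

```python
import random

# Minimal RANSAC plane detection on a synthetic cloud: noisy points on the
# floor plane z = 0 plus scattered outliers above it. Tolerances and counts
# are illustrative placeholders.
random.seed(42)
cloud = [(random.uniform(0, 5), random.uniform(0, 5),
          random.gauss(0.0, 0.01)) for _ in range(200)]
cloud += [(random.uniform(0, 5), random.uniform(0, 5),
           random.uniform(0.5, 3.0)) for _ in range(40)]

def plane_from_points(p, q, r):
    # plane through three points: unit normal n and offset d with n.x = d
    ux, uy, uz = (q[i] - p[i] for i in range(3))
    vx, vy, vz = (r[i] - p[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:          # degenerate (collinear) sample
        return None
    n = tuple(c / norm for c in n)
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05):
    best_model, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*random.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [pt for pt in points
                   if abs(sum(n[i] * pt[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

(normal, d), inliers = ransac_plane(cloud)
```

In a BIM pipeline the detected plane would then be classified (floor, wall, ceiling) from its orientation and extent and converted into a parametric element.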
Procedia PDF Downloads 63
472 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely adopted. It supports two main tasks: displaying results by coloring items according to item class or feature value, and, in forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in a high-dimensional space cannot be represented correctly in a low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation in which a cluster's area is proportional to its size in number of items and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic setting, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built from a subset of the data. While highly scalable, such a model can map points to the same exact position, making them indistinguishable, and is unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the other relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, each time with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, supporting the monitoring of high-dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
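The flavor of "placing new data coherently with a support embedding" can be conveyed with a much simpler stand-in than the paper's dual-cost optimization: anchor each new item at the distance-weighted average of its nearest neighbors' positions in the existing embedding. This toy sketch (2-D "high-dimensional" data, hypothetical support points) only illustrates the coherence idea, not the actual index t-SNE method.

```python
import math

# Toy support set: high-dimensional points and their coordinates in an
# already-computed embedding (two well-separated clusters). All values are
# illustrative placeholders.
support_hd = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
support_emb = [(-1.0, 0.0), (-1.1, 0.1), (1.0, 0.0), (1.1, -0.1)]

def embed_new(point, k=2, eps=1e-9):
    """Place a new item at the inverse-distance-weighted average of the
    embedding positions of its k nearest support points."""
    nearest = sorted((math.dist(point, s), i)
                     for i, s in enumerate(support_hd))[:k]
    weights = [1.0 / (d + eps) for d, _ in nearest]
    total = sum(weights)
    x = sum(w * support_emb[i][0] for w, (_, i) in zip(weights, nearest)) / total
    y = sum(w * support_emb[i][1] for w, (_, i) in zip(weights, nearest)) / total
    return (x, y)

new_pos = embed_new((0.05, 0.0))   # lands inside the left cluster's region
```

Unlike this nearest-neighbor placement, the paper's optimization lets new structure (outliers, emerging clusters) deform the embedding while still matching the support, which is what makes successive embeddings comparable over time.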
Procedia PDF Downloads 144
471 Computational Fluid Dynamics Modeling of Liquefaction of Wood and Its Model Components Using a Modified Multistage Shrinking-Core Model
Authors: K. G. R. M. Jayathilake, S. Rudra
Abstract:
Wood degradation in hot compressed water is modeled with a computational fluid dynamics (CFD) code using cellulose, xylan, and lignin as model compounds. The model compounds are reacted under catalyst-free conditions in a temperature range from 250 to 370 °C, using a simplified reaction scheme in which water-soluble products, methanol-soluble products, char-like compounds, and gas are generated through intermediates of each model compound. A modified multistage shrinking-core model is developed to simulate particle degradation, in which each model compound is hydrolyzed in separate stages: cellulose is decomposed to glucose/oligomers before producing degradation products; xylan is decomposed through xylose and then to degradation products; and lignin is decomposed into soluble products before producing guaiacol, total organic carbon (TOC), and then char and gas. Hydrolysis of each model compound is the main reaction of the process. Diffusion of water monomers to the particle surface, which initiates hydrolysis, and dissolution of the products in water are given particular attention in the modeling. In the developed model, the temperature dependence follows the Arrhenius relationship, with kinetic parameters taken from the literature; the limited kinetic data available for the initial fast reactions, however, constrain the development of more accurate CFD models. The liquefaction results of the CFD model are analyzed and validated using experimental data available in the literature, with which they show reasonable agreement.
Keywords: computational fluid dynamics, liquefaction, shrinking-core, wood
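Two building blocks of such a model, the Arrhenius temperature dependence and a shrinking-core conversion profile, can be sketched compactly. The kinetic parameters below are illustrative placeholders, not the literature values used in the CFD model, and the conversion law shown is the simple reaction-controlled shrinking-core case.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R_GAS * T))

def shrinking_core_conversion(t, k_s, r0):
    """Reaction-controlled shrinking core: the core radius shrinks at a
    constant rate, r(t) = r0 - k_s*t, and conversion X = 1 - (r/r0)**3."""
    r = max(r0 - k_s * t, 0.0)
    return 1.0 - (r / r0) ** 3

# illustrative parameters spanning the abstract's temperature range
k_550 = arrhenius(A=1.0e10, Ea=1.2e5, T=550.0)   # ~277 C
k_620 = arrhenius(A=1.0e10, Ea=1.2e5, T=620.0)   # ~347 C
X_half = shrinking_core_conversion(t=0.5, k_s=1.0, r0=1.0)
```

The actual multistage model couples several such rate expressions (one per model compound and stage) with diffusion of water to the particle surface and dissolution of products, which is what the CFD code resolves.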
Procedia PDF Downloads 125
470 Social Security Reform and Management: The Case of Three Member Territories of the Organisation of Eastern Caribbean States
Authors: Cleopatra Gittens
Abstract:
It has been recognized that some social security and national insurance systems in the Eastern Caribbean have ageing populations and face economic and other crises, presenting the prospect of being unable to pay pension benefits in fifteen to twenty years. This has implications for the fiscal and economic positions of the countries themselves, and the organizations need to address the issue urgently. The study adds to the body of knowledge on social security systems and social security reforms in small island developing states (SIDS) and makes recommendations for the types of reforms that social security systems in other SIDS can implement given their special circumstances. Secondary research is used to gather financial and other related information on three social security schemes in the Eastern Caribbean: actuarial and financial reports and other documents of the social security systems are analysed to obtain financial and static data on each scheme. The findings show that the three schemes studied are experiencing steady increases in benefit expenditure relative to contributions and rising pensioner-to-insured ratios, and will deplete their reserves between 2038 and 2050. Two of the schemes have increased their retirement age, while the third has not embarked on any reforms; one scheme has changed its contribution percentages. Because of their small size, small populations, and other unique circumstances, the social security schemes in these territories are unlikely to be able to adopt all of the reform initiatives that the developed world embarked on when faced with similar problems. They will instead need to make incremental changes aligned with the timeframes recommended by the actuarial studies.
Keywords: benefits, pension, small island developing states, social security reform
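The reserve-depletion dynamic described above (benefit expenditure outgrowing contributions until the fund runs out) can be sketched as a toy projection. Every figure below is hypothetical and bears no relation to the actuarial reports analysed in the study; the sketch only shows the mechanics of finding a depletion year.

```python
# Toy pension reserve projection (all figures hypothetical): contributions grow
# slowly while benefits grow faster as the pensioner-to-insured ratio rises;
# we find the first year in which the fund's reserves are exhausted.

def depletion_year(reserve, contributions, benefits,
                   contribution_growth=0.02, benefit_growth=0.05,
                   interest=0.03, start_year=2023, horizon=60):
    for year in range(start_year, start_year + horizon):
        # reserves earn interest, then receive contributions and pay benefits
        reserve = reserve * (1 + interest) + contributions - benefits
        if reserve <= 0:
            return year
        contributions *= 1 + contribution_growth
        benefits *= 1 + benefit_growth
    return None   # reserves survive the projection horizon

year = depletion_year(reserve=500.0, contributions=100.0, benefits=90.0)
```

Real actuarial projections layer demographic cohort models, wage growth, and investment-return scenarios on top of this basic cash-flow recursion, which is why reform timing is tied to the actuarial studies' timeframes.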
Procedia PDF Downloads 91
469 Social Factors and Suicide Risk in Modern Russia
Authors: Maria Cherepanova, Svetlana Maximova
Abstract:
Background and Aims: Suicide is among the ten most common causes of death in the working-age population worldwide. According to WHO forecasts, by 2025 suicide will be the third leading cause of death, after cardiovascular diseases and cancer. In 2019, the global suicide rate was 10.5 per 100,000 people. In Russia, the average figure was 11.6; however, in some depressed regions of Russia, such as Buryatia and Altai, it reaches 35.3. The aim of this study was to develop models, based on regional factors of social well-being deprivation, of what provokes the suicidal risk of various age groups of the Russian population. We also investigated suicide risk prevention in modern Russia, analyzed its efficacy, and developed recommendations for its improvement. Methods: In this study, we analyzed data from sociological surveys in six regions of Russia. In total, we interviewed 4,200 people; the respondents were aged from 16 to 70 years. The results were subjected to factor and regression analyses. Results: The results of our study indicate that young people are especially socially vulnerable, which results in ineffective patterns of self-preservation behavior and increases the risk of suicide. This is due to the lack of anti-suicidal barrier formation; the low importance of vital values; the difficulty or impossibility of meeting basic needs; low satisfaction with family and professional life; and a decrease in unconditional personal significance. The suicidal risk of the middle-aged population is due to a decrease in social well-being in the main aspects of life, which determines low satisfaction, decreased ontological security, and the prevalence of auto-aggressive deviations. The suicidal risk of the elderly population is due to increased factors of social exclusion, which narrow the social space and limit the richness of life.
Conclusions: The existing system for lowering suicide risk in modern Russia is oriented predominantly towards medical treatment, which provides intervention only to people who have already attempted suicide, significantly limiting its preventive effectiveness and the social control of this deviation. The national strategy for suicide risk reduction in modern Russian society should combine medical and social activities designed to minimize the situations that can lead to suicide. The strategy for eliminating suicide risk should include a systematic and significant improvement in the social well-being of the population and aim at overcoming basic social disadvantages such as poverty and unemployment, as well as implementing innovative mental health improvements and developing life-saving behaviors that will help to counter suicide in Russia.
Keywords: social factors, suicide, prevention, Russia
Procedia PDF Downloads 167
468 Opto-Electronic Properties and Structural Phase Transition of Filled-Tetrahedral NaZnAs
Authors: R. Khenata, T. Djied, R. Ahmed, H. Baltache, S. Bin-Omran, A. Bouhemadou
Abstract:
We predict the structural, phase transition and opto-electronic properties of the filled-tetrahedral (Nowotny-Juza) NaZnAs compound in this study. Calculations are carried out by employing the full-potential (FP) linearized augmented plane wave (LAPW) plus local orbitals (lo) scheme developed within the framework of density functional theory (DFT). The exchange-correlation energy/potential (EXC/VXC) functional is treated using the Perdew-Burke-Ernzerhof (PBE) parameterization of the generalized gradient approximation (GGA). In addition, the Tran-Blaha (TB) modified Becke-Johnson (mBJ) potential is incorporated to obtain better precision for the optoelectronic properties. Geometry optimization is carried out to obtain reliable results for the total energy as well as the other structural parameters of each phase of the NaZnAs compound. The order of the structural transitions as a function of pressure is found to be: Cu2Sb type → β → α phase. Our calculated electronic band structures for all structural phases, at the level of both PBE-GGA and the mBJ potential, indicate that NaZnAs is a direct (Γ–Γ) band gap semiconductor material. However, compared to PBE-GGA, the mBJ potential reproduces higher values of the fundamental band gap. Regarding the optical properties, calculations of the real and imaginary parts of the dielectric function, refractive index, reflectivity coefficient, absorption coefficient and energy-loss function spectra are performed over photon energies ranging from 0.0 to 30.0 eV, with the incident radiation polarized parallel to both the [100] and [001] crystalline directions.
Keywords: NaZnAs, FP-LAPW+lo, structural properties, phase transition, electronic band-structure, optical properties
Procedia PDF Downloads 435
467 Data-Driven Strategies for Enhancing Food Security in Vulnerable Regions: A Multi-Dimensional Analysis of Crop Yield Predictions, Supply Chain Optimization, and Food Distribution Networks
Authors: Sulemana Ibrahim
Abstract:
Food security remains a paramount global challenge, with vulnerable regions grappling with issues of hunger and malnutrition. This study embarks on a comprehensive exploration of data-driven strategies aimed at enhancing food security in such regions. Our research employs a multifaceted approach, integrating data analytics to predict crop yields, optimizing supply chains, and enhancing food distribution networks. The study unfolds as a multi-dimensional analysis, commencing with the development of robust machine learning models harnessing remote sensing data, historical crop yield records, and meteorological data to foresee crop yields. These predictive models, underpinned by convolutional and recurrent neural networks, furnish critical insights into anticipated harvests, empowering proactive measures to confront food insecurity. Subsequently, the research scrutinizes supply chain optimization to address food security challenges, capitalizing on linear programming and network optimization techniques. These strategies are intended to mitigate loss and wastage while streamlining the distribution of agricultural produce from field to fork. In conjunction, the study investigates food distribution networks with a particular focus on network efficiency, accessibility, and equitable food resource allocation. Network analysis tools, complemented by data-driven simulation methodologies, unveil opportunities for augmenting the efficacy of these critical lifelines. This study also considers the ethical implications and privacy concerns associated with the extensive use of data in the realm of food security. The proposed methodology outlines guidelines for responsible data acquisition, storage, and usage. The ultimate aspiration of this research is to forge a nexus between data science and food security policy, bestowing actionable insights to mitigate the ordeal of food insecurity.
The holistic approach, converging data-driven crop yield forecasts, optimized supply chains, and improved distribution networks, aspires to revitalize food security in the most vulnerable regions, elevating the quality of life for millions worldwide.
Keywords: data-driven strategies, crop yield prediction, supply chain optimization, food distribution networks
Procedia PDF Downloads 62
466 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction
Authors: Akaki Qatamidze
Abstract:
Implementation of the project will be a step forward for construction reliability and for the improvement and development of the construction industry in Georgia. Completion of the project is expected to produce a complete body of knowledge for assessing the technical condition of reinforced concrete and stone structures. The method is based on a detailed examination of the structure in order to establish its injuries and to rule out changes to the structural scheme arising from new functional requirements and architectural preservation constraints. The research project on reinforced concrete and stone structures follows a systematic analysis approach, which is important for optimizing the research process and developing new knowledge in neighboring areas. In addition, it addresses the problem of rational agreement between physical and mathematical models: in-situ physical data form the main pillar, while mathematical calculation models and physical experiments are used only to specify and verify the calculation model. To study the causes of defects and failures in reinforced concrete and stone construction more effectively, with maximum automation and reduced expenditure of resources, a methodological concept based on system analysis is recommended; as one of the major particularities of modern science and technology, it allows the same work stages and procedures to be identified for entire families of structures, which makes it possible to exclude subjectivity and steers the problem in the optimal direction. The paper discusses the methodology of the project, which represents a major step forward for the construction trades and offers practical assistance to engineers, supervisors, and technical experts in resolving construction problems.
Keywords: building, reinforced concrete, expertise, stone structures
Procedia PDF Downloads 336
465 Ethnic-Racial Breakdown in Psychological Research among Latinx Populations in the U.S.
Authors: Madeline Phillips, Luis Mendez
Abstract:
The 21st century has seen an increase in the amount and variety of psychological research on Latinx, the largest minority group in the U.S., with great variability from the individual’s cultural origin (e.g., ethnicity) to region (e.g., nationality). We were interested in exploring how scientists recruit, conduct and report research on Latinx samples. Ethnicity and race are important components of individuals and should be addressed to capture a broader and deeper understanding of psychological research findings. In order to explore Latinx/Hispanic work, the Journal of Latinx Psychology (JLP) and Hispanic Journal of Behavioral Sciences (HJBS) were analyzed for 1) measures of ethnicity and race in empirical studies 2) nationalities represented 3) how researchers reported ethnic-racial demographics. The analysis included publications from 2013-2018 and revealed two common themes of reporting ethnicity and race: overrepresentation/underrepresentation and overgeneralization. There is currently not a systematic way of reporting ethnicity and race among Latinx/Hispanic research, creating a vague sense of what and how ethnicity/race plays a role in the lives of participants. Second, studies used the Hispanic/Latinx terms interchangeably and are not consistent across publications. For the purpose of this project, we were only interested in publications with Latinx samples in the U.S. Therefore, studies outside of the U.S. and non-empirical studies were excluded. JLP went from N = 118 articles to N = 94 and HJBS went from N = 174 to N = 154. For this project, we developed a coding rubric for ethnicity/race that reflected the different ways researchers reported ethnicity and race and was compatible with the U.S. census. We coded which ethnicity/race was identified as the largest ethnic group in each sample. We used the ethnic-racial breakdown numbers or percentages if provided. 
There were also studies that simply did not report any ethnic composition beyond Hispanic or Latinx. We found that in 80% of the samples, Mexicans are overrepresented compared to the population statistics of Latinx in the U.S. We examined all the ethnic-racial breakdowns, which demonstrated the overrepresentation of Mexican samples and the underrepresentation or absence of certain ethnicities (e.g., Chilean, Guatemalan). Our results showed an overgeneralization in studies that cluster their participants as Latinx/Hispanic: 23 for JLP and 63 for HJBS. The authors discuss the importance of transparency from researchers in reporting the context of the sample, including country, state, neighborhood, and demographic variables that are relevant to the goals of the project, except when an issue of privacy and/or confidentiality is involved. In addition, the authors discuss the importance of recognizing the variability within the Latinx population and how it is reflected in the scientific discourse.
Keywords: Latinx, Hispanic, race and ethnicity, diversity
Procedia PDF Downloads 114
464 Study on the Effects of Indigenous Biological Face Treatment
Authors: Saron Adisu Gezahegn
Abstract:
Commercial cosmetics have been affecting human health through their contents and dosage composition. Chemical-based cosmetics expose users to unnecessary health problems and financial costs. The interaction of some cosmetics with the environment has negative health impacts such as burning, cracking, and discoloration. Users seek a temporary benefit without evaluating the side effects of cosmetics whose chemical compositions cause irritation, burning, allergies, cracking, and changes to the nature of the face. Many cosmetics contain heavy metals such as lead, zinc, cadmium, and silicon, among other ingredients. Users may ultimately be exposed to hard-to-treat diseases such as cancer. The objective of this research is to study the effects of indigenous biological face treatment without any chemical additives. In ancient times this practice was widespread around the world, but it was gradually replaced by chemical-based cosmetics for maintaining the beauty of hair, skin, and faces. The side effects of the indigenous treatment on the face were minimal, and its environmental side effects were almost nil. Yet this practice was abandoned, and indigenous substances were replaced with chemical substances containing additives such as lead and cadmium used for preservation, pigments, dye, and shine. Various studies indicate that such cosmetics have dangerous side effects that expose users to health problems and considerable financial loss. This study focuses on a local indigenous plant called Kulkual, which is available everywhere in the study area and can be harvested sustainably for use as an indigenous face treatment material. Twenty-five men and twenty-five women were randomly selected as the sample population. The plant was harvested from the garden in the productive season and exposed to the sun to dry for a week.
Then the peel was removed from the fruit, and the peels were soaked in a bath of water for three days. The flesh of the peel was then separated from the fruit and was ready for use as a face treatment. The fleshy peel was smeared on each participant daily for about a week. The results indicated that the treatment produced a positive response with minimal cost and minimal environmental side effects. The resulting shine, smoothness, and color were better than those from chemical-based cosmetics. Finally, the study recommends that users prefer this biological method of treatment, with its minimal cost and minimal health and environmental side effects.
Keywords: cosmetics, indigenous, heavy metals, toxic
Procedia PDF Downloads 108
463 Numerical Investigation of Pressure Drop in Core Annular Horizontal Pipe Flow
Authors: John Abish, Bibin John
Abstract:
Liquid-liquid flow in a horizontal pipe is investigated in order to reveal the flow patterns arising from the coexisting flow of oil and water. The main focus of the study is to assess the feasibility of reducing the pumping power requirements of petroleum transportation lines by maintaining an annular flow of water around the thick oil core, which makes oil transportation cheaper and easier. The present study uses computational fluid dynamics techniques to model oil-water flows with liquids of similar density and varying viscosity. The simulation of the flow is conducted using the commercial package Ansys Fluent, with flow domain modeling and grid generation accomplished through ICEM CFD. The horizontal pipe is modeled with two separate inlets and meshed with an O-grid mesh. The standard k-ε turbulence scheme, along with the volume of fluid (VOF) multiphase modeling method, is used to simulate the oil-water flow. Transient flow simulations carried out for a total period of 30 s showed a significant reduction in pressure drop when employing the core annular flow concept. This study also reveals the effect of the viscosity ratio, the mass flow rates of the individual fluids, and the ratio of superficial velocities on the pressure drop across the pipe length. Contours of velocity and volume fraction are employed, along with pressure predictions, to assess the effectiveness of the proposed concept both quantitatively and qualitatively. The outcome of the present study is highly relevant to the petrochemical industries.
Keywords: computational fluid dynamics, core-annular flows, frictional flow resistance, oil transportation, pressure drop
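The pumping-power argument behind core annular flow can be illustrated with the Hagen-Poiseuille relation for fully developed laminar pipe flow, in which pressure drop scales linearly with viscosity at a fixed flow rate. The sketch below is a back-of-the-envelope illustration of that scaling, not the paper's CFD model; all fluid properties and pipe dimensions are assumed values for a viscous crude and water.

```python
import math

def pressure_drop_laminar(mu, length, flow_rate, radius):
    """Hagen-Poiseuille pressure drop (Pa) for fully developed laminar
    flow: dP = 8 * mu * L * Q / (pi * R^4), with mu in Pa*s, length and
    radius in m, and flow_rate in m^3/s."""
    return 8.0 * mu * length * flow_rate / (math.pi * radius**4)

# Assumed illustrative properties: heavy oil vs. lubricating water.
mu_oil, mu_water = 0.5, 0.001        # dynamic viscosity, Pa*s
L, Q, R = 1000.0, 0.05, 0.15         # pipe length m, flow rate m^3/s, radius m

dp_oil = pressure_drop_laminar(mu_oil, L, Q, R)
dp_water = pressure_drop_laminar(mu_water, L, Q, R)

# At equal flow rate the pressure drop ratio equals the viscosity ratio,
# so a water-lubricated annulus can cut frictional loss by orders of
# magnitude relative to pumping the oil directly against the pipe wall.
print(dp_oil / dp_water)  # ≈ 500, the viscosity ratio mu_oil / mu_water
```

The real saving in core annular flow is smaller than this idealized ratio, since the oil core is not fully lubricated and the flow may be turbulent, which is exactly what the CFD study quantifies.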
Procedia PDF Downloads 405
462 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping
Authors: Chao Yi, Cunyue Lu, Lingwei Quan
Abstract:
Piezoelectric linear motors have the advantages of great electromagnetic compatibility, high positioning accuracy, compact structure and the absence of a deceleration mechanism, which make them promising for application in micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use in rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble as well as to control precisely. In addition, the properties of the piezoelectric stack are reviewed, and in order to obtain an elliptical motion trajectory of the driving head, a driving scheme based on a longitudinal-shear composite stack is innovatively proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor can reach speeds of up to 25.5 mm/s under excitation by a signal voltage of 120 V at a frequency of 390 Hz. The results show that the proposed piezoelectric stacked linear motor achieves great performance: it runs smoothly over a large speed range, making it suitable for precision control in medical imaging, aerospace, precision machinery and many other fields.
Keywords: piezoelectric stack, linear motor, rigid clamping, elliptical trajectory
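The elliptical driving-foot trajectory described above arises whenever two orthogonal vibration components (here, longitudinal and shear) are driven at the same frequency with a phase offset. A minimal kinematic sketch of that idea follows; the amplitudes and the 90° phase shift are assumed illustrative values, not the authors' actual stack parameters.

```python
import math

def foot_trajectory(a_long, a_shear, freq, phase, t):
    """Driving-foot position at time t, modeling the longitudinal (x)
    and shear (y) vibration components as phase-shifted sinusoids."""
    omega = 2.0 * math.pi * freq
    x = a_long * math.sin(omega * t)            # longitudinal component
    y = a_shear * math.sin(omega * t + phase)   # shear component
    return x, y

# Assumed values: 2 um longitudinal, 1 um shear amplitude, 390 Hz drive,
# 90-degree phase shift between the two stack excitations.
A, B, F, PHI = 2e-6, 1e-6, 390.0, math.pi / 2
points = [foot_trajectory(A, B, F, PHI, k / 1e5) for k in range(100)]

# With a 90-degree phase shift the two components trace an ellipse:
# (x/A)^2 + (y/B)^2 = 1 at every instant.
for x, y in points:
    assert abs((x / A) ** 2 + (y / B) ** 2 - 1.0) < 1e-9
```

Sweeping the phase away from 90° flattens the ellipse toward a line, which is one way such motors trade normal contact force against tangential driving stroke.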
Procedia PDF Downloads 153
461 Modeling and Simulation of Secondary Breakup and Its Influence on Fuel Spray in High Torque Low Speed Diesel Engine
Authors: Mohsin Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
High torque low-speed diesel engines have a wide range of industrial and commercial applications. In the literature, much work has been done on high-speed diesel engines, while research on high-torque low-speed engines is rare. Fuel injection plays a key role in engine efficiency and in the reduction of exhaust emissions, and fuel breakup plays a critical role in air-fuel mixing and spray combustion. The current study explains numerically an important phenomenon in spray combustion: the deformation and breakup of liquid drops in a compression-ignition internal combustion engine. The secondary breakup and its influence on the spray and on the characteristics of the compressed in-cylinder gas have been calculated using simulation software under high-torque low-speed diesel-like conditions. The secondary spray breakup is modeled with KH-RT instabilities: the continuous field is described by a turbulence model, and the dynamics of the dispersed droplets are modeled by a Lagrangian tracking scheme. The results obtained using the KH-RT model are compared against other default methods in OpenFOAM and against published experimental data, and are implemented in CFD (computational fluid dynamics). These numerical simulations, carried out in OpenFOAM and MATLAB, are analyzed over the complete 720-degree four-stroke engine cycle at a low engine speed to achieve favorable agreement. The results thus obtained will be analyzed for better evaporation in the near-nozzle region. The proposed analyses will further help achieve better engine efficiency, lower emissions and improved fuel economy.
Keywords: diesel fuel, KH-RT, Lagrangian, OpenFOAM, secondary breakup
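In secondary-breakup modeling of the kind described above, the aerodynamic Weber number is the standard dimensionless group deciding whether a droplet merely deforms or breaks up. The sketch below computes it and applies approximate regime thresholds; both the threshold values and the diesel-like conditions are textbook-style assumptions, not the parameters or model constants used in this study.

```python
def weber_number(rho_gas, u_rel, diameter, sigma):
    """Aerodynamic Weber number: ratio of the disruptive aerodynamic
    force to the restoring surface tension acting on a droplet."""
    return rho_gas * u_rel**2 * diameter / sigma

def breakup_regime(we):
    """Rough droplet breakup classification by Weber number
    (threshold values are approximate literature figures)."""
    if we < 12:
        return "vibrational / no breakup"
    elif we < 50:
        return "bag breakup"
    elif we < 100:
        return "multimode breakup"
    else:
        return "shear / catastrophic breakup"

# Assumed diesel-like conditions: compressed gas density 25 kg/m^3,
# 100 m/s relative velocity, 20 um droplet, sigma = 0.025 N/m.
we = weber_number(25.0, 100.0, 20e-6, 0.025)
print(we, breakup_regime(we))  # We ≈ 200 -> shear / catastrophic regime
```

Models such as KH-RT go a step further: rather than a single threshold, they grow Kelvin-Helmholtz and Rayleigh-Taylor instability waves on the droplet surface and strip child droplets at rates tied to the wave growth, but the Weber number remains the governing parameter.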
Procedia PDF Downloads 265
460 Tasting Terroir: A Gourmet Adventure in Food and Wine Tourism
Authors: Sunita Boro, Saurabh Kumar Dixit
Abstract:
Terroir, an intricate fusion of geography, climate, soil, and human expertise, has long been acknowledged as a defining factor in the character of wines and foods. This research embarks on an exploration of terroir's profound influence on gastronomic tourism, shedding light on the intricate interplay between the physical environment and culinary artistry. Delving into the intricate science of terroir, we scrutinize its role in shaping the sensory profiles of wines and foods, emphasizing the profound impact of specific regions on flavor, aroma, and texture. We deploy a multifaceted methodology, amalgamating sensory analysis, chemical profiling, geographical information systems, and qualitative interviews to unearth the nuances of terroir expression. Through an exhaustive review of the literature, we elucidate the historical roots of terroir, unveil the intricate cultural dimensions shaping it, and provide a comprehensive examination of prior studies in the field. Our findings underscore the pivotal role of terroir in promoting regional identities, enhancing the economic viability of locales, and attracting gastronomic tourists. The paper also dissects the marketing strategies employed to promote terroir-driven food and wine experiences. We elucidate the utilization of storytelling, branding, and collaborative endeavors in fostering a robust terroir-based tourism industry. This elucidates both the potential for innovation and the challenges posed by oversimplification or misrepresentation of terroir. Our research spotlights the intersection of terroir and sustainability, emphasizing the significance of environmentally conscious practices in terroir-driven productions. We discern the harmonious relationship between sustainable agriculture, terroir preservation, and responsible tourism, encapsulating the essence of ecological integrity in gastronomic tourism.
Incorporating compelling case studies of regions and businesses excelling in the terroir-based tourism realm, we offer in-depth insights into successful models and strategies, with an emphasis on their replicability and adaptability to various contexts. Ultimately, this paper not only contributes to the scholarly understanding of terroir's role in the world of food and wine tourism but also provides actionable recommendations for stakeholders to leverage terroir's allure, preserve its authenticity, and foster sustainable and enriching culinary tourism experiences.
Keywords: terroir, food tourism, wine tourism, sustainability
Procedia PDF Downloads 60
459 Cultural and Natural Heritage Conservation by GIS Tourism Inventory System Project
Authors: Gamze Safak, Umut Arslanoglu
Abstract:
Cultural and tourism conservation and development zones and tourism centers are boundaries declared for the purpose of protecting, using and evaluating areas where historical and cultural values are heavily concentrated and/or where tourism potential is high, and of guiding their sectoral and planned development. The most rapidly changing regions in Turkey are the tourism areas, especially the coastal areas. Planning these regions is not only a matter of economic gain but also of the natural and physical environment, and it refers to a complex process. If the tourism sector is not well controlled, excessive use of natural resources and wrong location choices may damage natural areas, historical values and the socio-cultural structure. The strategic decisions taken in the environmental order and zoning plans, the instruments through which the Ministry of Culture and Tourism, as the authority empowered to make plans in tourism centers, guides the physical environment, are transformed into plan decisions that find spatial expression; this requires a comprehensive evaluation of all kinds of data, one that follows the historical development and is based on correct and current data. In addition, the Ministry has a number of competences in tourism promotion as well as the authority to plan, which makes it necessary for it to take part in applications requiring complex analysis, such as the management and integration of the country's economic, political, social and cultural resources. For this purpose, the Tourism Inventory System (TES) project, which consists of a series of subsystems, has been developed to solve complex planning and methodological problems in the management of site-related information.
The scope of the project is based on the integration of numerical and verbal data for the regions within the jurisdiction of the authority and on monitoring the historical development of urban planning studies, making the institution's spatial data easily accessible, shareable, queryable and traceable to international standards. A dynamic and continuous system design has been put into practice, taking advantage of Geographical Information Systems in the planning process to support sound decision-making and to serve as a tool for social, economic and cultural development and for the preservation of natural and cultural values. This paper, prepared by the project team members of TES (Tourism Inventory System), presents a study on the applicability of GIS in cultural and natural heritage conservation.
Keywords: cultural conservation, GIS, geographic information system, tourism inventory system, urban planning
Procedia PDF Downloads 119
458 Enhancement in Bactericidal Activity of Hydantoin Based Microsphere from Smooth to Rough
Authors: Rajani Kant Rai, Jayakrishnan Athipet
Abstract:
There have been several attempts to prepare polymers with antimicrobial properties by doping with various N-halamines. Hydantoins (cyclic N-halamines) are of particular importance due to their stable, rechargeable chloramide function, broad-spectrum antimicrobial action and ability to prevent resistance in organisms. Polymerizable hydantoins are usually synthesized by tethering vinyl moieties to 5,5-dialkyl hydantoin at the expense of the imide hydrogen in the molecule, thereby restricting halogen capture to the amide nitrogen alone, which results in compromised antibacterial activity. In order to increase the activity of the antimicrobial polymer, we have developed a scheme that maximizes the attachment of chlorine to both the amide and the imide moieties of hydantoin. A vinyl hydantoin monomer, (Z)-5-(4-((3-methylbuta-1,3-dien-2-yl)oxy)benzylidene)imidazolidine-2,4-dione (MBBID), was synthesized and copolymerized with a commercially available monomer, methyl methacrylate, by free radical polymerization. The antimicrobial activity of hydantoins is strongly dependent on their surface area; hence their activity increases when they are incorporated in microspheres or nanoparticles rather than in their bulk counterpart. In this regard, smooth- and rough-surfaced microspheres of the vinyl monomer (MBBID) copolymerized with the commercial monomer were synthesized. The oxidative chlorine content of the copolymers ranged from 1.5 to 2.45%. Further, to demonstrate the water purification potential, thin columns were packed with smooth or rough microspheres and challenged with simulated contaminated water, which exhibited a 6-log kill (total kill) of the bacteria within 20 minutes of exposure for the smooth (25 mg/ml) and rough (15.0 mg/ml) microspheres.
Keywords: cyclic N-halamine, vinyl hydantoin monomer, rough surface microsphere, simulated contaminated water
Procedia PDF Downloads 145
457 A Comparative Study between Japan and the European Union on Software Vulnerability Public Policies
Authors: Stefano Fantin
Abstract:
The present analysis stems from research undertaken in the course of the European-funded project EUNITY, which targets the gaps in research and development on cybersecurity and privacy between Europe and Japan. Under these auspices, the research presents a study of the policy approaches of Japan, the EU and a number of Member States of the Union with regard to the handling and discovery of software vulnerabilities, with the aim of identifying methodological differences and similarities. This research builds upon a functional comparative analysis of both public policies and legal instruments from the identified jurisdictions. The analysis is based on semi-structured interviews with EUNITY partners, as well as on the researcher's participation in a recent report on software vulnerabilities by the Centre for European Policy Studies. The European Union presents a rather fragmented legal framework on software vulnerabilities. The presence of a number of different pieces of legislation at the EU level (including the Network and Information Security Directive, the Critical Infrastructure Directive, the Directive on Attacks against Information Systems and the proposal for a Cybersecurity Act), none with a clear focus on the subject, makes it difficult for both national governments and end-users (software owners, researchers and private citizens) to gain a clear understanding of the Union's approach. Additionally, the current data protection reform package (the General Data Protection Regulation) seems to create legal uncertainty around security research. To date, at the Member State level, a few efforts towards transparent practices have been made, namely by the Netherlands, France, and Latvia. This research will explain what policy approach these countries have taken. Japan started implementing a coordinated vulnerability disclosure policy in 2004; to date, two amendments to the framework have been registered (2014 and 2017).
The framework is furthermore complemented by a series of instruments allowing researchers to responsibly disclose any new discovery. However, the policy has started to lose efficiency due to a significant increase in the number of reports made to the authority in charge. To conclude, the research reveals two asymmetric policy approaches, both time-wise and content-wise. The analysis therefore concludes with a series of policy recommendations, based on the lessons learned from both regions, towards a common approach to the security of European and Japanese markets, industries and citizens.
Keywords: cybersecurity, vulnerability, European Union, Japan
Procedia PDF Downloads 156
456 Comparative Outcomes of Percutaneous Coronary Intervention in Smokers versus Nonsmoker Patients: Observational Studies
Authors: Pratima Tatke, Archana Avhad, Bhanu Duggal, Meeta Rajivlochan, Sujata Saunik, Pradip Vyas, Nidhi Pandey, Aditee Dalvi, Jyothi Subramanian
Abstract:
Background: Smoking is a well-established risk factor for the development and progression of coronary artery disease and is strongly related to morbidity and mortality from cardiovascular causes. The aim of this study was to observe the effect of smoking status on outcomes one year after percutaneous coronary intervention (PCI). Methods: 2,527 patients underwent PCI at different hospitals in Maharashtra (India) from 2012 to 2015 under a health insurance scheme covering cardiology, launched by the Health Department, Government of Maharashtra, for below-poverty-line (BPL) families. Informed consent was taken, and patients were followed by telephonic survey 6 months to 1 year after PCI. Outcomes of interest included myocardial infarction, restenosis, cardiac rehospitalization, death, and a composite of events after PCI. Patients were divided into two groups: nonsmokers (n = 1861) and smokers, including patients who quit at the time of PCI (n = 659). Results: Statistical analysis using Pearson's chi-square test revealed a trend towards increasing incidence of death, myocardial infarction and restenosis in smokers compared with nonsmokers. Smokers had a greater death risk than nonsmokers (5.7% vs. 5.1%, p = 0.518). Repeat procedures (2.1% vs. 1.5%, p = 0.222), breathlessness (17.8% vs. 18.2%, p = 0.1) and myocardial infarction (7.3% vs. 10%) were also compared between smokers and nonsmokers. Conclusion: Major adverse cardiovascular events (MACE) were observed even after successful PCI in smokers. Patients undergoing percutaneous coronary intervention should be encouraged to stop smoking.
Keywords: coronary artery diseases, major adverse cardiovascular events, percutaneous coronary intervention, smoking
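The group comparison described above rests on Pearson's chi-square test for a 2×2 contingency table. The sketch below shows that computation on counts back-calculated from the reported percentages (about 38 of 659 smoker deaths vs. about 95 of 1861 nonsmoker deaths); these counts are our reconstruction for illustration, not figures taken from the study.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def p_value_1df(chi2):
    """Upper-tail p-value for a chi-square statistic with 1 degree of
    freedom, computed via the complementary error function."""
    return math.erfc(math.sqrt(chi2 / 2.0))

# Reconstructed counts: [deaths, survivors] for smokers and nonsmokers.
chi2 = chi_square_2x2(38, 621, 95, 1766)
print(round(chi2, 3), round(p_value_1df(chi2), 3))
```

The resulting p-value comes out near the study's reported p = 0.518 (small differences are due to rounding the percentages), confirming that the difference in death rates is not statistically significant.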
455 Countering the Bullwhip Effect by Absorbing It Downstream in the Supply Chain
Authors: Geng Cui, Naoto Imura, Katsuhiro Nishinari, Takahiro Ezaki
Abstract:
The bullwhip effect, which refers to the amplification of demand variance as one moves up the supply chain, has been observed in various industries and extensively studied through analytic approaches. Existing methods to mitigate the bullwhip effect, such as decentralized demand information, vendor-managed inventory, and the Collaborative Planning, Forecasting, and Replenishment System, rely on the willingness and ability of supply chain participants to share their information. However, in practice, information sharing is often difficult to realize due to privacy concerns. The purpose of this study is to explore new ways to mitigate the bullwhip effect without the need for information sharing. This paper proposes a 'bullwhip absorption strategy' (BAS) to alleviate the bullwhip effect by absorbing it downstream in the supply chain. To achieve this, a two-stage supply chain system was employed, consisting of a single retailer and a single manufacturer. In each time period, the retailer receives an order generated according to an autoregressive process. Upon receiving the order, the retailer depletes the ordered amount, forecasts future demand based on past records, and places an order with the manufacturer using the order-up-to replenishment policy. The manufacturer follows a similar process. In essence, the mechanism of the model is similar to that of the beer game. The BAS is implemented at the retailer's level to counteract the bullwhip effect. This strategy requires the retailer to reduce the uncertainty in its orders, thereby absorbing the bullwhip effect downstream in the supply chain. The advantage of the BAS is that upstream participants can benefit from a reduced bullwhip effect. Although the retailer may incur additional costs, if the gain in the upstream segment can compensate for the retailer's loss, the entire supply chain will be better off. 
Two indicators, order variance and inventory variance, were used to quantify the bullwhip effect in relation to the strength of absorption. It was found that implementing the BAS at the retailer's level results in a reduction in both the retailer's and the manufacturer's order variances. However, when examining the impact on inventory variances, a trade-off relationship was observed. The manufacturer's inventory variance monotonically decreases with an increase in absorption strength, while the retailer's inventory variance does not always decrease as the absorption strength grows. This is especially true when the autoregression coefficient has a high value, causing the retailer's inventory variance to become a monotonically increasing function of the absorption strength. Finally, numerical simulations were conducted for verification, and the results were consistent with our theoretical analysis.
Keywords: bullwhip effect, supply chain management, inventory management, demand forecasting, order-up-to policy
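The model described above, AR(1) customer demand feeding an order-up-to retailer, can be sketched as a minimal simulation. The abstract does not specify the exact form of the BAS, so here absorption is modelled as simple exponential smoothing of the retailer's orders, and all parameter values (`MU`, `RHO`, `SIGMA`, the absorption strength `alpha`) are illustrative assumptions:

```python
import random
import statistics

random.seed(42)
MU, RHO, SIGMA, T = 100.0, 0.7, 10.0, 20000

def simulate(alpha):
    """Two-stage chain: AR(1) customer demand, order-up-to retailer.

    alpha is the absorption strength: 0 gives the plain order-up-to
    policy; larger values smooth the retailer's orders more strongly.
    """
    d_prev, o_prev = MU, MU
    demands, orders = [], []
    for _ in range(T):
        # AR(1) demand observed by the retailer
        d = MU + RHO * (d_prev - MU) + random.gauss(0.0, SIGMA)
        # Order-up-to policy with a one-step AR(1) forecast: replace
        # what was sold plus the change in the forecast-based target.
        o = d + RHO * (d - d_prev)
        # Bullwhip absorption, modelled here as order smoothing: the
        # retailer damps fluctuations before they reach the manufacturer.
        o = (1.0 - alpha) * o + alpha * o_prev
        demands.append(d)
        orders.append(o)
        d_prev, o_prev = d, o
    return statistics.variance(demands), statistics.variance(orders)

vd_plain, vo_plain = simulate(0.0)  # no absorption: order variance amplified
vd_bas, vo_bas = simulate(0.5)      # absorption damps the upstream orders
```

Under the plain policy the order variance exceeds the demand variance (the bullwhip effect), while the smoothed orders show visibly lower variance, mirroring the reduction in the manufacturer's order variance described in the abstract.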
454 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data can differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions.
The results provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts that define competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.
Keywords: data analytics, smart cities, competitive advantage, internet of things