Search results for: finite elements methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19581

13791 Two Houses in the Arabian Desert: Assessing the Built Work of RCR Architects in the UAE

Authors: Igor Peraza Curiel, Suzanne Strum

Abstract:

Today, when many foreign architects are receiving commissions in the United Arab Emirates, it is essential to analyze how their designs are influenced by the region's culture, environment, and building traditions. This study examines the approach to siting, geometry, construction methods, and material choices in two private homes for a family in Dubai, a project being constructed on adjacent sites by the acclaimed Spanish team of RCR Architects. Their third project in Dubai, the houses mark a turning point in their design approach to the desert. The Pritzker Prize-winning architects of RCR gained renown for building works deeply responsive to the history, landscape, and customs of their hometown in a volcanic area of the Catalonia region of Spain. Key formative projects and their entry to practice in the UAE will be analyzed according to the concepts of place identity, the poetics of construction, and material imagination. The poetics of construction, a theoretical position with a long practical tradition, was revived by the British critic Kenneth Frampton. The idea of architecture as a constructional craft is related to the concepts of material imagination and place identity: phenomenological concerns with the creative engagement with local matter and topography that are at the very essence of RCR's way of designing, detailing, and making. Our study situates RCR within the challenges of building in the region, where western forms and means have largely replaced the ingenious responsiveness of indigenous architecture to the climate and material scarcity. The dwellings, iterations of the same steel and concrete vaulting system, highlight the conceptual framework of RCR's design approach to offer a study in contemporary critical regionalism. The Kama House evokes Bedouin tents, while the Alwah House takes the form of desert dunes in response to the temporality of the winds. Metal mesh screens designed to capture the shifting sands will complete the forms.
The original research draws on interviews with the architects and unique documentation provided by them and collected by the authors during on-site visits. By examining the two houses in depth, this paper foregrounds a series of timely questions: 1) What is the impact of the local climatic, cultural, and material conditions on their project in the UAE? 2) How does this work further their experiences in the region? 3) How has RCR adapted their construction techniques as their work expands beyond familiar settings? The investigation seeks to understand how a design methodology developed over more than 20 years and enmeshed in the regional milieu of their hometown can transform as the architects encounter unique characteristics and values in the Middle East. By focusing on the contemporary interpretation of Arabic geometry and elements, the houses reveal the role of geometry, tectonics, and material specificity in the realization from conceptual sketches to built form. In emphasizing the importance of regional responsiveness, the dynamics of international construction practice, and detailing, this study highlights essential issues for professionals and students looking to practice in an increasingly global market.

Keywords: material imagination, regional responsiveness, place identity, poetics of construction

Procedia PDF Downloads 121
13790 Linguistic Features for Sentence Difficulty Prediction in Aspect-Based Sentiment Analysis

Authors: Adrian-Gabriel Chifu, Sebastien Fournier

Abstract:

One of the challenges of natural language understanding is to deal with the subjectivity of sentences, which may express opinions and emotions that add layers of complexity and nuance. Sentiment analysis is a field that aims to extract and analyze these subjective elements from text, and it can be applied at different levels of granularity, such as document, paragraph, sentence, or aspect. Aspect-based sentiment analysis is a well-studied topic with many available data sets and models. However, there is no clear definition of what makes a sentence difficult for aspect-based sentiment analysis. In this paper, we explore this question by conducting an experiment with three data sets: "Laptops", "Restaurants", and "MTSC" (Multi-Target-dependent Sentiment Classification), as well as a merged version of the three. We study the impact of domain diversity and syntactic diversity on difficulty. We use a combination of classifiers to identify the most difficult sentences and analyze their characteristics. We employ two ways of defining sentence difficulty. The first is binary and labels a sentence as difficult if the classifiers fail to correctly predict the sentiment polarity. The second is a six-level scale based on how many of the top five best-performing classifiers can correctly predict the sentiment polarity. We also define nine linguistic features that, combined, aim at estimating the difficulty at the sentence level.
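The six-level scale described in the abstract can be sketched directly. The code below is an illustrative reconstruction only; the paper's exact labeling procedure, classifier set, and function names are not given in the abstract, and the binary rule shown is one plausible reading of it.

```python
def difficulty_level(gold_polarity, top5_predictions):
    """Six-level difficulty (0-5): the number of the top five
    best-performing classifiers that fail to predict the correct
    sentiment polarity for a sentence."""
    assert len(top5_predictions) == 5
    return sum(1 for p in top5_predictions if p != gold_polarity)

def is_difficult(gold_polarity, top5_predictions):
    """Binary labeling (assumed interpretation): difficult if no
    classifier predicts the correct polarity."""
    return difficulty_level(gold_polarity, top5_predictions) == 5
```

A sentence every classifier gets right scores 0 (easiest); one that defeats all five scores 5 (hardest).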

Keywords: sentiment analysis, difficulty, classification, machine learning

Procedia PDF Downloads 67
13789 The Reason Why Al-Kashi’s Understanding of Islamic Arches Was Wrong

Authors: Amin Moradi, Maryam Moeini

Abstract:

It is a widely held view that Ghiyath al-Din Jamshid-e-Kashani, also known as al-Kashi (1380-1429 CE), was the first to play a significant role in the interaction between mathematicians and architects by introducing theoretical knowledge into Islamic architecture. In academic discourse, the geometric rules extracted from his splendid volume, the Key of Arithmetic, have been uncritically believed by historians of architecture to govern the whole process of arch design throughout Islamic buildings. His theories tried to solve the fundamental problem of structural design and to understand what makes an Islamic structure safe or unsafe. As a result, al-Kashi arrived at the conclusion that a safe state of equilibrium is achieved, as a rule, through a specific geometry. This paper reassesses the stability of al-Kashi's systematized principal forms to evaluate the logic of his hypothesis, with a special focus on large spans. Besides the empirical experience of the author in masonry construction, a finite element approach following current standards was used in order to better assess the validity of the geometric rules proposed by al-Kashi for the equilibrium conditions of Islamic masonry arches and vaults. The state of damage of his reference arches under loading conditions confirms beyond any doubt that the geometrical configurations derived from his treatises present some serious operational limits and do not go further than individualized mathematical hypotheses. Therefore, the nature of his mathematical studies regarding Islamic arches is in complete contradiction with the practical knowledge of construction methodology.

Keywords: Jamshid al-Kashani, Islamic architecture, Islamic geometry, construction equilibrium, collapse mechanism

Procedia PDF Downloads 113
13788 Bone Mineral Density and Quality, Body Composition of Women in the Postmenopausal Period

Authors: Vladyslav Povoroznyuk, Oksana Ivanyk, Nataliia Dzerovych

Abstract:

In the diagnostics of osteoporosis, bone mineral density is considered the gold standard; however, X-ray densitometry is not an accurate indicator of osteoporotic fracture risk under all circumstances. In this regard, the search for new methods that could determine indicators not only of mineral density but of bone tissue quality is a logical step for diagnostic optimization. One of these methods is the evaluation of trabecular bone quality. The aim of this study was to examine the quality and mineral density of spine bone tissue, the femoral neck, and the body composition of women depending on the duration of the postmenopausal period, and to determine the correlation of body fat with indicators of bone mineral density and quality. The study examined 179 women in the premenopausal and postmenopausal periods. The patients were divided into the following groups: women in the premenopausal period and women at various stages of the postmenopausal period (early, middle, and late postmenopause). A general examination and a study of the above parameters were conducted with a General Electric X-ray densitometer. The results show that bone quality and mineral density probably deteriorate as the postmenopausal period advances. The ratio of total fat to lean mass is not likely to change with age. In the middle and late postmenopausal periods, the bone tissue mineral density of the spine and femoral neck increases along with total fat mass.

Keywords: osteoporosis, bone tissue mineral density, bone quality, fat mass, lean mass, postmenopausal osteoporosis

Procedia PDF Downloads 327
13787 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion in assessing multi-hazard impacts, and so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so that people confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech) such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than one hazard alone. In addition, climate change is creating links between different disasters, such as causing landslide dams and debris flows that lead to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper aims to review the multi-hazard assessment methods through articles published to date and to categorize the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of using multi-hazard risk assessments.
In order to assess multi-hazard risk assessments, first, the current multi-hazard risk assessment methods were described. Next, the drawbacks of these methods were outlined. Finally, the improvements made to date to current multi-hazard risk assessments were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the necessity for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to overestimation or oversight in resilience and recovery action management.
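A toy illustration of why hazard interactions matter: the naive combined annual occurrence probability below assumes the hazards are independent, which is precisely the assumption that cascading hazards (e.g. an earthquake triggering a Natech event) violate, and that multi-hazard methods try to correct. The function and numbers are illustrative only, not taken from the study.

```python
def naive_combined_probability(annual_probs):
    """P(at least one hazard occurs in a year) under the often
    invalid assumption that hazards occur independently."""
    p_no_hazard = 1.0
    for p in annual_probs:
        p_no_hazard *= (1.0 - p)
    return 1.0 - p_no_hazard

# If an earthquake makes a landslide more likely, the true combined
# probability exceeds this independent-hazard estimate, which is one
# way single-hazard thinking underestimates multi-hazard risk.
```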

Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment

Procedia PDF Downloads 100
13786 Judicial Institutions in a Post-Conflict Society: Gaining Legitimacy through a Holistic Reform

Authors: Abdul Salim Amin

Abstract:

This paper focuses on how judiciaries in post-conflict societies gain legitimacy through reformation. Legitimacy plays a pivotal role in shaping people's behavior to submit to the law and verifies the rightfulness of an organ for taking binding decisions. Among various dynamics, judicial independence, access to justice, and behavioral changes of judicial officials broadly contribute to the legitimation of the judiciary in general, and the courts in particular. Increasing the independence of the judiciary through reform limits the interference of governmental branches in judicial issues and protects the basic rights of citizens. Judicial independence does not only matter in institutional terms; individual independence also influences the impartiality and integrity of judges, which can be increased through education and better administration of justice. Finally, access to justice, an intertwined concept at both the legal and the moral ends of the spectrum of judicial reform, makes justice available to citizens and increases the level of public trust and confidence. Efficient legal decisions fostering such elements through holistic reform create a rule-of-law atmosphere. Citizens do not accept an illegitimate judiciary and do not trust its decisions. Lack of such tolerance and confidence deters the rule of law and thus undermines the democratic development of a society.

Keywords: legitimacy, judicial reform, judicial independence, access to justice, legal training, informal justice, rule of law

Procedia PDF Downloads 490
13785 A Design of an Elliptic Curve Cryptography Processor Based on SM2 over GF(p)

Authors: Shiji Hu, Lei Li, Wanting Zhou, DaoHong Yang

Abstract:

Data encryption is the foundation of today's communication, and on this basis, how to improve the speed of data encryption and decryption is a problem that scholars continually work on. In this paper, we propose an elliptic curve crypto processor architecture based on the SM2 prime field. In terms of hardware implementation, we optimized the algorithms in different stages of the structure. For the finite field modular operations, we propose an optimized improvement of the Karatsuba-Ofman multiplication algorithm and shorten the critical path through a pipeline structure in the algorithm implementation. Based on the SM2 recommended prime field, a fast modular reduction algorithm is used to reduce the 512-bit-wide data obtained from the multiplication unit. The radix-4 extended Euclidean algorithm is used to realize the conversion between the affine coordinate system and the Jacobian projective coordinate system. In the parallel scheduling of point operations on elliptic curves, we propose a three-level parallel structure for point addition and point doubling based on the Jacobian projective coordinate system. Combined with the scalar multiplication algorithm, we added mutual pre-operations to the point addition and point doubling operations to improve the efficiency of scalar point multiplication. The proposed ECC hardware architecture was verified and implemented on Xilinx Virtex-7 and ZYNQ-7 platforms, where each 256-bit scalar multiplication operation took 0.275 ms. The performance for handling scalar multiplication is 32 times that of a CPU (dual-core ARM Cortex-A9).
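The scalar multiplication that the architecture accelerates is, at its core, a double-and-add loop over point operations. Purely as a software illustration of that loop, here is a sketch over a toy curve in affine coordinates; the curve parameters are invented for illustration and are not the SM2 curve, and the paper's hardware uses Jacobian coordinates and parallel scheduling rather than this sequential form.

```python
# Toy short Weierstrass curve y^2 = x^3 + A*x + B over GF(P_MOD).
# Parameters chosen for illustration only (NOT the SM2 curve).
P_MOD, A, B = 97, 2, 3
O = None  # point at infinity (group identity)

def point_add(p1, p2):
    """Affine point addition, handling identity, inverses, doubling."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O  # p2 is the inverse of p1
    if p1 == p2:  # tangent slope for doubling
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:         # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, point):
    """Left-to-right double-and-add: one doubling per bit of k,
    plus one addition per set bit."""
    result = O
    for bit in bin(k)[2:]:
        result = point_add(result, result)     # double
        if bit == "1":
            result = point_add(result, point)  # add
    return result
```

For example, with the base point (3, 6) on this toy curve, `scalar_mult(2, (3, 6))` equals `point_add((3, 6), (3, 6))`, and the result still satisfies the curve equation.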

Keywords: elliptic curve cryptosystems, SM2, modular multiplication, point multiplication

Procedia PDF Downloads 82
13784 The Role of DNA Evidence in Determining Paternity in India: A Study of Cases from the Legal and Scientific Perspective

Authors: Pratyusha Das

Abstract:

A paradigm shift has been noticed in the interpretation of DNA evidence for determining paternity: such evidence has sometimes been accepted and sometimes rejected by the Indian courts, which have forwarded various legal and scientific justifications for acceptance and rejection. Laws have also been changed to accommodate the necessities of society. A balance between the legal and scientific approaches is required to make the best possible use of DNA evidence for the well-being of society. Specifications are to be framed as to when such evidence can be used in the future, by pointing out the pros and cons, and the judicial trend is to be formulated to establish the present situation. The study of cases of the superior courts of India, using an analytical and theoretical approach, drives the questions regarding the shared identity of the legal and scientific approaches. To assimilate the differences between the two approaches, their basic differences have to be formulated. Revelations are required to access the favorable decisions using DNA evidence; reasons are to be forwarded for the unfavorable decisions and the approach preferred in such cases. The outcome of the two methods has to be assessed in relation to the parties to the dispute, society at large, the researcher, and the judicial point of view, and the dependability of the two methods is to be studied in relation to the justice delivery system. A chronological study of cases, along with the changes in the laws and the aid of presumptions, will address the question of which method is necessary according to the facts and situations. It must also be addressed whether the legal and scientific forces converge somewhere, pushing the traditional identification of paternity towards a fundamental change.

Keywords: cases, evidence, legal, scientific

Procedia PDF Downloads 232
13783 A Transformer-Based Approach for Multi-Human 3D Pose Estimation Using Color and Depth Images

Authors: Qiang Wang, Hongyang Yu

Abstract:

Multi-human 3D pose estimation is a challenging task in computer vision, which aims to recover the 3D joint locations of multiple people from multi-view images. In contrast to traditional methods, which typically only use color (RGB) images as input, our approach utilizes both the color and depth (D) information contained in RGB-D images. We also employ a transformer-based model as the backbone of our approach, which is able to capture long-range dependencies and has been shown to perform well on various sequence modeling tasks. Our method is trained and tested on the Carnegie Mellon University (CMU) Panoptic dataset, which contains a diverse set of indoor and outdoor scenes with multiple people in varying poses and clothing. We evaluate the performance of our model on the standard 3D pose estimation metric of mean per-joint position error (MPJPE). Our results show that the transformer-based approach outperforms traditional methods and achieves competitive results on the CMU Panoptic dataset. We also perform an ablation study to understand the impact of different design choices on the overall performance of the model. In summary, our work demonstrates the effectiveness of using a transformer-based approach with RGB-D images for multi-human 3D pose estimation and has potential applications in real-world scenarios such as human-computer interaction, robotics, and augmented reality.
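The MPJPE metric the abstract evaluates is simple to state: the average Euclidean distance between predicted and ground-truth joint positions. A minimal sketch follows; the array shapes and function name are assumptions for illustration, since the abstract does not describe its evaluation code (nor alignment details such as root-joint centering).

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: the average Euclidean
    distance between predicted and ground-truth 3D joints.
    pred, gt: arrays of shape (num_people, num_joints, 3)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

A prediction offset from the ground truth by a constant (3, 4, 0) vector, for instance, yields an MPJPE of exactly 5.0.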

Keywords: multi-human 3D pose estimation, RGB-D images, transformer, 3D joint locations

Procedia PDF Downloads 66
13782 The Role of Human Capital in the Evolution of Inequality and Economic Growth in Latin-America

Authors: Luis Felipe Brito-Gaona, Emma M. Iglesias

Abstract:

There is a growing literature that studies the main determinants and drivers of inequality and economic growth in several countries, using panel data and different estimation methods (fixed effects, the Generalized Method of Moments (GMM), and Two-Stage Least Squares (TSLS)). Recently, the evolution of these variables in the period 1980-2009 was studied for the 18 countries of Latin America, and it was found that one of the main variables explaining their evolution was Foreign Direct Investment (FDI). We extend this study to the year 2015 in the same 18 countries and find that FDI no longer has a significant role, while we find significant negative and positive effects of schooling levels on inequality and economic growth, respectively. We also find that the point estimates associated with human capital are the largest of the variables included in the analysis, which means that an increase in human capital (measured by schooling levels of secondary education) is the main determinant that can help to reduce inequality and increase economic growth in Latin America. Therefore, we advise that economic policies in Latin America should be directed towards increasing the level of education. We use fixed effects, GMM, and TSLS estimation to check the robustness of our results; our conclusion is the same regardless of the estimation method we choose. We also find that the 2008 international recession significantly reduced economic growth in the Latin-American countries.
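For readers unfamiliar with the fixed-effects method named above, the core is the within transformation: demean each variable by country so that time-invariant country effects drop out, then run OLS on the demeaned data. The sketch below is a minimal illustration with invented data; the paper's actual specification, controls, and GMM/TSLS instruments are not reproduced here.

```python
import numpy as np

def within_estimator(y, x, groups):
    """Fixed-effects (within) estimator: demean y and x within
    each group (e.g. country), then fit OLS on the demeaned data.
    y: (n,), x: (n, k), groups: length-n labels."""
    y = np.asarray(y, float)
    x = np.asarray(x, float)
    yd, xd = y.copy(), x.copy()
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        yd[idx] -= y[idx].mean()          # remove group mean of y
        xd[idx] -= x[idx].mean(axis=0)    # remove group mean of x
    beta, *_ = np.linalg.lstsq(xd, yd, rcond=None)
    return beta
```

With y = 2x plus a different additive country effect per group, the within estimator recovers the slope 2 exactly, while pooled OLS on the raw data would be biased by the country effects.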

Keywords: economic growth, human capital, inequality, Latin-America

Procedia PDF Downloads 213
13781 Spatial REE Geochemical Modeling at Lake Acıgöl, Denizli, Turkey: Analytical Approaches on Spatial Interpolation and Spatial Correlation

Authors: M. Budakoglu, M. Karaman, A. Abdelnasser, M. Kumral

Abstract:

The spatial interpolation and spatial correlation of the rare earth elements (REE) in the lake surface sediments of Lake Acıgöl and its surrounding lithological units are carried out using GIS techniques such as Inverse Distance Weighted (IDW) interpolation and Geographically Weighted Regression (GWR). The IDW technique, which performs the spatial interpolation, shows that lithological units such as the Hayrettin Formation north of Lake Acıgöl have higher REE contents than the lake sediments, as well as higher ∑LREE and ∑HREE contents. However, Eu/Eu* values (based on the chondrite-normalized REE pattern) are higher in some lake surface sediments than in the lithological units, which indicates a negative Eu anomaly. Also, the spatial interpolation of the V/Cr ratio indicates that the Acıgöl lithological units and lake sediments were deposited in oxic and dysoxic conditions. The spatial correlation, in turn, is carried out with the GWR technique, which shows a spatial correlation coefficient between ∑LREE and ∑HREE that is higher in some lithological units (the Hayrettin Formation and the Cameli Formation) than in the other lithological units and the lake surface sediments. Also, the correlation of the REEs with Sc and Al suggests that the REE abundances of the Lake Acıgöl sediments were weathered from the local bedrock around the lake.
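IDW estimates a value at an unsampled location as a distance-weighted average of nearby samples, with weights decaying as a power of distance. A minimal sketch follows; the power parameter, sample layout, and function name are illustrative assumptions, not the study's GIS settings.

```python
def idw(x, y, samples, power=2):
    """Inverse Distance Weighted estimate at (x, y).
    samples: iterable of (xi, yi, value) tuples.
    Weight of each sample is 1 / distance**power."""
    num = den = 0.0
    for xi, yi, v in samples:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0:
            return v  # query point coincides with a sample
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den
```

A point midway between two equal-distance samples receives their simple average, and the interpolation honors the sample values exactly at the sample locations.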

Keywords: spatial geochemical modeling, IDW, GWR techniques, REE, lake sediments, Lake Acıgöl, Turkey

Procedia PDF Downloads 541
13780 Study of Side Effects of Myopia Contact Correction by Soft Lenses and Orthokeratology Lenses among Medical Students

Authors: K. Iu. Hrizhymalska, O. Ol. Andrushkova, I. Iu. Pshenychna

Abstract:

Aim. To study and compare the side effects of myopia contact correction by soft lenses and orthokeratology lenses among medical students. Patients and methods: 34 students (68 eyes) with moderate and severe myopia, who had used contact correction of myopia for 2-4 years, were examined. Some of them used soft lenses, while others used orthokeratology lenses. The following methods were used: biomicroscopy of the eye surface, Schirmer's test, Norn's test, and a survey regarding satisfaction with use. Results. Corneal vascularization along the limbus was noted in 4 (5%) eyes of the examined students. In 8 (11%) eyes, symptoms of mild dry eye disease were detected. 2 (3%) eyes showed signs of meibomitis. Allergic conjunctivitis was observed in 4 (5%) eyes, and a purulent corneal ulcer was present in 1 eye. The surveys showed that orthokeratology lenses, unlike soft lenses, do not limit everyday activity (in sports, tourism, swimming, etc.); they also do not cause discomfort during temperature changes, and they reduce existing symptoms of dry eye disease. Conclusion. Contact correction of myopia is thus one of the optimal options for students, as it allows them to expand their physical and mental activity. However, taking into account the frequency of side effects in users of soft contact lenses, it is necessary to carry out prevention and treatment of myopia in medical students, follow the recommendations for use, and instill preservative-free tear substitutes with trehalose when symptoms of dry eye appear. Also, when side reactions occur, contact correction with soft lenses should be changed to orthokeratology lenses.

Keywords: correction, myopia, soft lenses, orthokeratology, spectacles, cornea, dry eye, side effects, refractive errors

Procedia PDF Downloads 45
13779 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior

Authors: Juliana A. Knocikova

Abstract:

Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time, and thus a level of new signal pattern generation. During the last decades, many algorithms were introduced to assess patterns of physiological responses to an external stimulus. However, reflex responses are usually characterized by short periods of time, a characteristic that represents a great limitation for the usual methods of nonlinear analysis. To solve the problems of short recordings, the parameter of approximate entropy has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series. On the other hand, an increase in this parameter means unpredictability and random behavior, hence a higher system complexity. Reduced neurophysiological data complexity has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also obvious during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
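Approximate entropy compares how often patterns of length m and m+1 recur within a tolerance r: regular signals recur as readily at length m+1 as at length m, giving values near zero. Below is a direct O(N²) sketch of the standard Pincus definition; in practice r is usually set relative to the signal's standard deviation (e.g. r = 0.2·SD), which this minimal version leaves to the caller.

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a short time series.
    Low values indicate regularity and predictability; higher
    values indicate unpredictability and complexity."""
    n = len(series)

    def phi(m):
        # All overlapping templates of length m (self-matches included,
        # as in the standard ApEn definition).
        patterns = [series[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for p in patterns:
            matches = sum(
                1 for q in patterns
                if max(abs(a - b) for a, b in zip(p, q)) <= r
            )
            total += math.log(matches / len(patterns))
        return total / len(patterns)

    return phi(m) - phi(m + 1)
```

A perfectly periodic series such as 1, 2, 1, 2, ... yields an ApEn close to zero, consistent with the reduced-complexity observations described in the abstract.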

Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex

Procedia PDF Downloads 292
13778 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, until now the use of fECG in clinical practice has been quite limited due to the difficulties in its measurement. The recovery of fECG from signals acquired non-invasively by electrodes placed on the maternal abdomen is a challenging task, because abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach for fECG extraction from abdominal maternal recordings, which exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and of finding the linear combinations of preprocessed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims at improving the performance of the most commonly adopted methods for fECG extraction, which are usually based on maternal ECG (mECG) estimation and cancellation. The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; maternal ECG (mECG) extraction and maternal QRS detection; mECG component approximation and cancellation by weighted principal component analysis; fECG extraction by fQI maximization and fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing the mECG, estimated by principal component analysis (PCA), from the abdominal signals and applying Independent Component Analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute-long abdominal measurements with fetal QRS annotation from dataset A provided by the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and the ICA-based methods were compared by analyzing two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations, thus allowing a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so that the comparison between the two methods can only be qualitative. On the annotated database ADdb, the QIO method provided the performance indexes Sens=0.9988, PPA=0.9991, F1=0.9989, outperforming the ICA-based one, which provided Sens=0.9966, PPA=0.9972, F1=0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index resulted higher for the QIO-based method than for the ICA-based one in 35 of the 55 records of the NIdb. The QIO-based method gave very high performance on both databases. The results of this study foresee the application of the algorithm in a fully unsupervised way for implementation in wearable devices for self-monitoring of fetal health.
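For reference, the reported F1 values are consistent with F1 being the harmonic mean of sensitivity (Sens) and positive predictive accuracy (PPA), as is standard for QRS detection benchmarks. A one-line sketch:

```python
def f1_score(sensitivity, ppa):
    """F1 as the harmonic mean of sensitivity (recall) and
    positive predictive accuracy (precision)."""
    return 2 * sensitivity * ppa / (sensitivity + ppa)

# QIO method: Sens=0.9988, PPA=0.9991 -> F1 close to the reported 0.9989
# ICA method: Sens=0.9966, PPA=0.9972 -> F1 close to the reported 0.9969
```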

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 268
13777 Numerical Investigation of a Spiral Bladed Tidal Turbine

Authors: Mohammad Fereidoonnezhad, Seán Leen, Stephen Nash, Patrick McGarry

Abstract:

From the perspective of research innovation, the tidal energy industry is still in its early stages. While a very small number of turbines have progressed to utility-scale deployment, blade breakage is commonly reported due to the enormous hydrodynamic loading applied to devices. The aim of this study is the development of computer simulation technologies for the design of next-generation fibre-reinforced composite tidal turbines. This will require significant technical advances in the areas of tidal turbine testing and multi-scale computational modelling. The complex turbine blade profiles are designed to incorporate non-linear distributions of airfoil sections to optimize power output and self-starting capability while reducing power fluctuations. A number of candidate blade geometries are investigated, ranging from spiral geometries to parabolic geometries, with blades arranged in both cylindrical and spherical configurations on a vertical axis turbine. A combined blade element theory (BET) start-up model is developed in MATLAB to perform computationally efficient parametric design optimisation for a range of turbine blade geometries. Finite element models are developed to identify optimal fibre-reinforced composite designs to increase blade strength and fatigue life. Advanced fluid-structure-interaction simulations are also carried out to compute blade deflections following design optimisation.

Keywords: tidal turbine, composite materials, fluid-structure-interaction, start-up capability

Procedia PDF Downloads 106
13776 Social Entrepreneurship against Depopulation: Network Analysis within the Theoretical Framework of the Quadruple Helix

Authors: Esperanza Garcia-Uceda, Josefina L. Murillo-Luna, M. Pilar Latorre-Martinez, Marta Ferrer-Serrano

Abstract:

Social entrepreneurship represents an innovation on traditional business models. During the last decade, its important role in contributing to rural and regional development has been widely recognized, due to its capacity to combat the problem of depopulation through the creation of employment. However, the success of this type of innovative business initiative depends to a large extent on the existence of an adequate ecosystem of support resources. Based on the theoretical framework of the quadruple helix (QH), which highlights the need for collaboration between different interest groups (university, industry, government, and civil society) for the development of regional innovations, in this work network analysis is applied to study the ecosystem of resources supporting social entrepreneurship in the rural area of the province of Zaragoza (Spain). It is a quantitative analysis that can be used to measure the interactions between the different actors that make up the quadruple helix, as well as the networks created between the different institutions and support organizations, through the study of the complex networks they form. The results show the importance of the involvement of local governments and the university as key elements in the development process, but they also allow identifying other issues that are susceptible to improvement.
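As a toy illustration of the kind of actor-level measure such a network analysis yields, the sketch below computes degree centrality for a small hypothetical collaboration network. The actor names and edges are invented for illustration only; they are not the study's data, which concern the Zaragoza support ecosystem.

```python
def degree_centrality(edges):
    """Degree centrality of each node in an undirected network:
    the fraction of the other nodes it is directly connected to."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {node: d / (n - 1) for node, d in deg.items()}

# Hypothetical quadruple-helix collaboration ties (illustrative only).
edges = [
    ("university", "local_government"),
    ("university", "rural_enterprise"),
    ("local_government", "rural_enterprise"),
    ("local_government", "civil_association"),
    ("industry_cluster", "rural_enterprise"),
]
centrality = degree_centrality(edges)
# Actors with the highest centrality act as the key connectors of
# the support ecosystem, mirroring the study's finding about local
# governments and the university.
```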

Keywords: ecosystem of support resources, network analysis, quadruple helix, social entrepreneurship

Procedia PDF Downloads 237
13775 Superficial Metrology of Organometallic Chemical Vapour Deposited Undoped ZnO Thin Films on Stainless Steel and Soda-Lime Glass Substrates

Authors: Uchenna Sydney Mbamara, Bolu Olofinjana, Ezekiel Oladele B. Ajayi

Abstract:

Elaborate surface metrology of undoped ZnO thin films, deposited by the organometallic chemical vapour deposition (OMCVD) technique at different precursor flow rates, was carried out. A dicarbomethyl-zinc precursor was used, and the films were deposited on AISI 304L steel and soda-lime glass substrates. Ultraviolet-visible-near-infrared (UV-Vis-NIR) spectroscopy showed that all the thin films were over 80% transparent, with an average bandgap of 3.39 eV. X-ray diffraction (XRD) results showed that the thin films were crystalline with a hexagonal structure, while Rutherford backscattering spectroscopy (RBS) results identified the elements present in each thin film as zinc and oxygen in a 1:1 ratio. Microscope and contactless profilometer results gave images with characteristic colours, and the profilometer also provided surface roughness data in both 2D and 3D. The asperity distribution of the thin film surfaces was Gaussian, while the average fractal dimension Da was at least 2.5 (Da ≥ 2.5). The metrology proved the surfaces suitable for ‘touch electronics’ and for coating mechanical parts for low friction.

Keywords: undoped ZnO, precursor flow rate, OMCVD, thin films, surface texture, tribology

Procedia PDF Downloads 53
13774 A Criterion to Minimize FE Mesh-Dependency in a Concrete Plate Subjected to Impact Loading

Authors: Hyo-Gyung Kwak, Han Gul Gang

Abstract:

In the context of an increasing need for reliability and safety in concrete structures under blast and impact loading conditions, the behavior of concrete at high strain rates has been an important issue. Since concrete subjected to impact loading associated with high strain rates shows quite different material behavior from that in the static state, several material models have been proposed and used to describe the high strain rate behavior under blast and impact loading. In this modelling process, mesh dependency of the finite element (FE) model is a key problem, because simulation results under high strain-rate conditions are quite sensitive to the applied FE mesh size; the accuracy of the simulation results may depend strongly on the mesh size used. This paper introduces an improved criterion which can minimize the mesh-dependency of simulation results on the basis of the fracture energy concept, and the HJC (Holmquist Johnson Cook), CSC (Continuous Surface Cap) and K&C (Karagozian & Case) models are examined to trace their relative sensitivity to the FE mesh size. Consistent with the purpose of the penetration test of a concrete plate by a projectile (bullet), the residual velocities of the projectile after penetration are compared. The correlation studies between analytical results and the associated parametric studies show that the variation of residual velocity with FE mesh size is greatly reduced by applying a unique failure strain value determined according to the proposed criterion.
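The fracture-energy idea behind such criteria can be illustrated with the classic crack-band regularization (a generic sketch under a linear-softening assumption and illustrative concrete values, not necessarily the paper's exact criterion): the failure strain is scaled with element size so that the energy dissipated per unit crack area stays mesh-independent.

```python
def failure_strain(fracture_energy, tensile_strength, element_size):
    """Mesh-adjusted failure strain via the crack-band idea: the fracture
    energy G_f is smeared over one element width h, so coarser meshes get
    a smaller failure strain. Linear softening gives
        eps_f = 2 * G_f / (f_t * h)."""
    return 2.0 * fracture_energy / (tensile_strength * element_size)

G_f, f_t = 100.0, 3.0e6   # J/m^2 and Pa, illustrative concrete values
for h in (0.005, 0.01, 0.02):          # candidate element sizes in metres
    eps_f = failure_strain(G_f, f_t, h)
    # energy dissipated per unit crack area is mesh-independent:
    dissipated = 0.5 * f_t * eps_f * h  # equals G_f for every h
```

Assigning a per-mesh failure strain in this way is what keeps quantities such as residual projectile velocity from drifting as the mesh is refined.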

Keywords: high strain rate concrete, penetration simulation, failure strain, mesh-dependency, fracture energy

Procedia PDF Downloads 510
13773 Designing Modified Nanocarriers Containing Selenium Nanoparticles Extracted from the Lactobacillus acidophilus and Their Anticancer Properties

Authors: Mahnoosh Aliahmadi, Akbar Esmaeili

Abstract:

This study synthesized new modified imaging nanocapsules (NCs) of gallium@deferoxamine/folic acid/chitosan/polyaniline/polyvinyl alcohol (Ga@DFA/FA/CS/PANI/PVA) containing Morus nigra extract, using selenium nanoparticles prepared from Lactobacillus acidophilus. Se nanoparticles were then deposited on Ga@DFA/FA/CS/PANI/PVA using the impregnation method. The modified contrast agents were mixed with M. nigra extract, and their antibacterial activities were investigated by applying them to L929 cell lines. The influence of variable factors, including surfactant, solvent, aqueous phase, pH, buffer, minimum inhibitory concentration (MIC), minimum bactericidal concentration (MBC), cytotoxicity on cancer cells, antibiotic, antibiogram, release and loading, stirring effect, nanoparticle concentration, olive oil, and thermal methods, was investigated. The structure and morphology of the synthesized contrast agents were characterized by zeta potential sizer analysis (ZPS), X-ray diffraction (XRD), Fourier-transform infrared (FT-IR), energy-dispersive X-ray (EDX), and ultraviolet-visible (UV-Vis) spectra, as well as scanning electron microscopy (SEM). The experimental section was conducted and monitored by response surface methods (RSM) and the MTT conversion assay. Antibiogram testing of the NCs on Pseudomonas aeruginosa bacteria was successful, and a MIC of 2 was obtained with a less harmful effect.

Keywords: imaging contrast agent, nanoparticles, response surface method, Lactobacillus acidophilus, selenium

Procedia PDF Downloads 66
13772 Synthesis of Rare-Earth Pyrazolate Compounds

Authors: Nazli Eslamirad, Peter C. Junk, Jun Wang, Glen B. Deacon

Abstract:

Since pyrazoles and pyrazolate ions show widely versatile coordination behavior towards a great range of metals, including d-block, f-block and main-group elements, they attract interest as ligands for preparing compounds. A variety of rare-earth pyrazolate complexes have previously been synthesized by redox transmetalation/protolysis (RTP). In this study, rare-earth pyrazolate complexes of two pyrazoles, 3,5-dimethylpyrazole (Me₂pzH) and 3,5-di-tert-butylpyrazole (t-Bu₂pzH), have been synthesized by RTP reactions, with structures spanning the whole La-Lu array as well as Sc and Y. There have been further developments in this study. The structure of [Tb(Me₂pz)₃(thf)]₂ has been determined; it is isomorphous with the previously reported [Dy(Me₂pz)₃(thf)]₂ and [Lu(Me₂pz)₃(thf)]₂ analogues and has two µ-1(N):2(Nʹ)-Me₂pz ligands (the most common pyrazolate ligation for non-rare-earth complexes). Most of the previously reported t-Bu₂pzH compounds were monomeric; however, the lanthanum derivative [La(Me₂pz)₃thf₂], previously reported without a crystal structure, has now been structurally characterized, along with its cerium and lutetium analogues. A polymeric samarium structure has also been synthesized; its neodymium analogue was reported previously, and comparing these polymeric structures supports the idea that the geometry of Sm(tBu₂pz)₃ affects the coordination of the solvent. In addition, by using 1,2-dimethoxyethane (DME) instead of tetrahydrofuran (THF), the new [Er(tBu₂pz)₃(dme)₂] is now reported.

Keywords: lanthanoid complexes, pyrazolate, redox transmetalation/protolysis, x-ray crystal structures

Procedia PDF Downloads 201
13771 ¹⁸F-FDG PET/CT Impact on Staging of Pancreatic Cancer

Authors: Jiri Kysucan, Dusan Klos, Katherine Vomackova, Pavel Koranda, Martin Lovecek, Cestmir Neoral, Roman Havlik

Abstract:

Aim: The prognosis of patients with pancreatic cancer is poor. The median survival after diagnosis is 3-11 months without surgical treatment and 13-20 months with surgical treatment depending on the disease stage; 5-year survival is less than 5%. Radical surgical resection remains the only hope of curing the disease. Early diagnosis with valid establishment of tumor resectability is therefore the most important aim for patients with pancreatic cancer. The aim of this work is to evaluate the contribution and define the role of ¹⁸F-FDG PET/CT in preoperative staging. Material and Methods: In 195 patients (103 males, 92 females, median age 66.7 years, range 32-88 years) with a suspect pancreatic lesion, a hybrid ¹⁸F-FDG PET/CT was performed as part of the standard preoperative staging, in addition to standard examination methods (ultrasonography, contrast spiral CT, endoscopic ultrasonography, endoscopic ultrasonographic biopsy). All PET/CT findings were subsequently compared with standard staging (CT, EUS, EUS FNA), with peroperative findings and definitive histology in the operated patients as reference standards. Interpretation defined the extent of the tumor according to the TNM classification. Limitations of resectability were local advancement (T4) and presence of distant metastases (M1). Results: PET/CT was performed in a total of 195 patients with a suspect pancreatic lesion. In 153 patients, pancreatic carcinoma was confirmed, and of these patients, 72 were not indicated for a radical surgical procedure due to local inoperability or generalization of the disease. The sensitivity of PET/CT in detecting the primary lesion was 92.2% and specificity was 90.5%; a false negative finding was seen in 12 patients and a false positive finding in 4 cases, giving a positive predictive value (PPV) of 97.2% and a negative predictive value (NPV) of 76.0%. In evaluating regional lymph nodes, sensitivity was 51.9%, specificity 58.3%, PPV 58.3%, NPV 51.9%. In detecting distant metastases, PET/CT reached a sensitivity of 82.8%, specificity of 97.8%, PPV of 96.9%, and NPV of 87.0%. PET/CT found distant metastases in 12 patients which were not detected by standard methods. In 15 patients (15.6%) with potentially radically resectable findings, the procedure was contraindicated based on PET/CT findings and the treatment strategy was changed. Conclusion: PET/CT is a highly sensitive and specific method useful in preoperative staging of pancreatic cancer. It improves the selection of patients for radical surgical procedures who can benefit from them and decreases the number of incorrectly indicated operations.
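The reported primary-lesion figures are mutually consistent and can be reproduced from a standard 2×2 confusion table. The counts below are back-calculated from the abstract's rates (153 confirmed carcinomas, 12 false negatives, 4 false positives among 195 patients) purely to illustrate how the four measures relate:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-test measures used throughout the abstract."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among all diseased
        "specificity": tn / (tn + fp),  # true negatives among all healthy
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts inferred from the abstract's rates, for illustration only:
# 153 carcinomas with 12 false negatives -> tp = 141; 42 non-carcinomas
# with 4 false positives -> tn = 38.
m = diagnostic_metrics(tp=141, fp=4, tn=38, fn=12)
# sensitivity 141/153 ≈ 92.2%, specificity 38/42 ≈ 90.5%,
# PPV 141/145 ≈ 97.2%, NPV 38/50 = 76.0% -- matching the reported figures
```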

Keywords: cancer, PET/CT, staging, surgery

Procedia PDF Downloads 239
13770 A Monte Carlo Fuzzy Logistic Regression Framework against Imbalance and Separation

Authors: Georgios Charizanos, Haydar Demirhan, Duygu Icen

Abstract:

Two of the most impactful issues in classical logistic regression are class imbalance and complete separation. These can result in model predictions heavily leaning towards the imbalanced class on the binary response variable or over-fitting issues. Fuzzy methodology offers key solutions for handling these problems. However, most studies propose the transformation of the binary responses into a continuous format limited within [0,1]. This is called the possibilistic approach within fuzzy logistic regression. Following this approach is more aligned with straightforward regression since a logit-link function is not utilized, and fuzzy probabilities are not generated. In contrast, we propose a method of fuzzifying binary response variables that allows for the use of the logit-link function; hence, a probabilistic fuzzy logistic regression model with the Monte Carlo method. The fuzzy probabilities are then classified by selecting a fuzzy threshold. Different combinations of fuzzy and crisp input, output, and coefficients are explored, aiming to understand which of these perform better under different conditions of imbalance and separation. We conduct numerical experiments using both synthetic and real datasets to demonstrate the performance of the fuzzy logistic regression framework against seven crisp machine learning methods. The proposed framework shows better performance irrespective of the degree of imbalance and presence of separation in the data, while the considered machine learning methods are significantly impacted.
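A minimal sketch of the probabilistic idea, assuming a much-simplified Monte Carlo fuzzification scheme (not the authors' exact method) and plain gradient-descent logistic regression on a single feature:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fuzzify(y, n_draws=200, noise=0.1):
    """Monte Carlo fuzzification sketch: perturb each crisp 0/1 label with
    clipped Gaussian noise and average the draws, producing a membership in
    (0, 1) that still works with a logit link."""
    return [
        sum(min(1.0, max(0.0, yi + random.gauss(0, noise))) for _ in range(n_draws)) / n_draws
        for yi in y
    ]

def fit_logistic(x, y_fuzzy, lr=0.1, epochs=2000):
    """Plain full-batch gradient descent on cross-entropy with fuzzy targets."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(x, y_fuzzy):
            e = sigmoid(w * xi + b) - yi
            gw += e * xi / n
            gb += e / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# toy imbalanced data: the rare positive class sits at larger x
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 2.5, 2.8]
y = [0, 0, 0, 0, 0, 0, 0, 1, 1]
w, b = fit_logistic(x, fuzzify(y))
fuzzy_probs = [sigmoid(w * xi + b) for xi in x]  # classify via a chosen threshold
```

Because the fuzzy targets never reach exactly 0 or 1, the fitted coefficients cannot diverge under complete separation, which is one intuition for the robustness the abstract reports.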

Keywords: fuzzy logistic regression, fuzzy, logistic, machine learning

Procedia PDF Downloads 55
13769 Bioremediation of Disposed X-Ray Film for Nanoparticles Production

Authors: Essam A. Makky, Siti H. Mohd Rasdi, J. B. Al-Dabbagh, G. F. Najmuldeen

Abstract:

The synthesis of silver nanoparticles (SNPs) has been extensively studied using chemical and physical methods. Here, biological methods were used, which benefit the research field through very low cost (from waste to wealth) as well as time savings. The study aims to isolate and exploit microbial power in the production of industrially important, nano-sized by-products with high economic value, to extract highly valuable materials from hazardous waste, to quantify nanoparticle size, and to characterize SNPs by X-ray diffraction (XRD) analysis. Disposed X-ray film was used as substrate because X-ray film consumes about 1000 tons of the total silver chemically produced worldwide annually, and this silver is wasted when the films are used and disposed of. Different bacterial isolates were obtained from various sources. Silver was extracted as nanoparticles by microbial degradation of disposed X-ray film as the sole carbon source over a ten-day incubation period in darkness. Protein content was determined, and all samples were analyzed using XRD to characterize the size of the silver (Ag) nanoparticles in the form of silver nitrite. Bacterial isolate CL4C showed an average SNP size of about 19.53 nm, GL7 about 52.35 nm, and JF Outer 2A (PDA) 13.52 nm. All bacterial isolates were partially identified using Gram’s reaction, and the results exhibited that they belong to Bacillus sp.

Keywords: nanotechnology, bioremediation, disposal X-ray film, nanoparticle, waste, XRD

Procedia PDF Downloads 474
13768 A Study on the Safety Evaluation of Pier According to the Water Level Change by the Monte-Carlo Method

Authors: Minho Kwon, Jeonghee Lim, Yeongseok Jeong, Donghoon Shin, Kiyoung Kim

Abstract:

Recently, the global warming phenomenon has led to natural disasters caused by global environmental change, and due to abnormal weather events, the frequency and intensity of heavy rainstorms and typhoons are increasing. It is therefore imperative to prepare for future heavy rainstorms and typhoons. This study selects arbitrary target bridges and performs numerical analysis to evaluate the safety of bridge piers in the event of water level changes. The numerical model is based on two-dimensional surface elements. Actual reinforced concrete was simulated by modeling the concrete to include reinforcements, and a contact boundary model was applied between the ground and the concrete. The water level applied to the piers was considered at 18 levels between 7.5 m and 16.1 m. The elastic modulus, compressive strength, tensile strength, and yield strength of the reinforced concrete were sampled in 250 random combinations, and numerical analysis was carried out for each water level. The analysis results show that the bridge exceeded the stated limit state at 15.0 m. At the maximum water level of 16.1 m, the concrete’s failure rate was 35.2%, while the probability that the reinforcement would fail was 61.2%.
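The Monte-Carlo procedure described (random material-property combinations, failure counting) can be sketched generically. Everything below, including the distributions, the toy capacity model and the demand value, is illustrative and is not the study's data or structural model:

```python
import random

random.seed(1)

def simulate_failure_probability(n_trials=250, demand=30.0e6):
    """Monte Carlo sketch: draw material strengths from assumed normal
    distributions, compare a toy capacity measure with an assumed demand,
    and count failures. All parameters are illustrative placeholders."""
    failures = 0
    for _ in range(n_trials):
        fc = random.gauss(27.0e6, 2.7e6)    # concrete compressive strength, Pa
        fy = random.gauss(400.0e6, 20.0e6)  # reinforcement yield strength, Pa
        capacity = 0.85 * fc + 0.02 * fy    # toy per-unit-area capacity model
        if capacity < demand:
            failures += 1
    return failures / n_trials              # estimated failure probability

p_fail = simulate_failure_probability()
```

In the study itself each trial is a full finite-element run at a given water level rather than a closed-form capacity check; the counting of failures over the 250 random combinations is the same idea.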

Keywords: Monte-Carlo method, pier, water level change, limit state

Procedia PDF Downloads 279
13767 Investigating the Effects of Cylinder Disablement on Diesel Engine Fuel Economy and Exhaust Temperature Management

Authors: Hasan Ustun Basaran

Abstract:

Diesel engines are widely used in the transportation sector due to their high thermal efficiency. However, they also release high rates of NOₓ and PM (particulate matter) emissions into the environment, which have hazardous effects on human health. Therefore, environmental protection agencies have issued strict emission regulations for automotive diesel engines, and these regulations are increasingly being strengthened. Engine producers are pursuing novel on-engine methods such as advanced combustion techniques, utilization of renewable fuels, exhaust gas recirculation and advanced fuel injection, or they use exhaust after-treatment (EAT) systems, in order to reduce emission rates of diesel engines. Although the aforementioned on-engine methods are effective at curbing emission rates, they can result in inefficiency or cannot decrease emission rates satisfactorily at all operating conditions. Therefore, engine manufacturers apply both on-engine techniques and EAT systems to meet the stringent emission norms. EAT systems are highly effective at diminishing emission rates; however, they perform inefficiently at low loads due to low exhaust gas temperatures (below 250°C). Therefore, the objective of this study is to demonstrate that engine-out temperatures can be elevated above 250°C in low-loaded cases via cylinder disablement. The engine studied and modeled via Lotus Engine Simulation (LES) software is a six-cylinder turbocharged and intercooled diesel engine. Exhaust temperatures and mass flow rates are predicted at 1200 rpm engine speed and several low-loaded conditions using the LES program. It is seen that cylinder deactivation results in a considerable exhaust temperature rise (up to 100°C) at low loads, which ensures effective EAT management. The method also improves fuel efficiency through reduced total pumping loss; decreased total air induction due to the inactive cylinders is thought to be responsible for the improved pumping loss. The technique reduces exhaust gas flow rate, as air flow is cut off in the disabled cylinders. Still, heat transfer rates to the after-treatment catalyst bed do not decrease much, since exhaust temperatures are increased sufficiently. Simulation results are promising; however, further experimental studies are needed to identify the true potential of the method for fuel consumption and EAT improvement.

Keywords: cylinder disablement, diesel engines, exhaust after-treatment, exhaust temperature, fuel efficiency

Procedia PDF Downloads 166
13766 TQM Framework Using a Comparative Analysis of Notable Authors

Authors: Redha M. Elhuni

Abstract:

This paper presents an analysis of the essential characteristics of the TQM philosophy by comparing the work of five notable authors in the field. A framework is produced which gathers the identified TQM enablers under the well-known operations management dimensions of process, business and people. These enablers are linked with sustainable development via balanced scorecard-type economic and non-economic measures. In order to capture a picture of Libyan companies’ efforts to implement TQM, a questionnaire survey was designed and implemented. Results of the survey are presented, showing the main differentiating factors between the sample companies and a way of assessing the difference between the theoretical underpinning and the practitioners’ undertakings. Survey results indicate that companies are experiencing much difficulty in translating TQM theory into practice. Only a few companies have successfully adopted a holistic approach to the TQM philosophy, and most of these put relatively high emphasis on hard elements compared with the soft issues of TQM. Moreover, even where companies realize the economic outputs, non-economic benefits such as workflow management, skills development and team learning are not realized. In addition, overall, non-economic measures have secured low weightings compared with the economic measures. We believe that the framework presented in this paper can help a company to concentrate its TQM implementation efforts in terms of the process, system and people management dimensions.

Keywords: TQM, balance scorecard, EFQM excellence model, oil sector, Libya

Procedia PDF Downloads 391
13765 Taking Learning beyond Kirkpatrick’s Levels: Applying Return on Investment Measurement in Training

Authors: Charles L. Sigmund, M. A. Aed, Lissa Graciela Rivera Picado

Abstract:

One critical component of the training development process is the evaluation of the impact and value of the program. Oftentimes, however, learning organizations bypass this phase either because they are unfamiliar with effective methods for measuring the success or effect of the training or because they believe the effort to be too time-consuming or cumbersome. As a result, most organizations that do conduct evaluation limit their scope to Kirkpatrick L1 (reaction) and L2 (learning), or at most carry through to L4 (results). In 2021 Microsoft made a strategic decision to assess the measurable and monetized impact for all training launches and designed a scalable and program-agnostic tool for providing full-scale L5 return on investment (ROI) estimates for each. In producing this measurement tool, the learning and development organization built a framework for making business prioritizations and resource allocations that is based on the projected ROI of a course. The analysis and measurement posed by this process use a combination of training data and operational metrics to calculate the effective net benefit derived from a given training effort. Business experts in the learning field generally consider a 10% ROI to be an outstanding demonstration of the value of a project. Initial findings from this work applied to a critical customer-facing program yielded an estimated ROI of more than 49%. This information directed the organization to make a more concerted and concentrated effort in this specific line of business and resulted in additional investment in the training methods and technologies being used.
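The L5 calculation reduces to a net-benefit ratio. The sketch below is generic Phillips-style ROI arithmetic with invented figures, not Microsoft's actual tool or data; it simply shows how a ~49% result relates monetized benefits to costs:

```python
def training_roi(total_benefit, total_cost):
    """Phillips-style L5 ROI: net program benefit over program cost,
    expressed as a percentage."""
    return (total_benefit - total_cost) / total_cost * 100.0

# Illustrative only: a program whose monetized benefits are 1.49x its
# fully loaded cost yields a 49% ROI, the order of magnitude reported
# for the customer-facing program in the abstract.
roi = training_roi(total_benefit=149_000, total_cost=100_000)  # 49.0
```

Comparing such per-course ROI estimates is what lets an organization prioritize investment across its training portfolio, as the abstract describes.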

Keywords: evaluation, measurement, return on investment, value

Procedia PDF Downloads 175
13764 Giant Achievements in Food Processing

Authors: Farnaz Amidi Fazli

Abstract:

After a long period of human experience with food processing, from eating raw food to the canning of the last century, it is now time to use novel technologies that are sometimes completely different from conventional ones. It is possible to decontaminate food without using heat, or to store foods without a cold chain. Pulsed electric field (PEF) processing is a non-thermal method of food preservation that uses short bursts of electricity; PEF can be used for processing liquid and semi-liquid food products. PEF processing offers high-quality, fresh-like liquid foods with excellent flavor, nutritional value, and shelf life. High pressure processing (HPP) technology has the potential to fulfill both consumer and scientific requirements. HPP has found applications in non-food industries for over 50 years; for food applications, ‘high pressure’ can generally be considered to be up to 600 MPa for most food products. Freezing also retains high potential for food preservation thanks to new, quick freezing methods. Foods prepared by this technology have greater acceptability and higher quality compared with old-fashioned slow freezing. Quick freezing has thus been adopted as a widespread commercial method for long-term preservation of perishable foods, improving both the health and convenience of consumers in industrialised countries. These results are achieved by fluidised-bed freezing systems, freezing by immersion, and hydrofluidisation; on the other hand, new thawing methods such as high-pressure, microwave, ohmic, and acoustic thawing play a key role in the quality and adaptability of the final product.

Keywords: quick freezing, thawing, high pressure, pulse electric, hydrofluidisation

Procedia PDF Downloads 310
13763 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

Model updating methods have received increasing attention for damage detection in structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage, model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm inspired by the movement of ideal gas molecules, which disperse rapidly in all directions and cover the entire available space; this behavior arises from the high speed of the molecules and their collisions with one another and with the surrounding barriers. In the IGMM algorithm, to reach optimal solutions, an initial population of gas molecules is randomly generated, and the governing equations for the velocities of the gas molecules and the collisions between them are utilized. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in PBDD of structures.
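The flavor of the enhancement (dropping variables that stop changing) can be sketched with a toy population optimizer. The update rule below is a loose sketch in the spirit of IGMM, not the authors' governing equations; molecules drift toward the current best position with random velocities, and a dimension whose best value has not changed for a set number of iterations is frozen:

```python
import random

random.seed(42)

def eigmm_minimize(objective, bounds, n_molecules=20, iters=200,
                   freeze_after=50, tol=1e-6):
    """Toy EIGMM-style minimizer: molecules move with random velocities
    biased toward the best-known position; variables whose best value is
    unchanged for `freeze_after` iterations are frozen (the 'removes
    unchanged variables' idea). Illustrative only."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_molecules)]
    best = min(pop, key=objective)[:]
    frozen = [False] * dim
    unchanged = [0] * dim
    for _ in range(iters):
        prev_best = best[:]
        for mol in pop:
            for d in range(dim):
                if frozen[d]:
                    mol[d] = best[d]          # frozen variable: pin to best
                    continue
                lo, hi = bounds[d]
                vel = random.gauss(0, 0.1 * (hi - lo)) + 0.5 * (best[d] - mol[d])
                mol[d] = min(hi, max(lo, mol[d] + vel))
            if objective(mol) < objective(best):
                best = mol[:]
        for d in range(dim):
            unchanged[d] = unchanged[d] + 1 if abs(best[d] - prev_best[d]) < tol else 0
            if unchanged[d] >= freeze_after:
                frozen[d] = True
    return best

sphere = lambda x: sum(v * v for v in x)          # stand-in objective; the paper
best = eigmm_minimize(sphere, [(-5.0, 5.0)] * 3)  # minimizes FE model-updating error
```

In the PBDD setting the objective would be the discrepancy between measured and FE-predicted modal parameters, evaluated repeatedly inside the Monte Carlo loop over noise realizations.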

Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification

Procedia PDF Downloads 268
13762 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis

Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi

Abstract:

Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems, where the correct level of mixing must be achieved for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; the study of unsteady laminar flow, however, is an active area of research at present. Among the wide applications of this, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed a model for unsteady flow to study the mixing performance of a passive micro-mixer for reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM and is tested by carrying out simulations at a Reynolds number (Re) of 0.5. Mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance along a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) polymer and comparing the variances with the simulated ones. Gold nanoparticles are later synthesized in the micro-mixer and collected at two different times, leading to significantly different size distributions. These times match the time scales over which reactant concentrations vary, as obtained from simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
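The variance-based performance measure mentioned above is straightforward to compute from a cross-channel concentration profile. The sketch below assumes a 50/50 feed of the two species and uses a common normalized mixing index (one of several conventions in the literature, not necessarily the exact formula used in this work):

```python
def mixing_index(concentrations):
    """Degree of mixing from a cross-channel concentration line profile:
    1 - normalized standard deviation, so 1.0 is perfectly mixed and
    0.0 is fully segregated (for a 50/50 feed)."""
    n = len(concentrations)
    mean = sum(concentrations) / n
    var = sum((c - mean) ** 2 for c in concentrations) / n
    # for a 50/50 feed the worst-case (segregated) variance is 0.25
    return 1.0 - (var / 0.25) ** 0.5

segregated = [1.0] * 5 + [0.0] * 5  # unmixed: each species on one side
mixed = [0.5] * 10                  # fully mixed: uniform concentration
# mixing_index(segregated) -> 0.0, mixing_index(mixed) -> 1.0
```

Applied to sampled OpenFOAM concentration values (or to pixel intensities from the dye experiments), the same variance calculation gives directly comparable simulated and measured indices.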

Keywords: Lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS

Procedia PDF Downloads 146