Search results for: innovation types
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7065

765 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with the solidification process has become the topic of intensive research with the advent of additive manufacturing, where complex 3-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for their mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’ where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem that can describe the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated with simple examples highlighting its main features. The main idea of the work is that both dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows where interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics.
Crucially, the interfaces are not considered as zero-mass entities introduced via the Gibbsian ‘dividing surface’ but as 2-dimensional surface phases produced by the continuum limit in which the thickness of what is physically an interfacial layer vanishes, and its properties are characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, in which the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting as well as to complex flows where both types of phase transition take place.
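For context, the classical Stefan formulation that the model generalizes can be sketched in its simplest one-phase, one-dimensional form (a standard textbook statement, not taken from the paper itself; here T is temperature, s(t) the front position, α the thermal diffusivity, k the conductivity, ρ the density, L the latent heat, and T_m the phase-change temperature):

```latex
\begin{align}
  \frac{\partial T}{\partial t} &= \alpha\,\frac{\partial^{2} T}{\partial x^{2}},
    \qquad 0 < x < s(t), && \text{(heat conduction in one phase)}\\
  T\bigl(s(t),\,t\bigr) &= T_m, && \text{(front held at the phase-change temperature)}\\
  \rho L\,\frac{\mathrm{d}s}{\mathrm{d}t} &= k\,\left.\frac{\partial T}{\partial x}\right|_{x=s(t)}. && \text{(latent-heat balance driving the front)}
\end{align}
```

The second condition is precisely what the abstract questions: since the front is assumed to sit at T_m from the outset, this formulation cannot describe the onset of solidification or the non-equilibrium regime in which the front temperature only approaches T_m asymptotically.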

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 61
764 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier. Biomass has the potential to accelerate the realization of hydrogen as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or “syngas”, composed chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of the biomass pyro-gasification products consists of “tars”. Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite or biochar are used. The latter process favors the cracking of tars at temperatures close to pyro-gasification temperatures (~850 °C). Pyro-gasification of biomass coupled with the water-gas shift is the most widely practiced process route from biomass to hydrogen today. In this work, an innovative solution is proposed for this conversion route, in that all the pyro-gasification products, not only methane, undergo processes that aim to optimize hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 and H2O as gasifying agents. This process was followed by a catalytic steam methane reforming (SMR) step, for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) reaction step with a Fe-based catalyst was added to optimize the H2 yield from CO. The reactor used for cracking was a fluidized bed reactor, and the one used for SMR and WGS was a fixed bed reactor. The gaseous products were analyzed continuously using a µ-GC (Fusion PN 074-594-P1F).
With biochar as bed material, more H2 was obtained with steam as the gasifying agent (32 mol % vs. 15 mol % with CO2 at 900 °C). CO and CH4 productions were also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were hence deemed efficient parameters for the first step. Among all parameters tested, CH4 conversions approaching 100 % were obtained from SMR reactions using Ni/γ-Al2O3 as catalyst at 800 °C and a steam/methane ratio of 5, giving rise to about 45 mol % H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reactions will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
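The stoichiometry underlying the SMR and WGS steps (standard reactions, not figures specific to this study) sets the ideal H2 ceiling; a minimal sketch, with the fractional conversions as free parameters:

```python
def h2_from_methane(n_ch4, x_smr=1.0, x_wgs=1.0):
    """Ideal H2 yield (mol) from SMR followed by WGS.

    SMR: CH4 + H2O -> CO + 3 H2   (3 mol H2 per mol CH4 converted)
    WGS: CO + H2O  -> CO2 + H2    (1 mol H2 per mol CO converted)
    x_smr and x_wgs are the fractional conversions of CH4 and CO.
    """
    n_ch4_converted = n_ch4 * x_smr
    n_h2 = 3.0 * n_ch4_converted          # from reforming
    n_h2 += n_ch4_converted * x_wgs       # from shifting the produced CO
    return n_h2

# Full conversion recovers the combined stoichiometric ceiling
# CH4 + 2 H2O -> CO2 + 4 H2, i.e. 4 mol H2 per mol CH4:
print(h2_from_methane(1.0))  # 4.0
```

This is why adding the WGS step after SMR matters: it raises the ideal yield from 3 to 4 mol H2 per mol of converted methane.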

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 133
763 The Sr-Nd Isotope Data of the Platreef Rocks from the Northern Limb of the Bushveld Igneous Complex: Evidence of Contrasting Magma Composition and Origin

Authors: Tshipeng Mwenze, Charles Okujeni, Abdi Siad, Russel Bailie, Dirk Frei, Marcelene Voigt, Petrus Le Roux

Abstract:

The Platreef is a platinum group element (PGE) deposit in the northern limb of the Bushveld Igneous Complex (BIC) which was emplaced as a series of mafic and ultramafic sills between the Main Zone (MZ) and the country rocks. The PGE mineralisation in the Platreef is hosted in different rock types, and its distribution and style vary with depth and along strike. This study contributes towards understanding the processes involved in the genesis of the Platreef. Twenty-four Platreef samples (2 harzburgites, 4 olivine pyroxenites, 17 feldspathic pyroxenites and 1 gabbronorite) and a few MZ samples (1 gabbronorite and 1 leucogabbronorite) were collected as quarter cores from four drill cores (TN754, TN200, SS339, and OY482) and analysed for whole-rock Sr-Nd isotope data. The results show positive ɛNd values (+3.53 to +7.51) for the harzburgites, suggesting that their parental magmas were derived from the depleted mantle. The remaining Platreef rocks have negative ɛNd values (-2.91 to -22.88) and show significant variations in Sr-Nd isotopic composition. The first group of Platreef samples has relatively high isotopic compositions (ɛNd = -2.91 to -5.68; ⁸⁷Sr/⁸⁶Sri = 0.709177-0.711998). The second group has Sr ratios (⁸⁷Sr/⁸⁶Sri = 0.709816-0.712106) overlapping with those of the first group but slightly lower ɛNd values (-7.44 to -8.39). Lastly, the third group has lower ɛNd values (-10.82 to -14.32) and lower Sr ratios (⁸⁷Sr/⁸⁶Sri = 0.707545-0.710042) than the samples of the two groups mentioned above. There is, however, one Platreef sample with an ɛNd value (-5.26) in the range of the first group whose Sr ratio (0.707281) is the lowest of all, even compared to the third group. Five other Platreef samples have either anomalous ɛNd or Sr ratios, which makes it difficult to assess their isotopic compositions relative to the other samples.
These isotopic variations indicate both multiple sources and multiple magma chambers in which varying styles of crustal contamination operated during the evolution of these magmas prior to their emplacement as sills in the Platreef setting. Furthermore, the MZ rocks have different Sr-Nd isotopic compositions (for the OY482 gabbronorite, ɛNd = +0.65 and ⁸⁷Sr/⁸⁶Sri = 0.711746; for the TN754 leucogabbronorite, ɛNd = -7.44 and ⁸⁷Sr/⁸⁶Sri = 0.709322), which indicate not only different MZ magma chambers but also magmas different from those of the Platreef. Although the Platreef is still considered a single stratigraphic unit in the northern limb of the BIC, its genesis involved multiple magmatic processes which evolved independently of each other.
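The ɛNd notation used above is the standard deviation-from-CHUR measure in parts per 10⁴; a minimal sketch of the present-day form (the CHUR ratio below is the commonly used literature value, an assumption here; the study's age-corrected values would additionally require Sm/Nd ratios and a decay correction, omitted in this sketch):

```python
CHUR_143ND_144ND = 0.512638   # commonly used present-day CHUR ratio (assumed value)

def epsilon_nd(sample_143nd_144nd, chur=CHUR_143ND_144ND):
    """Epsilon-Nd: deviation of a sample's 143Nd/144Nd ratio from the
    chondritic uniform reservoir (CHUR), expressed in parts per 10,000."""
    return (sample_143nd_144nd / chur - 1.0) * 1.0e4

# Ratios above CHUR give positive (depleted-mantle-like) values;
# ratios below give the negative, crust-influenced values reported above:
print(round(epsilon_nd(0.512900), 2), round(epsilon_nd(0.511900), 2))
```

The positive harzburgite values and strongly negative values of the other groups thus map directly onto ratios above and below the chondritic reference.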

Keywords: crustal contamination styles, magma chambers, magma sources, multiple sills emplacement

Procedia PDF Downloads 164
762 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and on detecting perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control the process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location, respectively, under the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data will be normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g., occasional outliers in the rational subgroups of the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, estimators that are robust against the contaminations that may exist in Phase I are required. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale, and the M-estimator of scale with logistic psi-function for estimating the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator, and the M-estimator of location with Huber and logistic psi-functions for estimating the process location parameter.
The Phase I efficiency of the proposed estimators and the Phase II performance of the Xbar charts constructed from them are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. It is found that the robust estimators yield parameter estimates with higher efficiency against all types of contamination, and that Xbar charts constructed using robust estimators have higher power in detecting disturbances than conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
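As an illustration of the idea (not the authors' MATLAB implementation), a minimal Python sketch contrasts conventional Xbar limits with a robust variant built on the average distance to the median, one of the dispersion estimators named above; the rescaling constant assumes normal in-control data, and small-sample bias constants are omitted:

```python
import numpy as np

def xbar_limits_conventional(subgroups, L=3.0):
    """Xbar limits from the grand mean and the average within-subgroup
    sample standard deviation (bias constants omitted in this sketch)."""
    sub = np.asarray(subgroups, dtype=float)   # shape (m subgroups, n obs)
    n = sub.shape[1]
    center = sub.mean()
    sigma = sub.std(axis=1, ddof=1).mean()
    half = L * sigma / np.sqrt(n)
    return center - half, center, center + half

def xbar_limits_robust(subgroups, L=3.0):
    """Robust variant: median of subgroup medians for location, and the
    median across subgroups of the average distance to the subgroup median
    (ADM) for scale, rescaled assuming normal in-control data."""
    sub = np.asarray(subgroups, dtype=float)
    n = sub.shape[1]
    med = np.median(sub, axis=1)
    center = np.median(med)
    adm = np.mean(np.abs(sub - med[:, None]), axis=1)   # one ADM per subgroup
    sigma = np.median(adm) / np.sqrt(2.0 / np.pi)       # E|X - mu| = sigma*sqrt(2/pi)
    half = L * sigma / np.sqrt(n)
    return center - half, center, center + half

rng = np.random.default_rng(0)
phase1 = rng.normal(10.0, 1.0, size=(30, 5))
phase1[0, 0] = 40.0                                     # one gross Phase I outlier
lo_c, _, hi_c = xbar_limits_conventional(phase1)
lo_r, _, hi_r = xbar_limits_robust(phase1)
print(hi_c - lo_c > hi_r - lo_r)                        # outlier inflates conventional limits
```

The single outlier widens the conventional limits (delaying Phase II detection), while the median-based limits stay close to those of clean data, which is the effect the abstract quantifies with average run length.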

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 186
761 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis

Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik

Abstract:

Viral and bacterial infections are responsible for a variety of diseases. These infections share similar symptoms, such as fever, sneezing, inflammation, vomiting, diarrhea and fatigue; thus, physicians may find it difficult to distinguish between viral and bacterial infections on the basis of symptoms alone. Bacterial infections differ from viral infections in many other important respects, including the response to various medications and the structure of the organisms. In many cases, the origin of the infection is unknown, and when necessary the physician orders a blood test, urine test, or tissue culture to diagnose the infection type. With these methods, the time that elapses between the receipt of patient material and the presentation of the test results to the clinician is typically too long (> 24 hours). This time is crucial in many cases for saving the life of the patient and for planning the right medical treatment. Thus, rapid laboratory identification of bacterial and viral infections is of great importance for effective treatment, especially in emergencies. Blood was collected from 50 patients with confirmed viral infection and 50 with confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried, and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infection. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for identifying the agent causing a human infection as either bacterial or viral, based on an analysis of the blood components [white blood cells (WBCs) and plasma] using their infrared vibrational spectra.
The time required for the analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between the groups of samples with viral and bacterial infections. By employing feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92% and a specificity of ~86% for infection type diagnosis were achieved. This preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for diagnosing the infection type.
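The classification step can be illustrated with a minimal two-class Fisher LDA on synthetic "band intensity" data (a self-contained sketch; the class shift, band count, and midpoint threshold rule are illustrative assumptions, not the study's actual spectra or pipeline):

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Two-class Fisher discriminant: w maximizes between-class separation
    relative to within-class scatter; threshold at the projected midpoint."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))   # pooled scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m0)
    threshold = 0.5 * (m0 + m1) @ w
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)   # 1 = "bacterial-like" class

rng = np.random.default_rng(1)
viral = rng.normal(0.0, 1.0, size=(50, 6))       # toy stand-in for band intensities
bacterial = rng.normal(0.8, 1.0, size=(50, 6))   # small systematic spectral shift
w, t = fisher_lda_fit(viral, bacterial)
sensitivity = fisher_lda_predict(bacterial, w, t).mean()
specificity = 1.0 - fisher_lda_predict(viral, w, t).mean()
print(sensitivity > 0.5 and specificity > 0.5)
```

Exactly as in the abstract, performance is then reported as sensitivity (bacterial samples correctly flagged) and specificity (viral samples correctly cleared).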

Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy

Procedia PDF Downloads 221
760 Anatomical and Histochemical Investigation of the Leaf of Vitex agnus-castus L.

Authors: S. Mamoucha, J. Rahul, N. Christodoulakis

Abstract:

Introduction: Nature has been the source of medicinal agents since the dawn of human existence. Currently, millions of people in the developing world rely on medicinal plants for primary health care, income generation and lifespan improvement. In Greece, more than 5500 plant taxa are reported, about 250 of which are considered to be of great pharmaceutical importance. Among the plants used for medical purposes, Vitex agnus-castus L. (Verbenaceae) has been known since ancient times. It is a small tree or shrub, widely distributed from the Mediterranean basin to Central Asia, also known as the chaste tree or monk's pepper. Theophrastus mentioned the shrub several times, as ‘agnos’, in his ‘Enquiry into Plants’. Dioscorides mentioned the use of V. agnus-castus for the stimulation of lactation in nursing mothers and the treatment of several female disorders. The plant has important medicinal properties and a long tradition in folk medicine as an antimicrobial, diuretic, digestive and insecticidal agent. Materials and methods: Leaves were cleaned, detached, fixed, sectioned and investigated with light and scanning electron microscopy (SEM). Histochemical tests were performed as well: specific histochemical reagents (osmium tetroxide, H2SO4, vanillin/HCl, antimony trichloride, Wagner's reagent, Dittmar's reagent, potassium dichromate, nitroso reaction, ferric chloride and dimethoxybenzaldehyde) were used for the subcellular localization of secondary metabolites. Results: Light microscopy of the elongated leaves of V. agnus-castus revealed three layers of palisade parenchyma just below the single-layered adaxial epidermis. The spongy parenchyma is rather loose. Adaxial epidermal cells are larger than those of the abaxial epidermis. Four different types of capitate secretory trichomes were localized among the abaxial epidermal cells, where stomata were also observed.
SEM revealed the interesting arrangement of the trichomes. Histochemical treatment of fresh and plastic-embedded tissue sections revealed the nature and sites of secondary metabolite accumulation (flavonoids, steroids, terpenes). Acknowledgment: This work was supported by IKY - State Scholarship Foundation, Athens, Greece.

Keywords: Vitex agnus-castus, leaf anatomy, histochemical reagents, secondary metabolites

Procedia PDF Downloads 382
759 Responses of Grain Yield, Anthocyanin and Antioxidant Capacity to Water Condition in Wetland and Upland Purple Rice Genotypes

Authors: Supaporn Yamuangmorn, Chanakan Prom-U-Thai

Abstract:

Wetland and upland purple rice are the two major types classified by their original ecotypes in Northern Thailand. Wetland rice is grown under flooded conditions from transplanting until maturity, while upland rice is naturally grown in well-drained soil, known as aerobic cultivation. Both ecotypes can be grown in, and adapted to, the reverse systems, but little is known about how grain yield and quality respond in the two ecotypes. This study evaluated the responses of grain yield, anthocyanin and antioxidant capacity of wetland and upland purple rice genotypes grown under submerged and aerobic conditions. A factorial arrangement in a randomized complete block design (RCBD), with rice genotype and water condition as the two factors, was carried out in three replications. Two wetland genotypes (Kum Doi Saket: KDK and Kum Phayao: KPY) and two upland genotypes (Kum Hom CMU: KHCMU and Pieisu1: PES1) were grown under submerged and aerobic conditions. Grain yield was affected by the interaction between water condition and rice genotype. The wetland genotypes KDK and KPY grown in the submerged condition produced about 2.7 and 0.8 times higher yield, respectively, than in the aerobic condition. The upland genotype PES1 produced about 0.4 times higher grain yield in the submerged condition than in the aerobic condition, while no significant difference was found for KHCMU. In the submerged condition, all genotypes produced higher tiller number, panicle number and percent filled grain than in the aerobic condition, by 24%, 32% and 11%, respectively. The thousand grain weight and spikelet number were affected by water condition differently among genotypes. The wetland genotypes KDK and KPY and the upland genotype PES1 grown in the submerged condition produced about 19-22% higher grain weight than in the aerobic condition.
A similar effect was found for spikelet number: in the submerged condition, the wetland genotypes KDK and KPY and the upland genotype KHCMU had about 28-30% higher values than in the aerobic condition. In contrast, anthocyanin concentration and antioxidant capacity were affected by both water condition and genotype. Grain grown in the aerobic condition had about 0.9 and 2.6 times higher anthocyanin concentration than grain grown in the submerged condition in the wetland genotype KDK and the upland genotype KHCMU, respectively. Similarly, the antioxidant capacity of KDK and KHCMU was 0.5 and 0.6 times higher in the aerobic condition than in the submerged condition. There was a negative correlation between grain yield and anthocyanin concentration in the wetland genotype KDK and the upland genotype KHCMU, but not in the other genotypes. This study indicates that some rice genotypes can adapt to the reverse ecosystem in terms of both grain yield and quality, especially the wetland genotype KPY and the upland genotype PES1. To maximize the grain yield and quality of purple rice, proper water management is required, with a key consideration being the different responses among genotypes. A larger number of rice genotypes of both ecotypes is needed to confirm their responses to water management.

Keywords: purple rice, water condition, anthocyanin, grain yield

Procedia PDF Downloads 157
758 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions

Authors: Mohamed Ahmed M. Azab

Abstract:

The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several countries of the Arabian Gulf. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of small autonomous water desalination units powered by photovoltaic modules, as a green renewable energy resource, to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study because its Renewable Energy Center is equipped with all the sensors needed to assess the availability of solar energy throughout the year. The study included two types of available water: brackish well water and seawater from coastal regions. In the case of well water, two versions of the desalination unit are involved in the study: the first version operates during the day only, while the second version also operates at night, which requires an energy storage system (batteries) to provide the necessary electric power. According to the results of the feasibility study, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water, while in the case of seawater the capital costs are extremely high and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided.
In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for the batteries and extra running cost for their replacement, which makes its unit water price uncompetitive not only with the day-only unit but also with conventional units powered by diesel generators (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price of the produced water per cubic meter of the day-night unit becomes similar to that of the day-only unit provided that the day-night unit operates for a 50% longer period.
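The day-only versus day-night comparison comes down to a simple unit-cost calculation; a minimal undiscounted sketch, in which every figure (capex, opex, battery cost and life, production volumes) is an illustrative placeholder rather than the study's data:

```python
def water_unit_cost(capex, annual_opex, annual_m3, years,
                    battery_capex=0.0, battery_life_years=5):
    """Simple undiscounted unit cost of produced water, currency per m^3.
    Batteries are replaced every battery_life_years over the unit lifetime."""
    battery_sets = years / battery_life_years
    total_cost = capex + battery_capex * battery_sets + annual_opex * years
    return total_cost / (annual_m3 * years)

# Day-only PV unit vs. a day-night unit whose batteries let it produce
# 50% more water per year (all numbers are illustrative placeholders):
day_only = water_unit_cost(capex=20000, annual_opex=800,
                           annual_m3=1500, years=20)
day_night = water_unit_cost(capex=20000, annual_opex=800,
                            annual_m3=2250, years=20, battery_capex=6000)
print(round(day_only, 2), round(day_night, 2))
```

Even with 50% more water produced, the recurring battery replacements keep the day-night cost per cubic meter from dropping below the day-only figure in this toy setting, which mirrors the trade-off the study describes.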

Keywords: solar energy, water desalination, reverse osmosis, arid regions

Procedia PDF Downloads 444
757 Dry Modifications of PCL/Chitosan/PCL Tissue Scaffolds

Authors: Ozan Ozkan, Hilal Turkoglu Sasmazel

Abstract:

Natural polymers are widely used in tissue engineering applications because of their biocompatibility, biodegradability and solubility in the physiological medium. Synthetic polymers are also widely utilized in tissue engineering applications because they carry no risk of infectious disease and do not cause immune system reactions. However, the disadvantages of both polymer types prevent their efficient individual use as tissue scaffolds. Therefore, the idea of using natural and synthetic polymers together as a single 3D hybrid scaffold, which has the advantages of both and the disadvantages of neither, has entered the literature. Even though such hybrid structures support cell adhesion and/or proliferation, various surface modification techniques are applied to their surfaces to create topographical changes and to obtain the reactive functional groups required for the immobilization of biomolecules, especially on the surfaces of the synthetic polymers, in order to improve cell adhesion and proliferation. The study presented here aimed to improve the surface functionality and topography of layer-by-layer electrospun 3D poly-epsilon-caprolactone/chitosan/poly-epsilon-caprolactone hybrid tissue scaffolds using the atmospheric pressure plasma method, and thus to improve the cell adhesion and proliferation of these tissue scaffolds. Functional hydroxyl and amine groups and topographical changes were created on the scaffold surfaces using two different atmospheric pressure plasma systems (nozzle type and dielectric barrier discharge (DBD) type) operated under different gas media (air, Ar+O2, Ar+N2). The plasma modification time and distance for the nozzle type plasma system, as well as the plasma modification time and gas flow rate for the DBD type system, were optimized by monitoring the changes in surface hydrophilicity with contact angle measurements.
The topographical and chemical characterizations of the modified biomaterial surfaces were carried out with SEM and ESCA, respectively. The results showed that the atmospheric pressure plasma modifications carried out with both the nozzle type and DBD plasmas caused topographical and functionality changes on the surfaces of the layer-by-layer electrospun tissue scaffolds. However, shelf-life studies indicated that the hydrophilicity introduced to the surfaces was mainly due to the functionality changes. According to the optimized results, samples treated with nozzle type air plasma for 9 minutes from a distance of 17 cm and with Ar+O2 DBD plasma for 1 minute under a 70 cm3/min O2 flow rate showed the highest hydrophilicity compared to pristine samples.

Keywords: biomaterial, chitosan, hybrid, plasma

Procedia PDF Downloads 273
756 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and must therefore be precisely identified. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods such as optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speed.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
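As a concrete, deliberately simple example of the automated-vision end of this spectrum, a golden-reference comparison flags pixels that deviate strongly from a defect-free image; the images, noise levels, and threshold below are synthetic assumptions for illustration, not drawn from the review:

```python
import numpy as np

def detect_defects(test_img, reference_img, k=5.0):
    """Golden-reference comparison: flag pixels whose difference from the
    defect-free reference exceeds k robust standard deviations."""
    diff = test_img.astype(float) - reference_img.astype(float)
    center = np.median(diff)
    mad = np.median(np.abs(diff - center))        # median absolute deviation
    sigma = 1.4826 * mad + 1e-12                  # MAD -> std under normality
    return np.abs(diff - center) > k * sigma

rng = np.random.default_rng(2)
reference = rng.normal(100.0, 2.0, size=(64, 64))       # defect-free "die image"
test = reference + rng.normal(0.0, 0.5, size=(64, 64))  # normal process noise
test[10:13, 20:23] += 40.0                              # implanted 3x3 "particle"
mask = detect_defects(test, reference)
print(int(mask.sum()))
```

Rule-based comparisons like this are the baseline that the AI/ML approaches surveyed in the review aim to surpass, particularly for subtle defects that do not stand out against a fixed reference.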

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 41
755 Smart Interior Design: A Revolution in Modern Living

Authors: Fatemeh Modirzare

Abstract:

Smart interior design represents a transformative approach to creating living spaces that integrate technology seamlessly into our daily lives, enhancing comfort, convenience, and sustainability. This paper explores the concept of smart interior design, its principles, benefits, challenges, and future prospects. It also highlights various examples and applications of smart interior design to illustrate its potential in shaping the way we live and interact with our surroundings. In an increasingly digitized world, the boundaries between technology and interior design are blurring. Smart interior design, also known as intelligent or connected interior design, involves the incorporation of advanced technologies and automation systems into residential and commercial spaces. This innovative approach aims to make living environments more efficient, comfortable, and adaptable while promoting sustainability and user well-being. Smart interior design seamlessly integrates technology into the aesthetics and functionality of a space, ensuring that devices and systems do not disrupt the overall design. Sustainable materials, energy-efficient systems, and eco-friendly practices are central to smart interior design, reducing environmental impact. Spaces are designed to be adaptable, allowing for reconfiguration to suit changing needs and preferences. Smart homes and spaces offer greater comfort through features like automated climate control, adjustable lighting, and customizable ambiance. Smart interior design can significantly reduce energy consumption through optimized heating, cooling, and lighting systems. Smart interior design integrates security systems, fire detection, and emergency response mechanisms for enhanced safety. Sustainable materials, energy-efficient appliances, and waste reduction practices contribute to a greener living environment. Implementing smart interior design can be expensive, particularly when retrofitting existing spaces with smart technologies. 
The increased connectivity raises concerns about data privacy and cybersecurity, requiring robust measures to protect user information. Rapid advancements in technology may lead to obsolescence, necessitating updates and replacements. Users must be familiar with smart systems to fully benefit from them, requiring education and ongoing support. Residential spaces incorporate features like voice-activated assistants, automated lighting, and energy management systems. Intelligent office design enhances productivity and employee well-being through smart lighting, climate control, and meeting room booking systems. Hospitals and healthcare facilities use smart interior design for patient monitoring, wayfinding, and energy conservation. Smart retail design includes interactive displays, personalized shopping experiences, and inventory management systems. The future of smart interior design holds exciting possibilities, including AI-powered design tools that create personalized spaces based on user preferences. Smart interior design will increasingly prioritize factors that improve physical and mental health, such as air quality monitoring and mood-enhancing lighting. Smart interior design is revolutionizing the way we interact with our living and working spaces. By embracing technology, sustainability, and user-centric design principles, smart interior design offers numerous benefits, from increased comfort and convenience to energy efficiency and sustainability. Despite challenges, the future holds tremendous potential for further innovation in this field, promising a more connected, efficient, and harmonious way of living and working.

Keywords: smart interior design, home automation, sustainable living spaces, technological integration, user-centric design

Procedia PDF Downloads 66
754 Transverse Behavior of a Frictional Flat Belt Driven by a Tapered Pulley: Change of Transverse Force under Driving State

Authors: Satoko Fujiwara, Kiyotaka Obunai, Kazuya Okubo

Abstract:

A skew is one of the important problems in designing conveyors and transmissions with frictional flat belts, in which the running belt deviates in the width direction due to the transverse force applied to the belt. The skew often not only degrades the stability of the belt path but also causes damage to the belt and auxiliary machines. However, transverse behavior such as skew has not been discussed quantitatively in detail for frictional belts. The objective of this study is to clarify the transverse behavior of a frictional flat belt driven by a tapered pulley. A commercially available rubber flat belt reinforced with polyamide film was prepared as the test belt, with a thickness of 1.25 mm and a length of 630 mm. The test belt was driven between two pulleys made of aluminum alloy, with a diameter of 50 mm and an inter-axial length of 150 mm. Several tapered pulleys were applied, with taper angles of 0 deg (for comparison), 2 deg, 4 deg, and 6 deg. In order to investigate the transverse behavior, the transverse force applied to the belt was measured while the skew was constrained under the driving state. The transverse force was measured by a load cell with free rollers contacting the side surface of the belt while displacement in the belt width direction was constrained. The in-plane bending stiffness of the belt was varied by preparing three types of belts with widths of 20, 30, and 40 mm. The contributions of the in-plane bending stiffness of the belt and the initial inter-axial force to the transverse force were discussed in experiments, where the inter-axial force was also changed by adjusting the distance (about 240 mm) between the two pulleys. The influence of the observed in-plane bending stiffness of the belt and the initial inter-axial force on the transverse force was thus investigated. 
The experimental results showed that the transverse force increased with an increase in the observed in-plane bending stiffness of the belt and in the initial inter-axial force. The transverse force acting on the belt running on the tapered pulley was classified into several components: the force applied by the deflection of the inter-axial force according to the change of taper angle, the resultant force of the bending moment applied to the belt winding around the tapered pulley, and the reaction force due to shearing deformation. When these components were formulated, the calculated transverse force almost agreed with the experimental data. The largest contribution came from the shearing deformation, regardless of the test conditions. This study found that the transverse behavior of a frictional flat belt driven by a tapered pulley is explained by the summation of these force components.
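The force decomposition described above can be written compactly; the symbols below are an illustrative sketch, not the authors' own notation:

```latex
F_t \;=\; F_{\theta} \;+\; F_{b} \;+\; F_{s}
```

where \(F_{\theta}\) is the component due to the deflection of the inter-axial force with the taper angle, \(F_{b}\) is the resultant of the bending moment on the belt winding around the tapered pulley, and \(F_{s}\) is the reaction due to shearing deformation, reported as the dominant term under all test conditions.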

Keywords: skew, frictional flat belt, transverse force, tapered pulley

Procedia PDF Downloads 145
753 Cartographic Depiction and Visualization of Wetland Changes in the North-Western States of India

Authors: Bansal Ashwani

Abstract:

Cartographic depiction and visualization of wetland changes is an important tool to map spatio-temporal information about wetland dynamics effectively and to comprehend the response of these water bodies in maintaining the groundwater and the surrounding ecosystem. This is true for the states of north-western India, i.e., J&K, Himachal, Punjab, and Haryana, which are endowed with several natural wetlands in the flood plains or on the courses of their rivers. Thus, the present study documents, analyses and reconstructs the lost wetlands which existed in the flood plains of the major river basins of these states, i.e., Chenab, Jhelum, Satluj, Beas, Ravi, and Ghagar, at the beginning of the 20th century. To achieve this objective, the study has used multi-temporal datasets since the 1960s based on high to medium resolution satellite imagery, e.g., Corona (1960s/70s), Landsat (1990s-2017) and Sentinel (2017). The Sentinel (2017) satellite image has been used for making the wetland inventory owing to its comparatively higher spatial resolution with multi-spectral bands. In addition, historical records, repeated photographs, historical maps and field observations, including geomorphological evidence, were also used. The water index techniques, i.e., band ratioing, the normalized difference water index (NDWI) and the modified NDWI (MNDWI), have been compared and used to map the wetlands. The wetland types found in the north-western states have been categorized under the 19 classes suggested by the Space Application Centre, India. These enable the researchers to produce a wetland inventory and a series of cartographic representations, including overlays of multiple temporal wetland extent vectors. A preliminary result shows a general state of wetland shrinkage since the 1960s, with the area shrinkage rate varying from one wetland to another. In addition, it is observed that the majority of wetlands have not been documented so far and do not even have names. 
Moreover, the purpose is to draw attention to their loss, in addition to establishing a baseline dataset that can be a tool for wetland planning and management. Finally, the applicability of cartographic depiction and visualization, historical map sources, repeated photographs and remote sensing data for the reconstruction of long-term wetland fluctuations, especially in the northern part of India, will be addressed.
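The water-index step can be sketched as follows; the function names, reflectance values, and the zero threshold are illustrative assumptions for this sketch, not the study's exact processing chain:

```python
import numpy as np

def ndwi(green, nir):
    # McFeeters NDWI: water pixels tend toward positive values
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    # modified NDWI: SWIR in place of NIR suppresses built-up/soil noise
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (green - swir) / (green + swir)

# toy 2x2 reflectance patches (illustrative values, not real Sentinel data)
green = np.array([[0.30, 0.10], [0.25, 0.08]])
nir = np.array([[0.05, 0.40], [0.06, 0.35]])
water_mask = ndwi(green, nir) > 0.0  # simple threshold at zero
```

In practice the indices would be computed per scene and per date, and the resulting masks overlaid to build the temporal wetland-extent vectors described above.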

Keywords: cartographic depiction and visualization, wetland changes, NDWI/MNDWI, geomorphological evidence and remote sensing

Procedia PDF Downloads 260
752 Human Resource Management Practices and Employee Retention in Public Higher Learning Institutions in the Maldives

Authors: Shaheeb Abdul Azeez, Siong-Choy Chong

Abstract:

Background: Talent retention is increasingly becoming a major challenge for many industries due to high turnover rates. Public higher learning institutions in the Maldives face a similar situation with the turnover of their employees. This paper identifies whether Human Resource Management (HRM) practices have any impact on employee retention in public higher learning institutions in the Maldives. Purpose: This paper aims to identify the influence of HRM practices on employee retention in public higher learning institutions in the Maldives. A total of 15 variables were used in this study: 11 HRM practices as independent variables (leadership, rewards, salary, employee participation, compensation, training and development, career development, recognition, appraisal system and supervisor support); job satisfaction and motivation as mediating variables; demographic profile as a moderating variable; and employee retention as the dependent variable. Design/Methodology/Approach: A structured self-administered questionnaire was used for data collection. A total of 300 respondents were selected as the study sample, representing academic and administrative staff from public higher learning institutions, using a stratified random sampling method. AMOS was used to test the hypotheses constructed. Findings: The results suggest that there is no direct effect between the independent variables and the dependent variable. The study also concludes that the demographic profile has no moderating effect between the independent and dependent variables. However, the mediating effects of job satisfaction and motivation in the relationship between HRM practices and employee retention were significant. Salary had a significant influence on job satisfaction, whilst both compensation and recognition had a significant influence on motivation. Job satisfaction and motivation were also found to significantly influence employee retention. 
Research Limitations: The study involves many variables, making the questionnaire time-consuming for respondents to answer. The study is focused only on public higher learning institutions in the Maldives because private sector higher learning institutions did not participate. Therefore, the researcher is unable to identify the actual situation of the entire higher learning industry in the Maldives. Originality/Value: To our best knowledge, no study has been conducted using the same framework throughout the world. This is the initial study conducted in the Maldives in this area and can be used as a baseline for future research. A few studies have been conducted on related subjects throughout the world; some concluded with positive findings, while others reported negative findings, and they used 4 to 7 HRM practices in their frameworks.
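The mediation logic the study tests (HRM practice → job satisfaction/motivation → retention) can be sketched on synthetic data; the variable names, effect sizes, and plain least-squares estimator below are illustrative assumptions, not the study's AMOS structural model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300  # same sample size as the study; the data themselves are synthetic

# hypothetical data: salary -> job satisfaction (path a) -> retention (path b)
salary = rng.normal(size=n)
satisfaction = 0.6 * salary + rng.normal(scale=0.5, size=n)
retention = 0.5 * satisfaction + rng.normal(scale=0.5, size=n)  # no direct path

def ols(y, *regressors):
    # ordinary least squares with an intercept column
    X = np.column_stack((np.ones(len(y)),) + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(satisfaction, salary)[1]               # practice -> mediator
coef = ols(retention, satisfaction, salary)
b, c_prime = coef[1], coef[2]                  # mediator -> outcome; direct effect
indirect = a * b                               # the mediated (indirect) effect
```

With these synthetic effect sizes the direct effect `c_prime` stays near zero while the indirect effect `a * b` does not, mirroring the pattern the study reports: mediation through satisfaction and motivation without a significant direct effect.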

Keywords: human resource management practices, employee retention, motivation, job satisfaction

Procedia PDF Downloads 154
751 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction is obtained with much more accuracy for a single component than for assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. 
The first one is a two DOFs spring-mass-damper system, and the second example takes into account a full aircraft FE Model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE Model can be considered as acting linearly and the nonlinear behavior is restricted to few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
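The "updating" idea can be sketched on a two-DOF system; the parameter values, the cubic-spring location, and the first-harmonic describing-function approximation below are assumptions of this sketch, not the authors' exact formulation:

```python
import numpy as np

# Two-DOF spring-mass-damper with a single local cubic spring at DOF 2.
# All parameter values are illustrative, not taken from the paper.
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
C = np.array([[4.0, -2.0], [-2.0, 2.0]])
k3 = 1.0e5                       # cubic stiffness coefficient at DOF 2 (assumed)
F = np.array([1.0, 0.0])         # unit harmonic force at DOF 1

def linear_response(w):
    # standard linear FRF evaluation at circular frequency w
    return np.linalg.solve(-w**2 * M + 1j * w * C + K, F)

def nonlinear_response(w, iters=200):
    # NL-SDMM-style update: start from the linear FRF, then repeatedly
    # "modify" the stiffness with a first-harmonic describing function of
    # the cubic spring until the response is self-consistent.
    x = linear_response(w)
    for _ in range(iters):
        k_eq = 0.75 * k3 * abs(x[1])**2     # describing function of k3*x^3
        Knl = K.copy()
        Knl[1, 1] += k_eq                   # localized stiffness modification
        x_new = np.linalg.solve(-w**2 * M + 1j * w * C + Knl, F)
        x = 0.5 * x + 0.5 * x_new           # relaxation for convergence
    return x
```

At a forcing frequency just below the first linear resonance, the hardening spring detunes the system, so the updated amplitude falls below the linear one. Only the stiffness entry at the nonlinear DOF is ever modified, which is what keeps the approach cheap when the rest of the model acts linearly.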

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 264
750 The Role of Glyceryl Trinitrate (GTN) in 99mTc-HIDA with Morphine Provocation Scan for the Investigation of Type III Sphincter of Oddi Dysfunction (SOD)

Authors: Ibrahim M Hassan, Lorna Que, Michael Rutland

Abstract:

Type I SOD is usually diagnosed by anatomical imaging such as ultrasound, CT and MRCP. However, types II and III SOD yield negative results despite the presence of significant symptoms. In particular, type III is difficult to diagnose due to the absence of significant biochemical or anatomical abnormalities. Nuclear medicine can aid in this diagnostic dilemma by demonstrating functional changes in bile flow. Low-dose morphine (0.04 mg/kg) stimulates the tone of the sphincter of Oddi (SO), and its usefulness in diagnosing SOD has been shown by the delay in bile flow it causes compared to a baseline scan without morphine provocation. This work expands on that process by using sublingual GTN at 60 minutes post tracer and morphine injection to relax the SO and induce an improvement in bile outflow, and in some cases to give immediate relief of morphine-induced abdominal pain. The criteria for a positive SOD study are as follows: during the first hour of the morphine provocation, (1) delayed intrahepatic biliary duct tracer accumulation, (2) delayed appearance but persistent retention of activity in the common bile duct, and (3) delayed bile flow into the duodenum. In addition, patients who required GTN within the first hour to relieve abdominal pain were regarded as highly supportive of the diagnosis. A retrospective analysis was performed of 85 patients (78F and 6M) referred for suspected type III SOD who had been intensively investigated because of recurrent right upper quadrant or abdominal pain post cholecystectomy. A 99mTc-HIDA scan with morphine provocation was performed, followed by GTN at 60 minutes post tracer injection, after which a further thirty minutes of dynamic imaging were acquired. 30 patients were negative. 55 patients were regarded as positive for SOD, and 38/55 (60%) of these patients with an abnormal result were further evaluated with a baseline 99mTc-HIDA. As expected, all 38 patients showed better bile flow characteristics than during the morphine provocation. 
20/55 (36%) of the patients were treated by ERCP sphincterotomy, and the rest were managed conservatively by medical therapy. In all cases regarded as positive for SOD, sublingual GTN at 60 minutes showed immediate improvement in bile flow. 11/55 (20%) who developed severe post-morphine abdominal pain were relieved by GTN almost instantaneously. We propose that GTN is a useful agent in the diagnosis of SOD when performing a 99mTc-HIDA scan, and that a satisfactory response to sublingual GTN could offer additional information in patients who have severe pain at the time of the procedure or when presenting to the emergency unit because of biliary pain, as well as in determining whether a trial of medical therapy may be used before considering surgery.

Keywords: GTN, HIDA, morphine, SOD

Procedia PDF Downloads 300
749 Family-School-Community Engagement: Building a Growth Mindset

Authors: Michelann Parr

Abstract:

Family-school-community engagement enhances family-school-community well-being, collective confidence, and school climate. While it is often portrayed positively in the literature for families, schools, and communities, it does not come without its struggles. While there are numerous things families, schools, and communities do each and every day to enhance engagement, it is often difficult to find our way to true belonging and engagement. Working our way past surface-level barriers is easy: we can provide childcare, transportation, resources, and refreshments. We can even change the environment so that families will feel welcome, valued, and respected. But there are often mindsets and perspectives buried deep below the surface, most often grounded in societal, familial, and political norms, expectations, pressures, and narratives. This work requires the ongoing energy, commitment, and engagement of all stakeholders, including families, schools, and communities. Each and every day, we need to take a reflective and introspective stance on what is said and done and how it supports the overall goal of family-school-community engagement. And whatever we do must occur within a paradigm of care in addition to one of critical thinking and social justice. Families, and those working with families, must not simply accept all that is given, but should instead ask these types of questions: a) How, and by whom, are the current philosophies and practices of family-school engagement interrogated? b) How might digging below surface-level meanings support understanding of what is being said and done? c) How can we move toward meaningful and authentic engagement that balances knowledge and power between family, school, district, community (local and global), and government? This type of work requires conscious attention and intentional decision-making at all levels, bringing us one step closer to authentic and meaningful partnerships. 
Strategies useful to building a growth mindset include: a) interrogating and exploring consistencies and inconsistencies by looking at what is done and what is not done through multiple perspectives; b) recognizing that enhancing family-engagement and changing mindsets take place at the micro-level (e.g., family and school), but also require active engagement and awareness at the macro-level (e.g., community agencies, district school boards, government); c) taking action as an advocate or activist. Negative narratives about families, schools, and communities should not be maintained, but instead critical and courageous conversations in and out of school should be initiated and sustained; and d) maintaining consistency, simplicity, and steady progress. All involved in engagement need to be aware of the struggles, but keep them in check with the many successes. Change may not be observed on a day-to-day basis or even immediately, but stepping back and looking from the outside in, might change the view. Working toward a growth mindset will produce better results than a fixed mindset, and this takes time.

Keywords: family engagement, family-school-community engagement, parent engagement, parent involvement

Procedia PDF Downloads 178
748 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of the research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial from the point of view of exemplifying his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a well-defined scientific method for researching a specific type of language phenomenon: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. 
The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it have demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 249
747 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers a way to overcome these drawbacks of SL approaches by combining the financial assets' price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observable Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. 
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
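The environment side of such a formulation can be sketched in a minimal gym-style class; the state layout, tanh action squashing, and proportional cost model below are illustrative assumptions (the paper's actual environment augments the state with ten technical indicators and news sentiment per stock):

```python
import numpy as np

class TradingEnv:
    """Minimal sketch of a multi-asset trading environment (POMDP-style).

    State: per-asset features plus current portfolio weights.
    Action: continuous per-asset signal squashed to [-1, 1] target weights.
    Reward: portfolio return net of proportional transaction costs.
    Names and the cost model are illustrative, not the paper's exact setup.
    """

    def __init__(self, prices, features, cost=0.001):
        self.prices = np.asarray(prices, float)      # shape (T, n_assets)
        self.features = np.asarray(features, float)  # shape (T, n_assets, n_feat)
        self.cost = cost
        self.reset()

    def reset(self):
        self.t = 0
        self.w = np.zeros(self.prices.shape[1])      # start fully in cash
        return self._obs()

    def _obs(self):
        return np.concatenate([self.features[self.t].ravel(), self.w])

    def step(self, action):
        target = np.tanh(action)                     # squash to [-1, 1] weights
        turnover = np.abs(target - self.w).sum()     # traded volume this step
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        reward = float(target @ ret - self.cost * turnover)
        self.w = target
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._obs(), reward, done
```

A continuous-action algorithm such as TD3 would then be trained against `step`, with the agent gradually re-allocating weights each time step as described above.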

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 175
746 Understanding Systemic Barriers (and Opportunities) to Increasing Uptake of Subcutaneous Medroxyprogesterone Acetate Self-Injection in Health Facilities in Nigeria

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu

Abstract:

Background: The DISC project collaborated with partners to implement demand creation and service delivery interventions, including the MoT (Moment of Truth) innovation, in over 500 health facilities across 15 states. This has increased the voluntary conversion rate to self-injection among women who opt for injectable contraception. While some facilities recorded an increasing trend in key performance indicators, a few others persistently performed sub-optimally due to provider- and system-related barriers. Methodology: Twenty-two facilities performing sub-optimally were purposively selected from three Nigerian states. Low productivity was appraised using low reporting rates and poor self-injection (SI) conversion rates as indicators. Interviews were conducted with health providers across these health facilities using a rapid diagnosis tool. The project also conducted a data quality assessment that evaluated the veracity of the data elements reported across the three major sources of family planning data in each facility. Findings: The inability, and sometimes refusal, of providers to support clients to self-inject effectively was associated with a misunderstanding of its value to their work experience. It was also observed that providers still held a strong influence over clients’ method choices. Furthermore, providers held biases and misconceptions about DMPA-SC that restricted the access of obese clients and new acceptors to services, a clear departure from the recommendations of the national guidelines. Additionally, quality of care standards were compromised because job aids were not used to inform service delivery. Facilities performing sub-optimally often under-reported DMPA-SC utilization data, and there were multiple uncoordinated responsibilities for recording and reporting. Additionally, data validation meetings were not regularly convened, and these meetings were ineffective in authenticating data received from health facilities. 
Other reasons for sub-optimal performance included poor documentation and tracking of stock inventory resulting in commodity stockouts, low client flow because of poor positioning of health facilities, and ineffective messaging. Some facilities lacked adequate human and material resources to provide services effectively and received very few supportive supervision visits. Supportive supervision visits and Data Quality Audits have been useful to address the aforementioned performance barriers. The project has deployed digital DMPA-SC self-injection checklists that have been aligned with nationally approved templates. During visits, each provider and community mobilizer is accorded special attention by the supervisor until he/she can perform procedures in line with best practice (protocol). Conclusion: This narrative provides a summary of a range of factors that identify health facilities performing sub-optimally in their provision of DMPA-SC services. Findings from this assessment will be useful during project design to inform effective strategies. As the project enters its final stages of implementation, it is transitioning high-impact activities to state institutions in the quest to sustain the quality of service beyond the tenure of the project. The project has flagged activities, as well as created protocols and tools aimed at placing state-level stakeholders at the forefront of improving productivity in health facilities.

Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, barriers, opportunities, performance

Procedia PDF Downloads 76
745 The Path to Ruthium: Insights into the Creation of a New Element

Authors: Goodluck Akaoma Ordu

Abstract:

Ruthium (Rth) represents a theoretical superheavy element with an atomic number of 119, proposed within the context of advanced materials science and nuclear physics. The conceptualization of Rth involves theoretical frameworks that anticipate its atomic structure, including a hypothesized stable isotope, Rth-320, characterized by 119 protons and 201 neutrons. The synthesis of Ruthium (Rth) hinges on intricate nuclear fusion processes conducted in state-of-the-art particle accelerators, notably utilizing Calcium-48 (Ca-48) as a projectile nucleus and Einsteinium-253 (Es-253) as a target nucleus. These experiments aim to induce fusion reactions that yield Ruthium isotopes, such as Rth-301, accompanied by neutron emission. Theoretical predictions outline various physical and chemical properties attributed to Ruthium (Rth). It is envisaged to possess a high density, estimated at around 25 g/cm³, with melting and boiling points anticipated to be exceptionally high, approximately 4000 K and 6000 K, respectively. Chemical studies suggest potential oxidation states of +2, +3, and +4, indicating a versatile reactivity, particularly with halogens and chalcogens. The atomic structure of Ruthium (Rth) is postulated to feature an electron configuration of [Rn] 5f^14 6d^10 7s^2 7p^2, reflecting its position in the periodic table as a superheavy element. However, the creation and study of superheavy elements like Ruthium (Rth) pose significant challenges. These elements typically exhibit very short half-lives, posing difficulties in their stabilization and detection. Research efforts are focused on identifying the most stable isotopes of Ruthium (Rth) and developing advanced detection methodologies to confirm their existence and properties. Specialized detectors are essential in observing decay patterns unique to Ruthium (Rth), such as alpha decay or fission signatures, which serve as key indicators of its presence and characteristics. 
The potential applications of Ruthium (Rth) span across diverse technological domains, promising innovations in energy production, material strength enhancement, and sensor technology. Incorporating Ruthium (Rth) into advanced energy systems, such as the Arc Reactor concept, could potentially amplify energy output efficiencies. Similarly, integrating Ruthium (Rth) into structural materials, exemplified by projects like the NanoArc gauntlet, could bolster mechanical properties and resilience. Furthermore, Ruthium (Rth)-based sensors hold promise for achieving heightened sensitivity and performance in various sensing applications. Looking ahead, the study of Ruthium (Rth) represents a frontier in both fundamental science and applied research. It underscores the quest to expand the periodic table and explore the limits of atomic stability and reactivity. Future research directions aim to delve deeper into Ruthium (Rth)'s atomic properties under varying conditions, paving the way for innovations in nanotechnology, quantum materials, and beyond. The synthesis and characterization of Ruthium (Rth) stand as a testament to human ingenuity and technological advancement, pushing the boundaries of scientific understanding and engineering capabilities. In conclusion, Ruthium (Rth) embodies the intersection of theoretical speculation and experimental pursuit in the realm of superheavy elements. It symbolizes the relentless pursuit of scientific excellence and the potential for transformative technological breakthroughs. As research continues to unravel the mysteries of Ruthium (Rth), it holds the promise of reshaping materials science and opening new frontiers in technological innovation.

Keywords: superheavy element, nuclear fusion, bombardment, particle accelerator, nuclear physics, particle physics

Procedia PDF Downloads 33
744 Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container

Authors: Mohammad R. Jalali

Abstract:

Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or movement induced by transport. Slosh can also arise from the resonant oscillation (seiching) of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even harmonic oscillations of free surface sloshing arising from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the problem from three dimensions to two, yielding equations that can be solved efficiently. The GN approach assumes a particular kinematic structure of the flow in the vertical direction for shallow and deep-water problems. The fluid velocity profile is a finite sum of coefficients depending on space and time, each multiplied by a weighting function. It should be noted that in GN theory, the flow is rotational. In this study, GN numerical simulations of the initial Gaussian hump are compared with Fourier series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution for the overall free surface sloshing patterns.
The resonant free surface motions driven by an initial Gaussian disturbance are obtained by Fast Fourier Transform (FFT) of the free surface elevation time history components. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing analysis are applicable to the design of stable liquefied oil containers in tankers and offshore platforms.
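The FFT step described above, extracting resonant frequencies from a free surface elevation time history, can be sketched as follows. The signal here is synthetic (two illustrative sloshing modes at 0.5 Hz and 1.2 Hz), not data from the study:

```python
import numpy as np

# Hypothetical free-surface elevation record at one probe point:
# two sloshing modes at 0.5 Hz and 1.2 Hz (illustrative values only).
dt = 0.01                              # sampling interval [s]
t = np.arange(0.0, 100.0, dt)
eta = 0.05 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.sin(2 * np.pi * 1.2 * t)

# FFT of the elevation time history; keep the positive-frequency half.
spectrum = np.abs(np.fft.rfft(eta))
freqs = np.fft.rfftfreq(len(eta), d=dt)

# The dominant spectral peak identifies the primary resonant sloshing mode.
dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.2f} Hz")   # 0.50 Hz
```

The same transform applied to each elevation component yields the full set of resonant frequencies of the basin response.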

Keywords: fluid-structure interactions, free surface sloshing, Gaussian hump, Green-Naghdi equations, numerical predictions

Procedia PDF Downloads 397
743 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable implementation, and ease of tuning, ultimately improving the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method that allows a compromise between the theoretical optimal control and the realization that most closely matches it. The compared control systems' performance is evaluated through simulations in the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection.
In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the reference can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to preserve reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages state space provides for modelling MIMO systems, such controllers are expected to offer easier tuning for disturbance rejection, assuming the designer is experienced. An in-depth, multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
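The position-form (1 DOF) discrete PID structure evaluated in the article can be sketched as a minimal closed loop. The plant below is a generic discretized first-order system, not the motor model used in the paper, and the gains and sampling period are illustrative:

```python
# Minimal sketch of a position-form (1 DOF) discrete PID loop.
# Plant and gains are illustrative placeholders, not the paper's values.
Kp, Ki, Kd = 2.0, 1.0, 0.05
Ts = 0.01                      # sampling period [s]

def simulate(steps=2000, setpoint=1.0):
    y = 0.0                    # plant output (e.g., shaft position)
    integral = 0.0
    prev_error = 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * Ts
        derivative = (error - prev_error) / Ts
        u = Kp * error + Ki * integral + Kd * derivative   # position-form PID law
        prev_error = error
        # generic first-order plant, Euler-discretized:
        # y[k+1] = y[k] + Ts * (-a*y[k] + b*u[k]) with a = b = 1
        y += Ts * (-1.0 * y + 1.0 * u)
    return y

print(simulate())   # settles near the setpoint 1.0
```

The integral term is what drives the steady-state tracking error to zero; the 2 DOF (speed-form) variant discussed in the abstract additionally weights the setpoint separately in the proportional and derivative paths.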

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 263
742 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, as the most considerable capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas and to integrate and coordinate the firm's resources and capabilities in order to create value. A careful review of the existing knowledge base leaves us puzzled regarding the relationship among competitive strategies, dynamic capabilities and value creation. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms' value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating and coordinating capabilities play a significant role in a firm's value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on a firm's competitive strategy. This study argues that a firm's competitive strategy can mediate its ability to derive value from its dynamic capabilities, and it explains the extent to which a firm's competitive strategy may influence its value generation.
The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added by the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings on the relationship among dynamic capabilities, competitive strategy and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as a firm's competitive strategy, to examine how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies, which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding in detail the links among the four types of dynamic capabilities, competitive strategy and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence explaining why the DCV has become essential to firms in today's business world.

Keywords: dynamic capabilities, resource based theory, value creation, competitive strategy

Procedia PDF Downloads 238
741 Cognitive Control Moderates the Concurrent Effect of Autistic and Schizotypal Traits on Divergent Thinking

Authors: Julie Ramain, Christine Mohr, Ahmad Abu-Akel

Abstract:

Divergent thinking, a cognitive component of creativity, and particularly the ability to generate unique and novel ideas, has been linked to both autistic and schizotypal traits. However, to our knowledge, the concurrent effect of these trait dimensions on divergent thinking has not been investigated. Moreover, it has been suggested that creativity is associated with different types of attention and cognitive control, and consequently with how information is processed in a given context. Intriguingly, consistent with the diametric model, autistic and schizotypal traits have been associated with contrasting attentional and cognitive control styles: positive schizotypal traits with reactive cognitive control and attentional flexibility, and autistic traits with proactive cognitive control and an increased focus of attention. The current study investigated the relationship between divergent thinking, autistic and schizotypal traits, and cognitive control in a non-clinical sample of 83 individuals (males = 42%; mean age = 22.37, SD = 2.93), sufficient to detect a medium effect size. Divergent thinking was evaluated with an adapted version of the Figural Torrance Test of Creative Thinking. Crucially, since we were interested in testing divergent thinking productivity across contexts, participants were asked to generate items from basic shapes in four different contexts. The variance of the proportion of unique to total responses across contexts represented a measure of context adaptability, with lower variance indicating greater context adaptability. Cognitive control was estimated with the Behavioral Proactive Index of the AX-CPT task, with higher scores representing the ability to actively maintain goal-relevant information in a sustained/anticipatory manner. Autistic and schizotypal traits were assessed with the Autism Quotient (AQ) and the Community Assessment of Psychic Experiences (CAPE-42).
Generalized linear models revealed a 3-way interaction of autistic traits, positive schizotypal traits and proactive cognitive control associated with increased context adaptability. Specifically, the concurrent effect of autistic and positive schizotypal traits on increased context adaptability was moderated by the level of proactive control and was only significant when proactive cognitive control was high. Our study reveals that autistic and positive schizotypal traits interactively facilitate the capacity to generate unique ideas across various contexts. However, this effect depends on cognitive control mechanisms indicative of the ability to proactively maintain attention when needed. The current results point to a unique profile of divergent thinkers who are able to tap systematic and flexible processing modes within and across contexts, respectively. This is particularly intriguing, as such a combination of phenotypes has been proposed to explain the genius of Beethoven, Nash, and Newton.
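A 3-way interaction of the kind reported above is fitted by including the product of the three predictors in the model alongside all main effects and 2-way products. A minimal sketch on synthetic data (variable names and the true coefficient are invented for illustration, not the study's data or estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic standardized predictors (illustrative only):
autistic = rng.normal(size=n)      # autistic traits (cf. AQ)
schizo = rng.normal(size=n)        # positive schizotypal traits
proactive = rng.normal(size=n)     # proactive cognitive control index

# Outcome generated with a true 3-way interaction coefficient of 0.5:
y = 0.5 * autistic * schizo * proactive + rng.normal(scale=0.1, size=n)

# Design matrix: intercept, main effects, all 2-way products, 3-way product.
X = np.column_stack([
    np.ones(n), autistic, schizo, proactive,
    autistic * schizo, autistic * proactive, schizo * proactive,
    autistic * schizo * proactive,
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated 3-way coefficient: {beta[-1]:.2f}")   # close to 0.5
```

A significant last coefficient is the statistical signature of the moderated moderation the abstract describes: the joint autistic-by-schizotypal effect changes with the level of proactive control.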

Keywords: autism, schizotypy, creativity, cognitive control

Procedia PDF Downloads 133
740 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme based on computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups ("Good Earth" and "Soft Clay") based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and near-instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model, coupled with the apparent electrical resistivity (ρ), can classify soils into "Good Earth" and "Soft Clay" in less than a minute, with an accuracy of 85% on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: the apparent electrical resistivity of soil (ρ), measured using a set of four probes arranged in Wenner's array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: "Good Earth", "Soft Clay", and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w and CPT measurements can be collectively analyzed to classify soils into "Good Earth" or "Soft Clay" and are feasible as complementary methods to the computer vision system.
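The GLCM textural-parameter step of the computer vision pipeline can be sketched with plain NumPy. This is a minimal illustration of how co-occurrence statistics such as contrast and homogeneity are computed from a quantized grey-level image; the offset, level count, and toy image are assumptions, not the study's settings:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[image[i, j], image[i + dy, j + dx]] += 1
    return g / g.sum()          # normalize to joint probabilities

def contrast(g):                # emphasizes large grey-level jumps
    i, j = np.indices(g.shape)
    return np.sum(g * (i - j) ** 2)

def homogeneity(g):             # emphasizes near-diagonal (smooth) texture
    i, j = np.indices(g.shape)
    return np.sum(g / (1.0 + np.abs(i - j)))

# Toy 8-level "soil image": a uniform patch has zero contrast, full homogeneity.
uniform = np.full((16, 16), 3, dtype=int)
g = glcm(uniform)
print(contrast(g), homogeneity(g))   # 0.0 1.0
```

In the scheme described above, a vector of such GLCM parameters per image is what feeds the ANN classifier.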

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 231
739 Extraction of Urban Building Damage Using Spectral, Height and Corner Information

Authors: X. Wang

Abstract:

Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and by errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used to detect building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used to extract building damage showing drastic change in geometric structure. The initial building damage result was generated by combining these two results.
Finally, a post-processing procedure was adopted to refine the initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image and post-event LiDAR data. The results showed that the proposed method significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method offers a fast and reliable way to detect urban building collapse, and is also applicable to related applications.
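The height-difference stage described above reduces, at its core, to differencing pre- and post-event surface models and thresholding the drop. A minimal sketch with toy grids; the 3 m threshold and the array sizes are invented for illustration, not values from the study:

```python
import numpy as np

def height_change_damage(pre_dsm, post_dsm, threshold=3.0):
    """Flag cells where the surface dropped by more than `threshold` metres
    between the pre-event and post-event height models (collapse candidates)."""
    drop = pre_dsm - post_dsm
    return drop > threshold

# Toy 4x4 height grids: a 2x2 "building" drops from 10 m to 2 m post-event.
pre = np.zeros((4, 4))
pre[1:3, 1:3] = 10.0                  # 10 m tall building footprint
post = pre.copy()
post[1:3, 1:3] = 2.0                  # rubble at 2 m after the event
mask = height_change_damage(pre, post)
print(int(mask.sum()))                # 4 damaged cells
```

The corner-density stage is analogous: a per-cell difference of corner counts between epochs, thresholded to flag drastic changes in geometric structure.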

Keywords: building damage, corner, earthquake, height, very high resolution (VHR)

Procedia PDF Downloads 210
738 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure

Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar

Abstract:

Multiferroic materials are vital for new applications and memory devices, not only because of the presence of multiple types of domains but also as a result of the cross correlation between coexisting forms of magnetic and electrical order. In spite of the wide range of studies on bulk ceramic multiferroics, their realization in thin film form is still limited due to some crucial problems. During the last few years, special attention has been devoted to the synthesis of thin films such as BiFeO₃, as they allow direct integration of the material into device technology. Therefore, in the course of exploring new multiferroic thin films, a La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO₃) thin film was prepared and characterized on a LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) as the buffer layer. The fact that all the electrical and magnetic properties are closely related to the electronic structure makes it essential to study the electronic structure of the system; without this knowledge, one can never be sure about the mechanisms responsible for the different properties exhibited by the thin film. A literature review reveals that studies on changes in the atomic and hybridization states in multiferroic samples are still scarce, with few exceptions. The technique of X-ray absorption spectroscopy (XAS) has made great strides towards providing such information, as it yields a unique signature of a given material. In this context, we report the electronic structure study of a well characterized LBFMO₃ multiferroic thin film on an LAO substrate with LNO as buffer layer, synthesized by the RF sputtering technique, using near-edge X-ray absorption fine structure (NEXAFS). The present exploration was performed to find the valence state and crystal field symmetry of the ions present in the system.
NEXAFS data at the O K-edge reveal a slight shift in peak position along with a growth in the intensity of the low-energy feature. Studies of the Mn L₃,₂-edge spectra indicate the presence of an Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, together with the spectra of a reference compound, reveal that the Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.

Keywords: magnetic, multiferroic, NEXAFS, x-ray absorption fine structure, XMCD, x-ray magnetic circular dichroism

Procedia PDF Downloads 151
737 High Acid-Stable α-Amylase Production by Milk in Liquid Culture

Authors: Shohei Matsuo, Saki Mikai, Hiroshi Morita

Abstract:

Objectives: Shochu is a popular Japanese distilled spirit. In the production of shochu, the filamentous fungus Aspergillus kawachii has traditionally been used. A. kawachii produces two types of starch hydrolytic enzymes: α-amylase (enzymatic liquefaction) and glucoamylase (enzymatic saccharification). A liquid culture system is relatively easy to operate and has a relatively low production cost compared with solid culture. In liquid culture, however, acid-unstable α-amylase (α-A) is produced abundantly while acid-stable α-amylase (Aα-A) is not. Because of its higher enzyme productivity, the solid culture method has been adopted by most shochu breweries. In this study, therefore, we investigated the production of Aα-A in a liquid culture system. Materials and methods: The microorganism Aspergillus kawachii NBRC 4308 was used. The mold was cultured at 30 °C for 7-14 d to allow formation of conidiospores on slant agar medium. Liquid culture system: A. kawachii was cultured in 100 ml of the following altered SLS medium: 1.0 g of rice flour, 0.1 g of K₂HPO₄, 0.1 g of KCl, 0.6 g of tryptone, 0.05 g of MgSO₄·7H₂O, 0.001 g of FeSO₄·7H₂O, 0.0003 g of ZnSO₄·7H₂O, 0.021 g of CaCl₂, and 0.33 g of citric acid (pH 3.0). The pH of the medium was adjusted to the designated value with 10 % HCl solution. Cultivation was carried out with shaking at 30 °C and 200 rpm for 72 h, and the culture was filtered to obtain a crude enzyme solution. Aα-A assay: The crude enzyme solution was analyzed. Acid-stable α-amylase activity was measured using an α-amylase assay kit (Kikkoman Corporation, Noda, Japan) after adding 9 ml of 100 mM acetate buffer (pH 3.0) to 1 ml of the culture supernatant and acid treatment at 37 °C for 1 h. One unit of α-amylase activity was defined as the amount of enzyme that yielded 1 mmol of 2-chloro-4-nitrophenyl 6-azido-6-deoxy-β-maltopentaoside (CNP) per minute. Results and Conclusion: We experimented with co-culture of A.
kawachii and Lactobacillus in order to control the pH of the altered SLS medium; however, high production of acid-stable α-amylase was not obtained. We then experimented with adding yoghurt or milk to the liquid culture. The results indicated that high production of acid-stable α-amylase (964 U/g-substrate) was obtained when milk was added to the liquid culture. The phosphate concentration in the liquid medium was a major factor behind the increased acid-stable α-amylase activity. In liquid culture, acid-stable α-amylase activity was enhanced by milk, but the fats and oils in the milk were oxidized. In addition, tryptone is not approved as a food additive in Japan. Thus, the altered SLS medium was supplemented with skim milk, which lacks the fats and oils of whole milk, in place of tryptone. The result indicated that high production of acid-stable α-amylase was obtained, with the same effect as milk.

Keywords: acid-stable α-amylase, liquid culture, milk, shochu

Procedia PDF Downloads 283
736 Characterization of Dota-Girentuximab Conjugates for Radioimmunotherapy

Authors: Tais Basaco, Stefanie Pektor, Josue A. Moreno, Matthias Miederer, Andreas Türler

Abstract:

Radiopharmaceuticals based on monoclonal antibodies (mAbs) conjugated via chemical linkers have become a potential tool in nuclear medicine because of their specificity and the large variability and availability of therapeutic radiometals. It is important to identify the conjugation sites and the number of chelators attached to the mAb in order to obtain radioimmunoconjugates with the required immunoreactivity and radiostability. The girentuximab antibody (G250) is a potential candidate for radioimmunotherapy of clear cell renal cell carcinomas (RCCs) because it is reactive with the CAIX antigen, a transmembrane glycoprotein overexpressed on the cell surface of most (>90%) RCCs. G250 was conjugated with the bifunctional chelating agent DOTA (1,4,7,10-tetraazacyclododecane-N,N',N'',N'''-tetraacetic acid) via a benzyl-thiocyano group as a linker (p-SCN-Bn-DOTA). DOTA-G250 conjugates were analyzed by size exclusion chromatography (SE-HPLC) and by electrophoresis (SDS-PAGE). The potential site-specific conjugation was identified by liquid chromatography-tandem mass spectrometry (LC-MS/MS), and the number of linkers per molecule of mAb was calculated using the molecular weight (MW) measured by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The average number obtained for the conjugates under non-reduced conditions was 8-10 molecules of DOTA per molecule of mAb; under reduced conditions it was 1-2 and 3-4 molecules of DOTA per molecule of mAb for the light chain (LC) and heavy chain (HC), respectively. Potential DOTA modification sites were identified at lysine residues. The biological activity of the conjugates was evaluated by flow cytometry (FACS) using CAIX-negative (SKRC-18) and CAIX-positive (SKRC-52) cells. The DOTA-G250 conjugates were labelled with 177Lu with a radiochemical yield > 95%, reaching specific activities of 12 MBq/µg.
The in vitro stability of the different types of radioconstructs was analyzed in human serum albumin (HSA). The radiostability of 177Lu-DOTA-G250 at high specific activity was increased by the addition of sodium ascorbate after labelling. The immunoreactivity was evaluated in vitro and in vivo. Binding to CAIX-positive cells (SK-RC-52) at different specific activities was higher for conjugates with lower DOTA content. The protein dose was optimized in mice with subcutaneously growing SK-RC-52 tumors using different amounts of 177Lu-DOTA-G250.

Keywords: mass spectrometry, monoclonal antibody, radiopharmaceuticals, radioimmunotherapy, renal cancer

Procedia PDF Downloads 302