Search results for: slice thickness accuracy
1100 Geospatial Techniques and VHR Imagery Use for Identification and Classification of Slums in Gujrat City, Pakistan
Authors: Muhammad Ameer Nawaz Akram
Abstract:
The 21st century has revealed that more individuals around the world are living in urban settlements than in rural zones. The evolution of numerous cities in emerging and newly developed countries is accompanied by the rise of slums. The precise definition of a slum varies from country to country, but the universal consensus is that slums are dilapidated settlements facing severe poverty and lacking access to sanitation, water, electricity, decent living conditions, and land tenure. Slum settlements always vary in unique patterns within and among countries and cities. The core objective of this study is the spatial identification and classification of slums in Gujrat city, Pakistan, from very high-resolution GeoEye-1 (0.41 m) satellite imagery. Slums were first identified using GPS for sample site identification and ground-truthing; through this process, 425 slums were identified. Then Object-Oriented Analysis (OOA) was applied to classify slums on the digital image. Spatial analysis software, e.g., ArcGIS 10.3, Erdas Imagine 9.3, and Envi 5.1, was used for processing the data and performing the analysis. Results show that OOA provides up to 90% accuracy for the identification of slums. Jalal Cheema and Allah Ho colonies are severely affected by slum settlements. The ratio of criminal activities is also higher here than in other areas. Slums are increasing with the passage of time in urban areas, and they will become a hazardous problem in the coming future. Therefore, the executive bodies now need to make effective policies and move towards the amelioration of the city. Keywords: slums, GPS, satellite imagery, object oriented analysis, zonal change detection
Procedia PDF Downloads 139
1099 Numerical Investigation of the Effects of Surfactant Concentrations on the Dynamics of Liquid-Liquid Interfaces
Authors: Bamikole J. Adeyemi, Prashant Jadhawar, Lateef Akanji
Abstract:
Theoretically, there exist two mathematical interfaces (fluid-solid and fluid-fluid) when a liquid film is present on solid surfaces. These interfaces overlap if the mineral surface is oil-wet or mixed wet, and therefore, the effects of disjoining pressure are significant on both boundaries. Hence, dewetting is a necessary process that could detach oil from the mineral surface. However, if the thickness of the thin water film directly in contact with the surface is large enough, disjoining pressure can be thought to be zero at the liquid-liquid interface. Recent studies show that the integration of fluid-fluid interactions with fluid-rock interactions is an important step towards a holistic approach to understanding smart water effects. Experiments have shown that the brine solution can alter the micro forces at oil-water interfaces, and these ion-specific interactions lead to oil emulsion formation. The natural emulsifiers present in crude oil behave as polyelectrolytes when the oil interfaces with low salinity water. Wettability alteration caused by low salinity waterflooding during Enhanced Oil Recovery (EOR) process results from the activities of divalent ions. However, polyelectrolytes are said to lose their viscoelastic property with increasing cation concentrations. In this work, the influence of cation concentrations on the dynamics of viscoelastic liquid-liquid interfaces is numerically investigated. The resultant ion concentrations at the crude oil/brine interfaces were estimated using a surface complexation model. Subsequently, the ion concentration parameter is integrated into a mathematical model to describe its effects on the dynamics of a viscoelastic interfacial thin film. The film growth, stability, and rupture were measured after different time steps for three types of fluids (Newtonian, purely elastic and viscoelastic fluids). The interfacial films respond to exposure time in a similar manner with an increasing growth rate, which resulted in the formation of more droplets with time. Increased surfactant accumulation at the interface results in a higher film growth rate which leads to instability and subsequent formation of more satellite droplets. Purely elastic and viscoelastic properties limit film growth rate and consequent film stability compared to the Newtonian fluid. Therefore, low salinity and reduced concentration of the potential determining ions in injection water will lead to improved interfacial viscoelasticity.Keywords: liquid-liquid interfaces, surfactant concentrations, potential determining ions, residual oil mobilization
Procedia PDF Downloads 149
1098 Effect of Pre-bonding Storage Period on Laser-treated Al Surfaces
Authors: Rio Hirakawa, Christian Gundlach, Sven Hartwig
Abstract:
In recent years, the use of aluminium has further expanded, and it is expected to replace steel in the future as vehicles become lighter and more recyclable in order to reduce greenhouse gas (GHG) emissions and improve fuel economy. In line with this, structures and components are becoming increasingly multi-material, with different materials, including aluminium, being used in combination to improve mechanical utility and performance. A common method of assembling dissimilar materials is mechanical fastening, but it has several drawbacks, such as additional manufacturing processes and the influence of substrate-specific mechanical properties. Adhesive bonding and fusion bonding are methods that overcome the above disadvantages. In these two joining methods, surface pre-treatment of the substrate is always necessary to ensure the strength and durability of the joint. Previous studies have shown that laser surface treatment improves the strength and durability of the joint. Yan et al. showed that laser surface treatment of aluminium alloys changes α-Al2O3 in the oxide layer to γ-Al2O3. As γ-Al2O3 has a large specific surface area, is very porous and chemically active, laser-treated aluminium surfaces are expected to undergo physico-chemical changes over time and adsorb moisture and organic substances from the air or storage atmosphere. The impurities accumulated on the laser-treated surface may be released at the adhesive and bonding interface by the heat input to the bonding system during the joining phase, affecting the strength and durability of the joint. However, only a few studies have discussed the effect of such storage periods on laser-treated surfaces. This paper, therefore, investigates the ageing of laser-treated aluminium alloy surfaces through thermal analysis, electrochemical analysis and microstructural observations. AlMg3 of 0.5 mm and 1.5 mm thickness was cut using a water-jet cutting machine, cleaned and degreased with isopropanol, and surface pre-treated with a pulsed fibre laser at 1060 nm wavelength, 70 W maximum power and 55 kHz repetition frequency. The aluminium surface was then analysed using SEM, thermogravimetric analysis (TGA), Fourier transform infrared spectroscopy (FTIR) and cyclic voltammetry (CV) after storage in air for various periods ranging from one day to several months. TGA and FTIR analysed the impurities adsorbed on the aluminium surface, while CV revealed changes in the true electrochemically active surface area. SEM also revealed visual changes on the treated surface. In summary, the changes in the laser-treated aluminium surface with storage time were investigated, and the final results were used to determine the appropriate storage period. Keywords: laser surface treatment, pre-treatment, adhesion, bonding, corrosion, durability, dissimilar material interface, automotive, aluminium alloys
Procedia PDF Downloads 84
1097 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D Mannequin is a kind of assisting tool which can rapidly realize design concepts. When the concept of the 3D Mannequin is applied to computer-aided fashion design, it connects with the development and application of the design platform and system. Thus, it is very critical to develop a 3D Mannequin module that corresponds with the necessities of fashion design. This research proposes a concrete plan for developing and constructing a 3D Mannequin system with Kinect. In this system, ergonomic measurements of the objective human features can be attained in real time through the depth camera of the Kinect, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the points scanned by the Kinect are revised for accuracy and smoothed, a complete human figure is reconstructed by the ICP (iterative closest point) algorithm together with image processing methods. The objective human features can also be recognized and analyzed to obtain real measurements. Furthermore, the ergonomic measurements can be applied to shape morphing for the subdivision of the 3D Mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D Mannequin is generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. In order to examine the practicality of the research structure, a 3D Mannequin system is constructed with a JAVA program in this study. Through repeated experiments, the practicability of the research result is verified. Keywords: 3D mannequin, kinect scanner, iterative closest point, shape morphing, subdivision
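The reconstruction step above relies on the iterative closest point (ICP) algorithm to register the Kinect point cloud to the model. Below is a minimal point-to-point ICP sketch in Python/NumPy; the random point sets, iteration limit and tolerance are illustrative assumptions and do not reflect the authors' implementation.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, max_iter=50, tol=1e-6):
    """Align point cloud src to dst with point-to-point ICP."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(max_iter):
        # nearest-neighbour correspondences (brute force, for clarity only)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.mean(np.linalg.norm(cur - nn, axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur, err

# Toy example: recover a known rotation/translation of a random cloud.
rng = np.random.default_rng(0)
ref = rng.normal(size=(200, 3))
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
scan = ref @ Rz.T + np.array([0.05, -0.02, 0.1])
aligned, residual = icp(scan, ref)
print(f"mean residual after ICP: {residual:.4f}")
```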
Procedia PDF Downloads 312
1096 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection
Authors: Devadrita Dey Sarkar
Abstract:
Regardless of the many technologic advances in the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. A computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms is presented in this abstract, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance by computers does not have to be comparable to or better than that by physicians, but needs to be complementary to that by physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists’ accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules, as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists. Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)
Procedia PDF Downloads 457
1095 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is supposed to be the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve the result. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the required computational time of the seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some amount of error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied towards the estimation of the seismic demand of the main structure. The seismic demand of the sampled structure is estimated by calculating the modal displacement of a basic structure for which the modal displacement has already been computed. Shear steel structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (in view of time of peak ground acceleration). Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
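For reference, the modal combination rules mentioned above can be written down compactly. The sketch below combines hypothetical modal peak responses with SRSS and with CQC (using the standard constant-damping correlation coefficient); the modal peaks, frequencies and damping ratio are illustrative assumptions, not values from the study.

```python
import numpy as np

def srss(peaks):
    """Square Root of the Sum of Squares combination of modal peaks."""
    peaks = np.asarray(peaks, dtype=float)
    return np.sqrt(np.sum(peaks ** 2))

def cqc(peaks, freqs, zeta=0.05):
    """Complete Quadratic Combination with the constant-damping
    correlation coefficient rho_ij (Der Kiureghian form)."""
    peaks = np.asarray(peaks, dtype=float)
    freqs = np.asarray(freqs, dtype=float)
    r = freqs[None, :] / freqs[:, None]          # frequency ratios
    rho = (8 * zeta**2 * (1 + r) * r**1.5) / \
          ((1 - r**2)**2 + 4 * zeta**2 * r * (1 + r)**2)
    return np.sqrt(peaks @ rho @ peaks)

# Illustrative modal peak displacements (m) and natural frequencies (Hz)
peaks = [0.12, 0.05, 0.02]
freqs = [1.2, 3.5, 6.1]
print(f"SRSS estimate: {srss(peaks):.4f} m")
print(f"CQC  estimate: {cqc(peaks, freqs):.4f} m")
```

For well-separated modes the two rules give nearly identical results; CQC matters most when natural frequencies are closely spaced.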
Procedia PDF Downloads 360
1094 Synthetic Data-Driven Prediction Using GANs and LSTMs for Smart Traffic Management
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Smart cities and intelligent transportation systems rely heavily on effective traffic management and infrastructure planning. This research tackles the data scarcity challenge by generating realistically synthetic traffic data from the PeMS-Bay dataset, enhancing predictive modeling accuracy and reliability. Advanced techniques like TimeGAN and GaussianCopula are utilized to create synthetic data that mimics the statistical and structural characteristics of real-world traffic. The future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is anticipated to capture both spatial and temporal correlations, further improving data quality and realism. Each synthetic data generation model's performance is evaluated against real-world data to identify the most effective models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are employed to model and predict complex temporal dependencies within traffic patterns. This holistic approach aims to identify areas with low vehicle counts, reveal underlying traffic issues, and guide targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study facilitates data-driven decision-making that improves urban mobility, safety, and the overall efficiency of city planning initiatives.Keywords: GAN, long short-term memory (LSTM), synthetic data generation, traffic management
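A minimal sketch of the LSTM forecasting step described above is given below using Keras; the synthetic sine-like "traffic counts", window length and network size are illustrative assumptions and do not reproduce the PeMS-Bay data or the GAN-generated series.

```python
import numpy as np
import tensorflow as tf

# Stand-in for a univariate traffic-count series (e.g., vehicles per 5 min).
t = np.arange(0, 2000)
series = 100 + 40 * np.sin(2 * np.pi * t / 288) + np.random.normal(0, 5, t.size)

def make_windows(x, lookback=24):
    """Slice a series into (lookback history -> next value) supervised pairs."""
    X = np.stack([x[i:i + lookback] for i in range(len(x) - lookback)])
    y = x[lookback:]
    return X[..., None], y

X, y = make_windows(series)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

mae = np.mean(np.abs(model.predict(X[split:], verbose=0).ravel() - y[split:]))
print(f"hold-out MAE: {mae:.2f} vehicles")
```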
Procedia PDF Downloads 19
1093 Impact of Microwave and Air Velocity on Drying Kinetics and Rehydration of Potato Slices
Authors: Caiyun Liu, A. Hernandez-Manas, N. Grimi, E. Vorobiev
Abstract:
Drying is one of the most used methods for food preservation; it extends the shelf life of food and makes its transportation, storage and packaging easier and more economical. The most commonly used drying method is hot air drying. However, its disadvantages are low energy efficiency and long drying times. Because of the high temperature during hot air drying, undesirable changes in pigments, vitamins and flavoring agents occur, which result in degradation of the quality parameters of the product. The drying process can also cause shrinkage, case hardening, dark color, browning, loss of nutrients and other defects. Recently, new processes were developed in order to avoid these problems. For example, the application of a pulsed electric field provokes cell membrane permeabilisation, which increases the drying kinetics and the moisture diffusion coefficient. Microwave drying technology also has several advantages over conventional hot air drying, such as higher drying rates and thermal efficiency, shorter drying time, and significantly improved product quality and nutritional value. The rehydration kinetics is a very important characteristic of dried products. Current research has indicated that the rehydration ratio and the coefficient of rehydration are dependent on the processing conditions of drying. The present study compares the efficiency of two processes (1: room temperature air drying, 2: microwave/air drying) in terms of drying rate, product quality and rehydration ratio. In this work, potato slices (≈2.2 g) with a thickness of 2 mm and a diameter of 33 mm were placed in the microwave chamber and dried. The drying kinetics and drying rates of the different methods were determined. The process parameters studied included the inlet air velocity (1 m/s, 1.5 m/s, 2 m/s) and the microwave power (50 W, 100 W, 200 W and 250 W). The evolution of temperature during microwave drying was measured. The drying power had a strong effect on the drying rate, and microwave-air drying resulted in a 93% decrease in the drying time when the air velocity was 2 m/s and the microwave power was 250 W. Based on the Lewis model, drying rate constants (kDR) were determined. An increase from kDR = 0.0002 s-1 for air drying at 2 m/s to kDR = 0.0032 s-1 for microwave/air drying (at 2 m/s and 250 W) was observed. The effective moisture diffusivity was calculated by using Fick's law. The results show an increase of the effective moisture diffusivity from 7.52×10-11 m2.s-1 for air drying at 2 m/s to 2.64×10-9 m2.s-1 for microwave/air drying (at 2 m/s and 250 W). The temperature of the potato slices increased for higher microwave power but decreased for higher air velocity. The rehydration ratio, defined as the weight of the sample after rehydration divided by the weight of the dried sample, was determined at different water temperatures (25℃, 50℃, 75℃). The rehydration ratio increased with the water temperature and reached its maximum at the following conditions: 200 W for the microwave power, 2 m/s for the air velocity and 75°C for the water temperature. The present study shows the interest of microwave drying for food preservation. Keywords: drying, microwave, potato, rehydration
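The drying-rate constant and effective diffusivity quoted above follow from two textbook relations: the Lewis model MR = exp(-kt), and the first-term slab solution of Fick's law, MR = (8/pi^2)*exp(-pi^2*Deff*t/(4*L^2)), with L the half-thickness. The sketch below fits both to a hypothetical moisture-ratio curve; the data points are invented for illustration and are not the measured potato data.

```python
import numpy as np

# Hypothetical drying curve: time (s) and moisture ratio MR = (M - Me)/(M0 - Me)
t = np.array([0, 300, 600, 900, 1200, 1500, 1800], dtype=float)
mr = np.array([1.00, 0.55, 0.30, 0.17, 0.09, 0.05, 0.03])

# Lewis model: MR = exp(-k t)  ->  ln(MR) = -k t  (linear fit through the data)
slope = np.polyfit(t, np.log(mr), 1)[0]
k_dr = -slope

# Fick's law, infinite slab, first term: slope of ln(MR) vs t = -pi^2 Deff / (4 L^2)
L = 0.002 / 2                      # half-thickness of a 2 mm slice (m)
d_eff = -slope * 4 * L**2 / np.pi**2

print(f"drying rate constant k_DR ≈ {k_dr:.2e} 1/s")
print(f"effective diffusivity D_eff ≈ {d_eff:.2e} m^2/s")
```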
Procedia PDF Downloads 273
1092 Fault Tolerant Control of the Dynamical Systems Based on Internal Structure Systems
Authors: Seyed Mohammad Hashemi, Shahrokh Barati
Abstract:
The problem of fault-tolerant control (FTC) by the accommodation method has been studied in this paper. A fault may occur in any system component, such as the actuators, the sensors or the internal structure of the system, and leads to loss of performance and instability of the system. When a fault occurs, the purpose of fault-tolerant control is to designate a strategy that can keep the control loop stable and maintain system performance as far as possible without shutting down the system. Here, the fault detection and isolation (FDI) part of the system has been evaluated with regard to actuator faults. A fault detection and isolation system for a multi input-multi output (MIMO) plant is designed using an unknown input observer, so the system is divided into several subsystems, with the effect of the other inputs treated as disturbances in the given system state equations. In this observer design method, the effect of these disturbances is weakened, and the fault is detected only on a specific input. The simulation results of this approach confirm the capability of the designed fault detection and isolation system. After fault detection and isolation, it is necessary to redesign the controller based on a suitable modification. In this regard, after using unknown input observer theory to obtain and evaluate the residual signal, the PID controller parameters are redesigned iteratively. The stability of the closed loop system has been proved in the presence of this method. Also, in order to soften the volatility caused by variations of the PID controller parameters, a sigma modification is used as an acceptable solution. Finally, the simulation results of the popular three-tank example confirm the accuracy of the performance. Keywords: fault tolerant control, fault detection and isolation, actuator fault, unknown input observer
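As a simplified illustration of observer-based residual generation for actuator fault detection (a plain Luenberger observer rather than the unknown input observer used in the paper), the sketch below simulates a small state-space plant, injects an additive actuator fault halfway through, and monitors the residual r = y - C*x_hat; the plant matrices, observer gain and fault size are illustrative assumptions.

```python
import numpy as np

# Illustrative 2-state plant x' = Ax + Bu, y = Cx (Euler discretisation)
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.5], [2.0]])          # observer gain (assumed, not designed here)
dt, n = 0.01, 2000

x = np.zeros((2, 1)); xh = np.zeros((2, 1))
residual = np.zeros(n)
for k in range(n):
    u = np.array([[np.sin(0.02 * k)]])
    fault = 0.5 if k > n // 2 else 0.0          # additive actuator fault
    y = C @ x
    # plant: the actuator signal is corrupted by the fault
    x = x + dt * (A @ x + B @ (u + fault))
    # Luenberger observer driven by the nominal input only
    xh = xh + dt * (A @ xh + B @ u + L @ (y - C @ xh))
    residual[k] = (y - C @ xh).item()

print(f"mean |residual| before fault: {np.abs(residual[:n//2]).mean():.4f}")
print(f"mean |residual| after  fault: {np.abs(residual[n//2:]).mean():.4f}")
```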
Procedia PDF Downloads 458
1091 Using Artificial Intelligence Technology to Build the User-Oriented Platform for Integrated Archival Service
Authors: Lai Wenfang
Abstract:
This study describes how to use artificial intelligence (AI) technology to build a user-oriented platform for integrated archival service. The platform will be launched in 2020 by the National Archives Administration (NAA) in Taiwan. With the progression of information communication technology (ICT), the NAA has built many systems to provide archival services. In order to cope with new challenges, such as new ICT, artificial intelligence or blockchain, the NAA will try to use natural language processing (NLP) and machine learning (ML) techniques to build a training model and propose suggestions based on the data sent to the platform. The NAA expects that the platform can not only automatically inform the sending agencies’ staff which records catalogues are against the transfer or destruction rules, but also use the model to find the details hidden in the catalogues and suggest to the NAA’s staff whether or not the records should be retained, in order to shorten the auditing time. The platform keeps all the users’ browsing trails so that it can predict what kinds of archives a user could be interested in, recommend search terms through visualization and, moreover, inform them of newly arrived archives. In addition, according to the Archives Act, the NAA’s staff must spend a lot of time marking or removing personal data, classified data, etc., before archives are provided. To upgrade the archives access service process, the platform will use text recognition patterns to black out such data automatically; the staff only need to correct any errors and upload the corrected version, and as the platform learns, the accuracy will keep getting higher. In short, the purpose of the platform is to advance the government's digital transformation and implement the vision of a service-oriented smart government. Keywords: artificial intelligence, natural language processing, machine learning, visualization
Procedia PDF Downloads 182
1090 A Study on the Shear-Induced Crystallization of Aliphatic-Aromatic Copolyester
Authors: Ramin Hosseinnezhad, Iurii Vozniak, Andrzej Galeski
Abstract:
Shear-induced crystallization, originated from orientation of chains along the flow direction, is an inevitable part of most polymer processing technologies. It plays a dominant role in determining the final product properties and is affected by many factors such as shear rate, cooling rate, total strain, etc. Investigation of the shear-induced crystallization process become of great importance for preparation of nanocomposite, which requires crystallization of nanofibrous sheared inclusions at higher temperatures. Thus, the effects of shear time, shear rate, and also thermal condition of cooling on crystallization of two aliphatic-aromatic copolyesters have been investigated. This was performed using Linkam optical shearing system (CSS450) for both Ecoflex® F Blend C1200 produced by BASF and synthesized copolyester of butylene terephthalate and a mixture of butylene esters: adipate, succinate, and glutarate, (PBASGT), containing 60% of aromatic comonomer. Crystallization kinetics of these biodegradable copolyesters was studied at two different conditions of shearing. First, sample with a thickness of 60µm was heated to 60˚C above its melting point and subsequently subjected to different shear rates (100–800 sec-1) while cooling with specific rates. Second, the same type of sample was cooled down when shearing at constant temperature was finished. The intensity of transmitted depolarized light, recorded by a camera attached to the optical microscope, was used as a measure to follow the crystallization. Temperature dependencies of conversion degree of samples during cooling were collected and used to determine the half-temperature (Th), at which 50% conversion degree was reached. Shearing ecoflex films for 45 seconds with a shear rate of 100 sec-1 resulted in significant increase of Th from 56˚C to 70˚C. Moreover, the temperature range for the transition of molten samples to crystallized state decreased from 42˚C to 20˚C. Comparatively low shift of 10˚C in Th towards higher temperature was observed for PBASGT films at shear rate of 600 sec-1 for 45 seconds. However, insufficient melt flow strength and non-laminar flow due to Taylor vortices was a hindrance to reach more elevated Th at very high shear rates (600–800 sec-1). The shift in Th was smaller for the samples sheared at a constant temperature and subsequently cooled down. This may be attributed to the longer time gap between cessation of shearing and the onset of crystallization. The longer this time gap, the more possibility for crystal nucleus to re-melt at temperatures above Tm and for polymer chains to recoil and relax. It is found that the crystallization temperature, crystallization induction time and spherulite growth of aliphatic-aromatic copolyesters are dramatically influenced by both the cooling rate and the shear imposed during the process.Keywords: induced crystallization, shear rate, aliphatic-aromatic copolyester, ecoflex
Procedia PDF Downloads 452
1089 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to the high accuracy and good performance, Convolutional Neural Networks (CNNs) especially have become a research hot spot in the past few years. However, the size of the networks becomes increasingly large scale due to the demands of the practical applications, which poses a significant challenge to construct a high-performance implementation of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low-power consumption of hardware devices. Therefore, it is particularly critical to choose a moderate computing platform for hardware acceleration of CNNs. This article aimed to survey the recent advance in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator based on FPGA under different devices and network models are overviewed, and the versions of Graphic Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we give a discussion on different perspectives of these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research. More helpfully, we give a prospect for future development of the FPGA-based accelerator.Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN
Procedia PDF Downloads 131
1088 Optimization of MAG Welding Process Parameters Using Taguchi Design Method on Dead Mild Steel
Authors: Tadele Tesfaw, Ajit Pal Singh, Abebaw Mekonnen Gezahegn
Abstract:
Welding is a basic manufacturing process for making components or assemblies. Recent welding economics research has focused on developing a reliable machinery database to ensure optimum production. Research on the welding of materials like steel is still critical and ongoing. Welding input parameters play a very significant role in determining the quality of a weld joint. The metal active gas (MAG) welding parameters are the most important factors affecting the quality, productivity and cost of welding in many industrial operations. The aim of this study is to investigate the optimization of process parameters for metal active gas welding of a 60x60x5 mm dead mild steel plate workpiece, using the Taguchi method to formulate the statistical experimental design on a semi-automatic welding machine. An experimental study was conducted at Bishoftu Automotive Industry, Bishoftu, Ethiopia. This study presents the influence of four welding parameters (control factors), namely welding voltage (volt), welding current (ampere), wire speed (m/min.) and gas (CO2) flow rate (lit./min.), each with three different levels, on the variability of the welding hardness. The objective functions have been chosen in relation to the parameters of MAG welding, i.e., the welding hardness of the final products. Nine experimental runs based on an L9 orthogonal array of the Taguchi method were performed. The orthogonal array, the signal-to-noise (S/N) ratio and analysis of variance (ANOVA) are employed to investigate the welding characteristics of the dead mild steel plate and to obtain the optimum level for every input parameter at a 95% confidence level. The optimal parameter setting was found to be a welding voltage of 22 volts, a welding current of 125 amperes, a wire speed of 2.15 m/min and a gas flow rate of 19 l/min, by using the Taguchi experimental design method within the constraints of the production process. Finally, six confirmation welds have been carried out to compare the existing values; the agreement of the predicted values with the experimental values confirms the method's effectiveness in the analysis of welding hardness (quality) in the final products. It is found that the welding current has a major influence on the quality of the welded joints. The experimental result for the optimum setting gave a better welding hardness than the initial setting. This study is valuable for different materials and thickness variations of welding plate for Ethiopian industries. Keywords: weld quality, metal active gas welding, dead mild steel plate, orthogonal array, analysis of variance, Taguchi method
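To make the S/N analysis step concrete, the sketch below computes larger-is-better signal-to-noise ratios for a hypothetical L9 hardness data set and reports the best level of each factor; the hardness values are invented for illustration and are not the Bishoftu experimental results.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: rows = runs, columns = factor levels (0..2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["voltage", "current", "wire speed", "gas flow"]

# Hypothetical weld hardness measured for each of the nine runs (HV)
hardness = np.array([152.0, 158.0, 149.0, 163.0, 171.0, 160.0, 155.0, 168.0, 166.0])

# Larger-is-better S/N ratio: -10*log10(mean(1/y^2)); one replicate per run here
sn = -10 * np.log10(1.0 / hardness**2)

for j, name in enumerate(factors):
    means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(means))
    print(f"{name:>10}: mean S/N per level = "
          + ", ".join(f"{m:.2f}" for m in means)
          + f" -> best level {best + 1}")
```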
Procedia PDF Downloads 486
1087 Study on the Process of Detumbling Space Target by Laser
Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming
Abstract:
The active removal of space debris and asteroid defense are important issues in human space activities. Both of them need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. So it is necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, for it is a contactless technique and can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. But the calculation of the control torque produced by the laser, which is very important in the detumbling operation, is not accurate enough, for the method used is only suitable for planar or regularly shaped targets, and the influence of irregular shape and of the size of the spot was not considered. In this paper, based on the triangulation reconstruction of the target surface, we propose a new method to calculate the impulse imparted to an irregularly shaped target under both covered irradiation and spot irradiation of the laser, and we verify its accuracy by theoretical formula calculation and an impulse measurement experiment. Then we use it to study the process of detumbling a cylinder and an asteroid by laser. The result shows that the new method is universally practical and has high precision; it will take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ laser pulse energy; the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in every particular case. Keywords: detumbling, laser ablation drive, space target, space debris removal
Procedia PDF Downloads 88
1086 Methodologies for Crack Initiation in Welded Joints Applied to Inspection Planning
Authors: Guang Zou, Kian Banisoleiman, Arturo González
Abstract:
Crack initiation and propagation threatens structural integrity of welded joints and normally inspections are assigned based on crack propagation models. However, the approach based on crack propagation models may not be applicable for some high-quality welded joints, because the initial flaws in them may be so small that it may take long time for the flaws to develop into a detectable size. This raises a concern regarding the inspection planning of high-quality welded joins, as there is no generally acceptable approach for modeling the whole fatigue process that includes the crack initiation period. In order to address the issue, this paper reviews treatment methods for crack initiation period and initial crack size in crack propagation models applied to inspection planning. Generally, there are four approaches, by: 1) Neglecting the crack initiation period and fitting a probabilistic distribution for initial crack size based on statistical data; 2) Extrapolating the crack propagation stage to a very small fictitious initial crack size, so that the whole fatigue process can be modeled by crack propagation models; 3) Assuming a fixed detectable initial crack size and fitting a probabilistic distribution for crack initiation time based on specimen tests; and, 4) Modeling the crack initiation and propagation stage separately using small crack growth theories and Paris law or similar models. The conclusion is that in view of trade-off between accuracy and computation efforts, calibration of a small fictitious initial crack size to S-N curves is the most efficient approach.Keywords: crack initiation, fatigue reliability, inspection planning, welded joints
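Since the argument above turns on how much the fitted fictitious initial crack size matters, the sketch below integrates the Paris law da/dN = C*(ΔK)^m for a through-crack (ΔK = Y*Δσ*sqrt(π*a)) from two candidate initial sizes; the material constants, stress range and crack sizes are generic illustrative values, not calibrated to any particular welded joint.

```python
import numpy as np

def paris_life(a0, af, d_sigma, C=1e-12, m=3.0, Y=1.0, steps=20000):
    """Cycles to grow a crack from a0 to af under the Paris law
    da/dN = C * (Y * d_sigma * sqrt(pi * a))**m, integrated numerically."""
    a = np.linspace(a0, af, steps)
    dadn = C * (Y * d_sigma * np.sqrt(np.pi * a)) ** m
    return np.trapz(1.0 / dadn, a)          # N = integral of da / (da/dN)

d_sigma = 80.0          # stress range, MPa
a_final = 0.02          # assumed failure crack size, m
for a_init in (0.0001, 0.001):              # 0.1 mm vs 1 mm fictitious initial crack
    n = paris_life(a_init, a_final, d_sigma)
    print(f"a0 = {a_init * 1e3:.1f} mm -> propagation life ≈ {n:,.0f} cycles")
```

The comparison illustrates why the calibration of the fictitious initial crack size dominates the predicted life: most of the cycles are consumed while the crack is still small.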
Procedia PDF Downloads 355
1085 Progress of Legislation in Post-Colonial, Post-Communist and Socialist Countries for the Intellectual Property Protection of the Autonomous Output of Artificial Intelligence
Authors: Ammar Younas
Abstract:
This paper is an attempt to explore the legal progression in procedural laws related to “intellectual property protection for the autonomous output of artificial intelligence” in Post-Colonial, Post-Communist and Socialist Countries. An in-depth study of legal progression in Pakistan (Common Law), Uzbekistan (Post-Soviet Civil Law) and China (Socialist Law) has been conducted. A holistic attempt has been made to explore that how the ideological context of the legal systems can impact, not only on substantive components but on the procedural components of the formal laws related to IP Protection of autonomous output of Artificial Intelligence. Moreover, we have tried to shed a light on the prospective IP laws and AI Policy in the countries, which are planning to incorporate the concept of “Digital Personality” in their legal systems. This paper will also address the question: “How far IP of autonomous output of AI can be protected with the introduction of “Non-Human Legal Personality” in legislation?” By using the examples of China, Pakistan and Uzbekistan, a case has been built to highlight the legal progression in General Provisions of Civil Law, Artificial Intelligence Policy of the country and Intellectual Property laws. We have used a range of multi-disciplinary concepts and examined them on the bases of three criteria: accuracy of legal/philosophical presumption, applying to the real time situations and testing on rational falsification tests. It has been observed that the procedural laws are designed in a way that they can be seen correlating with the ideological contexts of these countries.Keywords: intellectual property, artificial intelligence, digital personality, legal progression
Procedia PDF Downloads 123
1084 Determination of the Informativeness of Instrumental Research Methods in Assessing Risk Factors for the Development of Renal Dysfunction in Elderly Patients with Chronic Ischemic Heart Disease
Authors: Aksana N. Popel, Volha A. Sujayeva, Olga V. Kоshlataja, Irеna S. Karpava
Abstract:
Introduction: It is a known fact that cardiovascular pathology and its complications cause a more severe course and worse prognosis in patients with comorbid kidney pathology. Chronic kidney disease (CKD) is associated with inflammation, endothelial dysfunction, and increased activity of the sympathoadrenal system. This circumstance increases the risk of cardiovascular diseases and the progression of kidney pathology. The above determines the need to identify cardiorenal changes at early stages to reduce the risks of cardiovascular complications and the progression of CKD. Objective: To identify risk factors (RF) for the development of CKD in elderly patients with chronic ischemic heart disease (CIHD). Methods: The study included 64 patients (40 women and 24 men) with a mean age of 74.4±4.5 years with coronary heart disease, without a history of structural kidney pathology and CKD. All patients underwent transthoracic echocardiography (TTE) and kidney ultrasound (KU) using GE Vivid 9 equipment (GE HealthCare, USA), and cardiac computed tomography (CCT) using Siemens Somatom Force equipment (Siemens Healthineers AG, Germany) in 3 months and in 1 year. Data obtained were analyzed using multiple regression analysis and nonparametric Mann-Whitney test. Statistical analysis was performed using the STATISTICA 12.0 program (StatSoft Inc.). Results: Initially, CKD was not diagnosed in all patients. In 3 months, CKD was diagnosed: stage C1 had 11 people (18%), stage C2 had 4 people (6%), stage C3A had 11 people (18%), stage C3B had 2 people (3%). After 1 year, CKD was diagnosed: stage C1 had 22 people (35%), stage C2 had 5 people (8%), stage C3A had 17 people (27%), stage C3B had 10 people (15%). In 3 months, statistically significant (p<0.05) risk factors were: 1) according to TTE: mitral peak E-wave velocity (U=678, p=0.039), mitral E-velocity DT (U=514, p=0.0168), mitral peak A-wave velocity (U=682, p=0.013). In 1 year, statistically significant (p<0.05) risk factors were: according to TTE: left ventricular (LV) end-systolic volume in B-mode (U=134, p=0.006), LV end-diastolic volume in B-mode (U=177, p=0.04), LV ejection fraction in B-mode (U=135, p=0.006), left atrial volume (U=178, p=0.021), LV hypertrophy (U=294, p=0.04), mitral valve (MV) fibrosis (U=328, p=0.01); according CCT: epicardial fat thickness (EFT) on the right ventricle (U=8, p=0.015); according to KU: interlobar renal artery resistance index (RI) (U=224, p=0.02), segmental renal artery RI (U=409, p=0.016). Conclusions: Both TTE and KU are very informative methods to determine the additional risk factors of CKD development and progression. The most informative risk factors were LV global systolic and diastolic functions, LV and LA volumes. LV hypertrophy, MV fibrosis, interlobar renal artery and segmental renal artery RIs, EFT.Keywords: chronic kidney disease, ischemic heart disease, prognosis, risk factors
Procedia PDF Downloads 30
1083 A Sharp Interface Model for Simulating Seawater Intrusion in the Coastal Aquifer of Wadi Nador (Algeria)
Authors: Abdelkader Hachemi, Boualem Remini
Abstract:
Seawater intrusion is a significant challenge faced by coastal aquifers in the Mediterranean basin. This study aims to determine the position of the sharp interface between seawater and freshwater in the aquifer of Wadi Nador, located in the Wilaya of Tipaza, Algeria. A numerical areal sharp interface model using the finite element method is developed to investigate the spatial and temporal behavior of seawater intrusion. The aquifer is assumed to be homogeneous and isotropic. The simulation results are compared with geophysical prospection data obtained through electrical methods in 2011 to validate the model. The simulation results demonstrate a good agreement with the geophysical prospection data, confirming the accuracy of the sharp interface model. The position of the sharp interface in the aquifer is found to be approximately 1617 meters from the sea. Two scenarios are proposed to predict the interface position for the year 2024: one without pumping and the other with pumping. The results indicate a noticeable retreat of the sharp interface position in the first scenario, while a slight decline is observed in the second scenario. The findings of this study provide valuable insights into the dynamics of seawater intrusion in the Wadi Nador aquifer. The predicted changes in the sharp interface position highlight the potential impact of pumping activities on the aquifer's vulnerability to seawater intrusion. This study emphasizes the importance of implementing measures to manage and mitigate seawater intrusion in coastal aquifers. The sharp interface model developed in this research can serve as a valuable tool for assessing and monitoring the vulnerability of aquifers to seawater intrusion.Keywords: seawater intrusion, sharp interface, coastal aquifer, algeria
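As a first-order cross-check on a sharp-interface position (not the finite element model used in the study), the classical Ghyben-Herzberg relation places the interface at a depth of roughly rho_f/(rho_s - rho_f), about 40 times the freshwater head, below sea level. The sketch below evaluates it for hypothetical head values.

```python
rho_fresh, rho_sea = 1000.0, 1025.0        # densities, kg/m^3

def ghyben_herzberg_depth(head_m):
    """Depth of the freshwater/seawater interface below sea level (m)
    for a given freshwater head above sea level (m)."""
    return rho_fresh / (rho_sea - rho_fresh) * head_m

for h in (0.25, 0.5, 1.0, 2.0):            # hypothetical freshwater heads (m)
    z = ghyben_herzberg_depth(h)
    print(f"head = {h:4.2f} m -> interface ≈ {z:6.1f} m below sea level")
```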
Procedia PDF Downloads 124
1082 Applications of Out-of-Sequence Thrust Movement for Earthquake Mitigation: A Review
Authors: Rajkumar Ghosh
Abstract:
The study presents an overview of the many uses and approaches for estimating out-of-sequence thrust movement in earthquake mitigation. It investigates how knowing and forecasting thrust movement during seismic occurrences might contribute to effective earthquake mitigation measures. The review begins by discussing out-of-sequence thrust movement and its importance in earthquake mitigation strategies. It explores how typical techniques of estimating thrust movement may not capture the full complexity of seismic occurrences and emphasizes the benefits of including out-of-sequence data in the analysis. A thorough review of existing research and studies on out-of-sequence thrust movement estimates for earthquake mitigation is provided. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources such as GPS measurements, satellite imagery, and seismic recordings. The study also examines the use of out-of-sequence thrust movement estimates in earthquake mitigation measures. It investigates how precise calculation of thrust movement may help improve structural design, analyse infrastructure risk, and develop early warning systems. It highlights the potential advantages of using out-of-sequence data in these applications to improve the efficiency of earthquake mitigation techniques. The review then turns to the difficulties and limits of estimating out-of-sequence thrust movement for earthquake mitigation, addressing data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and increase the accuracy and reliability of out-of-sequence thrust movement estimates, the authors recommend topics for additional study and improvement. The study is a helpful resource for seismic monitoring and earthquake risk assessment researchers, engineers, and policymakers, supporting innovations in earthquake mitigation measures based on a better knowledge of thrust movement dynamics. Keywords: earthquake mitigation, out-of-sequence thrust, satellite imagery, seismic recordings, GPS measurements
Procedia PDF Downloads 90
1081 Automatic Detection and Filtering of Negative Emotion-Bearing Contents from Social Media in Amharic Using Sentiment Analysis and Deep Learning Methods
Authors: Derejaw Lake Melie, Alemu Kumlachew Tegegne
Abstract:
The increasing prevalence of social media in Ethiopia has exacerbated societal challenges by fostering the proliferation of negative emotional posts and comments. Illicit use of social media has further exacerbated divisions among the population. Addressing these issues through manual identification and aggregation of emotions from millions of users for swift decision-making poses significant challenges, particularly given the rapid growth of Amharic language usage on social platforms. Consequently, there is a critical need to develop an intelligent system capable of automatically detecting and categorizing negative emotional content into social, religious, and political categories while also filtering out toxic online content. This paper aims to leverage sentiment analysis techniques to achieve automatic detection and filtering of negative emotional content from Amharic social media texts, employing a comparative study of deep learning algorithms. The study utilized a dataset comprising 29,962 comments collected from social media platforms using comment exporter software. Data pre-processing techniques were applied to enhance data quality, followed by the implementation of deep learning methods for training, testing, and evaluation. The results showed that CNN, GRU, LSTM, and Bi-LSTM classification models achieved accuracies of 83%, 50%, 84%, and 86%, respectively. Among these models, Bi-LSTM demonstrated the highest accuracy of 86% in the experiment.Keywords: negative emotion, emotion detection, social media filtering sentiment analysis, deep learning.
Procedia PDF Downloads 40
1080 Design Study on a Contactless Material Feeding Device for Electro Conductive Workpieces
Authors: Oliver Commichau, Richard Krimm, Bernd-Arno Behrens
Abstract:
A growing demand on the production rate of modern presses leads to higher stroke rates. Commonly used material feeding devices for presses like grippers and roll-feeding systems can only achieve high stroke rates along with high gripper forces, to avoid stick-slip. These forces are limited by the sensibility of the surfaces of the workpieces. Stick-slip leads to scratches on the surface and false positioning of the workpiece. In this paper, a new contactless feeding device is presented, which develops higher feeding force without damaging the surface of the workpiece through gripping forces. It is based on the principle of the linear induction motor. A primary part creates a magnetic field and induces eddy currents in the electrically conductive material. A Lorentz-Force applies to the workpiece in feeding direction as a mutual reaction between the eddy-currents and the magnetic induction. In this study, the FEA model of this approach is shown. The calculation of this model was used to identify the influence of various design parameters on the performance of the feeder and thus showing the promising capabilities and limits of this technology. In order to validate the study, a prototype of the feeding device has been built. An experimental setup was used to measure pulling forces and placement accuracy of the experimental feeder in order to give an outlook of a potential industrial application of this approach.Keywords: conductive material, contactless feeding, linear induction, Lorentz-Force
Procedia PDF Downloads 183
1079 Assessing the NYC's Single-Family Housing Typology for Urban Heat Vulnerability and Occupants’ Health Risk under the Climate Change Emergency
Authors: Eleni Stefania Kalapoda
Abstract:
Recurring heat waves due to the global climate change emergency pose continuous risks to human health and urban resources. Local and state decision-makers incorporate Heat Vulnerability Indices (HVIs) to quantify and map the relative impact on human health in emergencies. These maps enable government officials to identify the highest-risk districts and to concentrate emergency planning efforts and available resources accordingly (e.g., to reevaluate the location and the number of heat-relief centers). Even though the framework of conducting an HVI is unique per municipality, its accuracy in assessing the heat risk is limited. To resolve this issue, varied housing-related metrics should be included. This paper quantifies and classifies NYC’s single detached housing typology within high-vulnerable NYC districts using detailed energy simulations and post-processing calculations. The results show that the variation in indoor heat risk depends significantly on the dwelling’s design/operation characteristics, concluding that low-ventilated dwellings are the most vulnerable ones. Also, it confirmed that when building-level determinants of exposure are excluded from the assessment, HVI fails to capture important components of heat vulnerability. Lastly, the overall vulnerability ratio of the housing units was calculated between 0.11 to 1.6 indoor heat degrees in terms of ventilation and shading capacity, insulation degree, and other building attributes.Keywords: heat vulnerability index, energy efficiency, urban heat, resiliency to heat, climate adaptation, climate mitigation, building energy
Procedia PDF Downloads 86
1078 New Off-Line SPE-GC-MS/MS Method for Determination of Mineral Oil Saturated Hydrocarbons/Mineral Oil Hydrocarbons in Animal Feed, Foods, Infant Formula and Vegetable Oils
Authors: Ovanes Chakoyan
Abstract:
MOH (mineral oil hydrocarbons), which consist of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH), are present in various products such as vegetable oils, animal feed, foods, and infant formula. Contamination of foods with mineral oil hydrocarbons is a concern, particularly with mineral oil aromatic hydrocarbons (MOAH), which exhibit carcinogenic, mutagenic, and hormone-disruptive effects. Identifying toxic substances among the many thousands comprising mineral oils in food samples is a difficult analytical challenge. A method based on an off-line solid phase extraction approach coupled with gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS) was developed for the determination of MOSH/MOAH in various products such as vegetable oils, animal feed, foods, and infant formula. A glass solid phase extraction cartridge was loaded with 7 g of activated silica gel impregnated with 10% silver nitrate for the removal of olefins and lipids. The MOSH and MOAH fractions were eluted with hexane and hexane:dichloromethane:toluene, respectively. Each eluate was concentrated to 50 µl in toluene and injected in splitless mode into the GC-MS/MS. The accuracy of the method was estimated as the recovery of spiked oil samples at 2.0, 15.0, and 30.0 mg kg-1, and recoveries varied from 85 to 105%. The method was applied to different types of samples (sunflower meal, chocolate chips, santa milk chocolate, biscuits, infant milk, cornflakes, refined sunflower oil, crude sunflower oil), detecting MOSH up to 56 mg/kg and MOAH up to 5 mg/kg. The limit of quantification (LOQ) of the proposed method was estimated at 0.5 mg/kg and 0.3 mg/kg for MOSH and MOAH, respectively. Keywords: MOSH, MOAH, GC-MS/MS, foods, solid phase extraction
Procedia PDF Downloads 96
1077 Analytical Study and Conservation Processes of a Wooden Coffin of Middle Kingdom, Ancient Egypt
Authors: Mohamed Ahmed Abd El Kader
Abstract:
This paper describes the conservation processes of an Ancient Egyptian wooden coffin dating back to the Middle Kingdom of ancient Egypt, using several scientific and analytical methods in order to provide a deeper understanding of the deterioration status and a greater awareness of how well preserved the object is. Visual observation and 2D programs, as well as Optical Microscopy (OM), Environmental Scanning Electron Microscopy (ESEM), X-ray Diffraction (XRD) and Fourier Transform Infrared Spectroscopy (FTIR), were used in our study. The identification of the wood species and of the composition of the pigments and previous restoration materials was carried out. The coffin had previously been conserved and stored in improper conditions, which led to its further deterioration; the surface of the lid was covered with dust, which obscured the decorations, so all necessary restoration work was promptly carried out as soon as the coffin was transferred from the display hall of the Egyptian Museum to the Wood Conservation Laboratory of the Grand Egyptian Museum-Conservation Center (GEM-CC). The analyses provided detailed information concerning the original materials and the materials added during the previous treatment interventions, which were considered when applying the conservation plan. Conservation procedures were applied with high accuracy, including cleaning and consolidation of the fragile painted layers, and the wooden boards forming the sides of the coffin were reassembled in their original positions. The materials and methods that were applied were extremely effective in stabilizing and reinforcing the coffin without harming the original materials, and the coffin was successfully conserved and is ready for display in the Grand Egyptian Museum (GEM). Keywords: coffin, middle kingdom, deterioration, 2d program
Procedia PDF Downloads 57
1076 Opinion Mining to Extract Community Emotions on Covid-19 Immunization Possible Side Effects
Authors: Yahya Almurtadha, Mukhtar Ghaleb, Ahmed M. Shamsan Saleh
Abstract:
The world witnessed a fierce attack from the Covid-19 virus, which affected public life socially, economically, psychologically and in terms of health. The world's governments tried to confront the pandemic by imposing a number of precautionary measures such as general closure, curfews and social distancing. Scientists have also made strenuous efforts to develop an effective vaccine to train the immune system to develop antibodies to combat the virus, thus reducing its symptoms and limiting its spread. Artificial intelligence, along with researchers and medical authorities, has accelerated the vaccine development process through big data processing and simulation. On the other hand, one of the most important negative impacts of Covid-19 was the state of anxiety and fear due to the spread of rumors through social media, which prompted governments to try to reassure the public with the available means. This study proposes using Sentiment Analysis (AKA Opinion Mining) and deep learning as efficient artificial intelligence techniques to retrieve the tweets of the public from Twitter and then analyze them automatically to extract their opinions, expressions and feelings, negative or positive, about the symptoms they may feel after vaccination. Sentiment analysis is characterized by its ability to access what the public posts on social media within a record time and at a lower cost than traditional means such as questionnaires and interviews, not to mention the accuracy of the information, as it comes from what the public expresses voluntarily. Keywords: deep learning, opinion mining, natural language processing, sentiment analysis
Procedia PDF Downloads 176
1075 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images
Authors: Sophia Shi
Abstract:
Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. AKI patient mortality rate is high in the ICU and is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health were collected. Using data features including serum creatinine among others, two numeric models using MIMIC and Beijing Hospital data were built, and with the hospital ultrasounds, an image-only model was built. Convolutional neural networks (CNN) were used, VGG and Resnet for numeric data and Resnet for image data, and they were combined into a hybrid model by concatenating feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after running through Softmax and additional code. The hybrid model successfully predicted AKI and the highest AUROC of the model was 0.953, achieving an accuracy of 90% and F1-score of 0.91. This model can be implemented into urgent clinical settings such as the ICU and aid doctors by assessing the risk of AKI shortly after the patient’s admission to the ICU, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG
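The central architectural idea above, concatenating the feature maps of a patient-record branch and an ultrasound-image branch before a shared classification head, can be sketched as follows in PyTorch; the branch sizes, layers and input shapes are illustrative assumptions, not the exact VGG/ResNet configuration of the study.

```python
import torch
import torch.nn as nn

class HybridAKINet(nn.Module):
    """Toy hybrid model: tabular patient-record branch + ultrasound-image branch,
    fused by concatenating their feature vectors, ending in a binary output."""
    def __init__(self, n_record_features=16):
        super().__init__()
        self.record_branch = nn.Sequential(
            nn.Linear(n_record_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                      # -> 32 features per image
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2),                  # two logits -> softmax for AKI / no AKI
        )

    def forward(self, record, image):
        fused = torch.cat([self.record_branch(record), self.image_branch(image)], dim=1)
        return self.head(fused)

# Smoke test with random tensors standing in for real data
model = HybridAKINet()
records = torch.randn(4, 16)               # batch of 4 patient-record vectors
images = torch.randn(4, 1, 64, 64)         # batch of 4 grayscale kidney ultrasounds
print(model(records, images).shape)        # torch.Size([4, 2])
```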
Procedia PDF Downloads 136
1074 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study
Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed
Abstract:
This paper compares the substructure and direct methods for soil-structure interaction (SSI) analysis in the time domain. In the substructure SSI method, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the structure-soil system. To explore the potential limitations of the substructure modeling process, a two-dimensional reinforced concrete frame structure is modeled using substructure and direct methods in this study. The results show discrepancies between the simulated responses of the substructure and the direct approaches. To isolate the effects of higher modal responses, the same study is repeated using a harmonic input motion, in which a similar discrepancy is still observed between the substructure and direct approaches. It is concluded that the main source of discrepancy between the substructure and direct SSI approaches is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall be developed. This refined impedance function is expected to significantly improve the simulation accuracy of the substructure approach for structural systems whose behavior is dominated by the fundamental mode response.
Keywords: direct approach, impedance function, soil-structure interaction, substructure approach
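A small numerical sketch of the approximation step described above follows, under the assumption that the impedance is available as a tabulated complex function of frequency; the impedance curve and the fundamental frequency used here are illustrative placeholders, not data from the paper.

```python
# Hedged numerical sketch: a tabulated, frequency-dependent impedance
# K(w) = k(w) + i*w*c(w) is collapsed to a constant spring k and dashpot c by
# evaluating it at the fundamental circular frequency of the structure-soil system.
import numpy as np

def equivalent_spring_dashpot(omega, K_complex, omega_fundamental):
    """Interpolate the complex impedance and evaluate it at the fundamental frequency."""
    k = np.interp(omega_fundamental, omega, K_complex.real)                       # spring [N/m]
    c = np.interp(omega_fundamental, omega, K_complex.imag) / omega_fundamental   # dashpot [N*s/m]
    return k, c

omega = np.linspace(0.1, 50.0, 200)                       # rad/s
K = (2.0e8 - 1.0e5 * omega**2) + 1j * (3.0e6 * omega)     # assumed placeholder impedance curve
k, c = equivalent_spring_dashpot(omega, K, omega_fundamental=12.0)
print(f"k = {k:.3e} N/m, c = {c:.3e} N*s/m")
```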
Procedia PDF Downloads 1221073 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis
Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha
Abstract:
Face recognition systems find many applications in surveillance and human-computer interaction systems. As these applications are of much importance and demand more accuracy, greater robustness is expected of the face recognition system, along with less computation time. In this paper, a hybrid approach for face recognition combining Gabor wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class scatter and maximize the inter-class scatter. The techniques used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt(2D)2LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier
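The sketch below illustrates this pipeline on a per-image basis, assuming OpenCV and scikit-learn are available. Ordinary vector LDA stands in for the 2D-LDA variants in the paper, and the filter-bank settings and downscaled image size are assumptions chosen for illustration.

```python
# Simplified sketch of the HGWLDA pipeline: downscale a grayscale face, convolve
# it with a Gabor filter bank, reduce dimensionality with LDA, and classify with
# k-NN. Vector LDA replaces the paper's 2D-LDA variants; parameters are assumed.
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(gray_face, wavelengths=(4, 8, 12), orientations=8):
    face = cv2.resize(gray_face, (48, 48))                  # reduce processing overhead
    feats = []
    for lam in wavelengths:                                 # filter scale (wavelength)
        for i in range(orientations):
            theta = i * np.pi / orientations                # filter orientation
            kern = cv2.getGaborKernel((21, 21), 4.0, theta, lam, 0.5)
            feats.append(cv2.filter2D(face, cv2.CV_32F, kern).ravel())
    return np.concatenate(feats)

def train_hgwlda(train_images, train_labels):
    X = np.array([gabor_features(img) for img in train_images])
    lda = LinearDiscriminantAnalysis().fit(X, train_labels)  # maximise between-class separation
    knn = KNeighborsClassifier(n_neighbors=1)
    knn.fit(lda.transform(X), train_labels)                  # match test features to training set
    return lda, knn
```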
Procedia PDF Downloads 470
1072 Evolution and Merging of Double-Diffusive Layers in a Vertically Stable Compositional Field
Authors: Ila Thakur, Atul Srivastava, Shyamprasad Karagadde
Abstract:
The phenomenon of double-diffusive convection is driven by density gradients created by two different components (e.g., temperature and concentration) having different molecular diffusivities. The evolution of horizontal double-diffusive layers (DDLs) is one of the outcomes of double-diffusive convection occurring in a laterally/vertically cooled rectangular cavity with a pre-existing vertically stable composition field. The present work mainly focuses on different characteristics of the formation and merging of double-diffusive layers under imposed lateral/vertical thermal gradients in a vertically stable compositional field. A CFD-based two-dimensional Fluent model has been developed to investigate the aforesaid phenomena. The configuration with vertical thermal gradients shows the evolution and merging of DDLs, where elements from the same horizontal plane move vertically and mix with their surroundings, creating a horizontal layer. In the configuration with lateral thermal gradients, a specially oriented convective roll was found inside each DDL, and each roll was driven by the competing density changes due to the pre-existing composition field and the imposed thermal field. When the thermal boundary layer near the vertical wall penetrates the salinity interface, it can disrupt the compositional interface and lead to layer merging. Different analytical scales were quantified and compared for both configurations. Various combinations of solutal and thermal Rayleigh numbers were investigated to obtain three different regimes, namely, the stagnant regime, the layered regime, and the unicellular regime. For a particular solutal Rayleigh number, a layered structure can originate only for a range of thermal Rayleigh numbers. Lower thermal Rayleigh numbers correspond to a diffusion-dominated stagnant regime, while very high thermal Rayleigh numbers correspond to a unicellular regime with high convective mixing. Plots identifying these three regimes and the number, thickness, and time of existence of the DDLs have been studied and presented. For a given solutal Rayleigh number, an increase in the thermal Rayleigh number increases the width but decreases both the number and the time of existence of DDLs in the fluid domain. Sudden peaks in the velocity and heat transfer coefficient have also been observed and discussed at the time of merging. The present study is expected to be useful in correlating double-diffusive convection in many large-scale applications, including oceanography, metallurgy, and geology. The model has also been developed for a three-dimensional geometry, but the results were quite similar to those of the 2-D simulations.
Keywords: double diffusive layers, natural convection, Rayleigh number, thermal gradients, compositional gradients
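For reference, a brief sketch of the two governing parameters used to map the regimes, the thermal and solutal Rayleigh numbers, follows. The fluid properties and cavity height in the example are illustrative values only, not the paper's cases, and the regime boundaries themselves are what the simulations determine.

```python
# Hedged helper: standard definitions of the thermal and solutal Rayleigh numbers.
# Example property values are illustrative, not taken from the study.
def thermal_rayleigh(g, beta_T, dT, H, nu, alpha):
    """Ra_T = g * beta_T * dT * H^3 / (nu * alpha)."""
    return g * beta_T * dT * H**3 / (nu * alpha)

def solutal_rayleigh(g, beta_S, dS, H, nu, D):
    """Ra_S = g * beta_S * dS * H^3 / (nu * D)."""
    return g * beta_S * dS * H**3 / (nu * D)

# example: water-like properties in a 5 cm tall cavity (illustrative numbers)
Ra_T = thermal_rayleigh(9.81, 2.1e-4, 5.0, 0.05, 1.0e-6, 1.4e-7)
Ra_S = solutal_rayleigh(9.81, 7.6e-4, 0.5, 0.05, 1.0e-6, 1.4e-9)
print(f"Ra_T = {Ra_T:.2e}, Ra_S = {Ra_S:.2e}")
```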
Procedia PDF Downloads 88
1071 The Concept of Accounting in Islamic Transactions
Authors: Ahmad Abdulkadir Ibrahim
Abstract:
The Islamic law of transactions laid down the methods and instruments of accounting, and this study analyzes its basic assumptions in the modern world. There is a need to examine the implications of accounting initiatives in the Muslim world and to outline the important characteristics of Islamic accounting, including how Islamic accounting resolves the problem of measuring the cost of Murabaha goods in the case of exchange rate variation. The research discusses an analytical approach to the Islamic accounting concept as well as elaborating the jurisprudential matter and practical aspects of accounting in Islamic financial transactions. It also aims to alert practitioners of accounting in the Islamic world to the concept of accounting in Islamic jurisprudence and its historical development. The methodology adopted in this research is qualitative, through consultation of the relevant literature, focusing on a thematic study of the subject matter, followed by an analysis and discussion of the contents of the materials used. It is concluded that Islamic accounting is unique in its norms, as it is characterized by fairness, accuracy in measuring tools, truthfulness, mutual trust, moderation in making a profit, and tolerance. It is also characterized by capacity and flexibility in terms of the tools and terminology used and invented by Islamic jurisprudence in the accounting system, which indicates its validity and consistency at any time and in any place. An important conclusion of the research also lies in the refutation of the popular idea that the Italian writer Luca Pacioli was the first to develop the basis of double-entry bookkeeping, given the proofs presented by Muslim scholars of critical accounting developments, which cannot be ignored. It concludes further that Islamic jurisprudence draws up an accounting system founded on a market that is free from usury, fraud, cheating, and unfair competition in all areas.
Keywords: accounting, Islamic accounting, Islamic transactions, Islamic jurisprudence, double entry, murabaha, characteristics
Procedia PDF Downloads 67