Search results for: dimensional accuracy
3847 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the strong influence of local geology on ground response. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L'Aquila, 2009) showed that the non-uniform damage pattern, particularly in soft fluvio-lacustrine deposits, is due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites of the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves exciting a soil profile using the horizontal component of motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, significant deviation between the two models also results from other influencing factors, such as the assumptions made in 1D site response and the lack of accurate values for the shear wave velocity and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of soil underscores the urgent need to study the dynamic characteristics of the soft soil deposit so as to derive site-specific design spectra for the Kathmandu valley, for building structures resilient to future damaging earthquakes.
Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
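To make the amplification idea concrete, here is a minimal numpy sketch of the textbook single-layer transfer function (Kramer, 1996) for a uniform damped soil layer over rigid bedrock; it is not the DEEPSOIL procedure, and the layer parameters are illustrative.

```python
import numpy as np

H, VS, XI = 50.0, 200.0, 0.05   # layer thickness (m), shear wave velocity (m/s), damping; illustrative

def layer_amplification(freq_hz):
    """Amplification of a uniform damped soil layer over rigid bedrock:
    |F(omega)| ~ 1 / sqrt(cos^2(kH) + (xi*kH)^2), with k = omega / Vs."""
    kH = 2.0 * np.pi * np.asarray(freq_hz) * H / VS
    return 1.0 / np.sqrt(np.cos(kH) ** 2 + (XI * kH) ** 2)

freqs = np.linspace(0.1, 10.0, 2000)
amp = layer_amplification(freqs)
print(f"peak amplification {amp.max():.1f} near {freqs[amp.argmax()]:.2f} Hz "
      f"(theory: f0 = Vs/4H = {VS / (4 * H):.2f} Hz)")
```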
Procedia PDF Downloads 297
3846 BERT-Based Chinese Coreference Resolution
Authors: Li Xiaoge, Wang Chaodong
Abstract:
We introduce the first Chinese Coreference Resolution Model based on BERT (CCRM-BERT) and show that it significantly outperforms all previous work. The key idea is to consider features of the mention, such as part of speech, width of spans, and distance between spans, and the influence of each feature on the model is analyzed. The model computes mention embeddings that combine BERT with these features. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the Chinese OntoNotes benchmark.
Keywords: BERT, coreference resolution, deep learning, natural language processing
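The exact CCRM-BERT architecture is not given in the abstract; the sketch below shows the general span-ranking idea of combining BERT endpoint states with a learned span-width feature embedding. All names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class MentionEncoder(nn.Module):
    """Sketch of a span-ranking mention representation: BERT token states at
    the span endpoints concatenated with a learned span-width embedding.
    Illustrative only, not the authors' exact architecture."""
    def __init__(self, hidden=768, max_width=30, width_dim=20):
        super().__init__()
        self.width_emb = nn.Embedding(max_width, width_dim)
        self.score = nn.Linear(2 * hidden + width_dim, 1)

    def forward(self, bert_states, start, end):
        # bert_states: (seq_len, hidden) output of a BERT encoder
        width = torch.clamp(end - start, max=self.width_emb.num_embeddings - 1)
        span = torch.cat([bert_states[start], bert_states[end],
                          self.width_emb(width)], dim=-1)
        return self.score(span)   # mention score used for ranking

enc = MentionEncoder()
states = torch.randn(128, 768)   # stand-in for BERT output
s = enc(states, torch.tensor(10), torch.tensor(13))
print(s.shape)
```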
Procedia PDF Downloads 222
3845 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration
Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger
Abstract:
Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision. The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration
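The two evaluation metrics named above are standard; a minimal sketch on toy binary masks (stand-ins for liver segmentations) could look like this:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, truth):
    """Dice similarity coefficient for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff(pred_pts, truth_pts):
    """Symmetric Hausdorff distance between two surface point sets."""
    return max(directed_hausdorff(pred_pts, truth_pts)[0],
               directed_hausdorff(truth_pts, pred_pts)[0])

# toy masks standing in for liver segmentations
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[12:42, 12:42] = True
print(f"Dice = {dice(a, b):.3f}")
print(f"Hausdorff = {hausdorff(np.argwhere(a), np.argwhere(b)):.1f} px")
```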
Procedia PDF Downloads 56
3844 Establishing Multi-Leveled Computability as a Living-System Evolutionary Context
Authors: Ron Cottam, Nils Langloh, Willy Ranson, Roger Vounckx
Abstract:
We start by formally describing the requirements for environmental-reaction survival computation in a natural temporally-demanding medium, and develop this into a more general model of the evolutionary context as a computational machine. The effect of this development is to replace deterministic logic by a modified form which exhibits a continuous range of dimensional fractal diffuseness between the isolation of perfectly ordered localization and the extended communication associated with nonlocality as represented by pure causal chaos. We investigate the appearance of life and consciousness in the derived general model, and propose a representation of Nature within which all localizations have the character of quasi-quantal entities. We compare our conclusions with Heisenberg’s uncertainty principle and nonlocal teleportation, and maintain that computability is the principal influence on evolution in the model we propose.
Keywords: computability, evolution, life, localization, modeling, nonlocality
Procedia PDF Downloads 400
3843 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows
Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham
Abstract:
In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10%, and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis
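As a sketch of the machine-learning side, the following trains an off-the-shelf classifier on synthetic stand-ins for the automated-milking-system indicators listed above (yield, conductivity, fat, protein, lactose, blood, flow rate); the data, model choice, and feature values are illustrative, not those of the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for per-milking AMS sensor readings:
# (yield, electrical conductivity, fat, protein, lactose, blood, flow rate)
rng = np.random.default_rng(0)
X_healthy = rng.normal([30, 4.5, 4.0, 3.4, 4.8, 0.01, 2.5], 0.5, (500, 7))
X_mastitic = rng.normal([24, 6.0, 3.6, 3.2, 4.4, 0.05, 1.9], 0.5, (500, 7))
X = np.vstack([X_healthy, X_mastitic])
y = np.array([0] * 500 + [1] * 500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")
```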
Procedia PDF Downloads 71
3842 Experimental Study of the Antibacterial Activity and Modeling of Non-Isothermal Crystallization Kinetics of Sintered Seashell Reinforced Poly(Lactic Acid) and Poly(Butylene Succinate) Biocomposites Planned for 3D Printing
Authors: Mohammed S. Razali, Kamel Khimeche, Dahah Hichem, Ammar Boudjellal, Djamel E. Kaderi, Nourddine Ramdani
Abstract:
The use of additive manufacturing technologies has revolutionized various aspects of our daily lives. In particular, 3D printing has greatly advanced biomedical applications. While fused filament fabrication (FFF) technologies have made it easy to produce or prototype various medical devices, it is crucial to minimize the risk of contamination. New materials with antibacterial properties, such as those containing compounded silver nanoparticles, have emerged on the market. In a previous study, we prepared a new sintered seashell filler (SSh) from bio-based seashells found along the Mediterranean coast using a suitable heat treatment process. We then prepared a series of polylactic acid (PLA) and polybutylene succinate (PBS) biocomposites filled with these SSh particles using a melt mixing technique with a twin-screw extruder, to use them as feedstock filaments for 3D printing. The study consisted of two parts: evaluating the antibacterial activity of the newly prepared PLA and PBS biocomposites reinforced with sintered seashell in the first part, and experimental and modeling analysis of the non-isothermal crystallization kinetics of these biocomposites in the second part. In the first part, the bactericidal activity of the biocomposites was examined against three different bacteria: Gram-negative bacteria (E. coli and Pseudomonas aeruginosa) and Gram-positive bacteria (Staphylococcus aureus). The PLA-based biocomposite containing 20 wt.% of SSh particles exhibited inhibition zones with radial diameters of 8 mm and 6 mm against E. coli and P. aeruginosa, respectively, while no bactericidal activity was observed against Staphylococcus aureus. In the second part, the focus was on investigating the effect of the sintered seashell filler particles on the non-isothermal crystallization kinetics of PLA and PBS 3D-printing composite materials. The objective was to understand the impact of the filler particles on the crystallization mechanism of both PLA and PBS during the cooling of a melt-extruded filament in FFF, in order to manage the dimensional accuracy and mechanical properties of the final printed part. We conducted a non-isothermal melt crystallization kinetic study of a series of PLA-SSh and PBS-SSh composites using differential scanning calorimetry at various cooling rates. We analyzed the obtained kinetic data using different crystallization kinetic models, such as the modified Avrami, Ozawa, and Mo methods. In dynamic mode, which describes relative crystallinity as a function of temperature, the crystallization half-time (t1/2) of neat PLA decreased from 17 min to 7.3 min for PLA + 5 wt.% SSh, and the t1/2 of virgin PBS was reduced from 3.5 min to 2.8 min for the composite containing 5 wt.% of SSh. We found that the SSh particles coated with stearic acid acted as nucleating agents and showed nucleation activity, as observed through polarized optical microscopy. Moreover, we evaluated the effective energy barrier of the non-isothermal crystallization process using the isoconversional methods of Flynn-Wall-Ozawa (F-W-O) and Kissinger-Akahira-Sunose (K-A-S). The study provides significant insights into the crystallization behavior of PLA and PBS biocomposites.
Keywords: Avrami model, bio-based reinforcement, DSC, Gram-negative bacteria, Gram-positive bacteria, isoconversional methods, non-isothermal crystallization kinetics, poly(butylene succinate), poly(lactic acid), antibacterial activity
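The modified Avrami analysis mentioned above can be sketched directly: fit X(t) = 1 - exp(-k t^n) to relative-crystallinity data and read off t1/2 = (ln 2 / k)^(1/n). The data below are synthetic, not the paper's DSC measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    """Relative crystallinity X(t) = 1 - exp(-k * t^n) (Avrami model)."""
    return 1.0 - np.exp(-k * t ** n)

# Illustrative DSC-style data: relative crystallinity vs time (min).
t = np.linspace(0.1, 12.0, 60)
X = avrami(t, k=0.02, n=2.5) + np.random.default_rng(1).normal(0, 0.01, t.size)

(k, n), _ = curve_fit(avrami, t, X, p0=[0.01, 2.0])
t_half = (np.log(2.0) / k) ** (1.0 / n)   # crystallization half-time
print(f"k = {k:.4f}, n = {n:.2f}, t1/2 = {t_half:.2f} min")
```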
Procedia PDF Downloads 87
3841 A Source Point Distribution Scheme for Wave-Body Interaction Problem
Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing
Abstract:
A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce numerical computation cost. Associated with this, a new treatment is given to the intersection point. Results from the present scheme are in good agreement with traditional numerical results and measurements.
Keywords: source point distribution, panel method, Rankine source, desingularized algorithm
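A toy building block for such a desingularized scheme is the influence matrix of 2D Rankine sources raised a small distance above the collocation points; the geometry and offset below are illustrative.

```python
import numpy as np

def rankine_potential(field_pts, source_pts):
    """Influence matrix of 2D Rankine sources: phi_ij = ln|x_i - s_j| / (2*pi)."""
    dx = field_pts[:, None, 0] - source_pts[None, :, 0]
    dy = field_pts[:, None, 1] - source_pts[None, :, 1]
    return np.log(np.hypot(dx, dy)) / (2.0 * np.pi)

# sources raised a small distance above the collocation points (desingularized)
x = np.linspace(-1.0, 1.0, 20)
colloc = np.column_stack([x, np.zeros_like(x)])          # calm water surface y = 0
sources = np.column_stack([x, 0.05 * np.ones_like(x)])   # offset by 0.05
A = rankine_potential(colloc, sources)
print(A.shape)  # (20, 20) influence matrix; no singular self-terms to treat
```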
Procedia PDF Downloads 366
3840 Discharge Estimation in a Two Flow Braided Channel Based on Energy Concept
Authors: Amiya Kumar Pati, Spandan Sahu, Kishanjit Kumar Khatua
Abstract:
Rivers, our main source of water, are a form of open channel flow, and flow in open channels presents many complex phenomena that need to be tackled, such as critical flow conditions, boundary shear stress, and depth-averaged velocity. The development of society depends, more or less, upon the flow of rivers. Rivers are major sources of sediments and of specific ingredients that are essential for human beings. A river flow consisting of small and shallow channels sometimes divides and recombines numerous times because of slow water flow or built-up sediments. The pattern formed during this process resembles the strands of a braid. Braided streams form where the sediment load is so heavy that some of the sediments are deposited as shifting islands. Braided rivers often exist near mountainous regions and typically carry coarse-grained and heterogeneous sediments down a fairly steep gradient. In this paper, the apparent shear stress formulae were suitably modified, and the Energy Concept Method (ECM) was applied for the prediction of discharges at the junction of a two-flow braided compound channel. The Energy Concept Method had not previously been applied to estimating discharges in braided channels. The energy loss in the channels is analyzed based on mechanical analysis. The cross-section of the channel is divided into two sub-areas, namely the main channel below the bank-full level and the region above the bank-full level, for estimating the total discharge. The experimental data are compared with a wide range of theoretical data available in the published literature to verify this model. The accuracy of this approach is also compared with the Divided Channel Method (DCM). From error analysis of this method, it is observed that the relative error is lower for data-sets with smooth floodplains than for those with rough floodplains. Comparisons with other models indicate that the present method has reasonable accuracy for engineering purposes.
Keywords: critical flow, energy concept, open channel flow, sediment, two-flow braided compound channel
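The ECM equations are not reproduced in the abstract, but the DCM baseline it is compared against is easy to state: each sub-area contributes a Manning discharge. A sketch with illustrative geometry (SI units):

```python
import numpy as np

def manning_discharge(area, wetted_perimeter, slope, n):
    """Sub-section discharge by Manning's equation (SI units):
    Q = (1/n) * A * R^(2/3) * S^(1/2), with hydraulic radius R = A / P."""
    R = area / wetted_perimeter
    return area * R ** (2.0 / 3.0) * np.sqrt(slope) / n

# Divided Channel Method baseline: main channel + floodplain, values illustrative
Q_main = manning_discharge(area=6.0, wetted_perimeter=8.0, slope=1e-3, n=0.012)
Q_flood = manning_discharge(area=2.5, wetted_perimeter=10.0, slope=1e-3, n=0.025)
print(f"total discharge ~ {Q_main + Q_flood:.2f} m^3/s")
```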
Procedia PDF Downloads 129
3839 Evolution of Cord Absorbed Dose during Larynx Cancer Radiotherapy, with 3D Treatment Planning and Tissue Equivalent Phantom
Authors: Mohammad Hassan Heidari, Amir Hossein Goodarzi, Majid Azarniush
Abstract:
Radiation doses to tissues and organs were measured using an anthropomorphic phantom as an equivalent to the human body. When high-energy X-rays are applied externally to treat laryngeal cancer, the absorbed dose at the laryngeal lumen is lower than the given dose because of the air space the beam must pass through before reaching the lesion. Especially in the case of high-energy X-rays, the loss of dose is considerable. Three-dimensional absorbed dose distributions were computed for high-energy photon radiation therapy of laryngeal and hypopharyngeal cancers, using a coaxial pair of opposing lateral beams in fixed positions. Treatment plans were obtained under various irradiation conditions.
Keywords: 3D treatment planning, anthropomorphic phantom, larynx cancer, radiotherapy
Procedia PDF Downloads 551
3838 Orthogonal Basis Extreme Learning Algorithm and Function Approximation
Abstract:
A new algorithm for single hidden layer feedforward neural networks (SLFN), the Orthogonal Basis Extreme Learning (OBEL) algorithm, is proposed, and its derivation is given in the paper. The algorithm can determine both the network parameters and the number of hidden neurons during training, while providing extremely fast learning speed. It offers a practical way to develop neural networks. Simulation results for function approximation show that the algorithm is effective and feasible, with good accuracy and adaptability.
Keywords: neural network, orthogonal basis extreme learning, function approximation
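The paper's exact orthogonal-basis construction is not reproduced here, but the following sketch shows the family of ideas: a random hidden layer whose outputs are orthogonalized by QR, with output weights solved by least squares.

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Sketch of an extreme learning machine: random input weights, hidden
    outputs orthogonalized by QR, output weights by least squares.  Loosely
    inspired by OBEL; not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # hidden layer outputs
    Q, R = np.linalg.qr(H)          # orthonormal basis for the hidden outputs
    beta = Q.T @ y                  # projection = least-squares coefficients
    return W, b, R, beta

def elm_predict(X, W, b, R, beta):
    H = np.tanh(X @ W + b)
    return H @ np.linalg.solve(R, beta)

X = np.linspace(-3, 3, 200)[:, None]
y = np.sinc(X[:, 0])                              # target function
params = elm_fit(X, y)
err = np.max(np.abs(elm_predict(X, *params) - y))
print(f"max approximation error: {err:.4f}")
```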
Procedia PDF Downloads 540
3837 Measurement of VIP Edge Conduction Using Vacuum Guarded Hot Plate
Authors: Bongsu Choi, Tae-Ho Song
Abstract:
Vacuum insulation panel (VIP) is a promising thermal insulator for buildings, refrigerators, LNG carriers, and so on. In general, it has a thermal conductivity of 2-4 mW/m·K. However, this is the conductivity measured at the center of the VIP. The total effective thermal conductivity of a VIP is larger than this value due to edge conduction through the envelope. In this paper, the edge conduction of the VIP is examined theoretically, numerically, and experimentally. To confirm the existence of the edge conduction, numerical analysis is performed for a simple two-dimensional VIP model, and a theoretical model is proposed to calculate the edge conductivity. The edge conductivity is also measured using a vacuum guarded hot plate, and the experiment is validated against the numerical analysis. The results show that the edge conductivity depends on the width of the panel and the thickness of the Al foil. To reduce edge conduction, it is recommended that the VIP be made as large as possible or with a thin Al-film envelope.
Keywords: envelope, edge conduction, thermal conductivity, vacuum insulation panel
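A common first-order model of the panel-size dependence noted above treats the edge as a linear thermal bridge with transmittance psi; the sketch below uses that relation. The paper's own theoretical model may differ, and all numbers are illustrative.

```python
def effective_conductivity(k_center, psi, thickness, width, length):
    """Effective VIP conductivity with an edge (thermal bridge) term, a
    common first-order relation: k_eff = k_cop + psi * d * P / A, where psi
    is the linear thermal transmittance of the edge, P the perimeter, A the
    panel area.  All values here are illustrative."""
    P = 2.0 * (width + length)
    A = width * length
    return k_center + psi * thickness * P / A

k_cop = 0.004                  # W/m-K, center-of-panel conductivity
for w in (0.25, 0.5, 1.0):     # larger panels reduce the edge penalty
    k = effective_conductivity(k_cop, psi=0.007, thickness=0.02,
                               width=w, length=w)
    print(f"{w:.2f} m panel: k_eff = {k * 1000:.2f} mW/m-K")
```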
Procedia PDF Downloads 408
3836 ANAC-id - Facial Recognition to Detect Fraud
Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira
Abstract:
This article aims to present a case study of the National Civil Aviation Agency (ANAC) in Brazil: ANAC-id. ANAC-id is an artificial intelligence algorithm developed for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, allowing potential inconsistencies to be identified. It combines the YOLO architecture with three Python libraries (face recognition, face comparison, and deepface), providing robust analysis with a high level of accuracy.
Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision
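Of the libraries named, face recognition (the face_recognition package) supports a compact identity check; a sketch with placeholder file paths:

```python
import face_recognition

# Minimal identity-check sketch using the face_recognition library;
# the file paths are placeholders, not ANAC's actual data.
reference = face_recognition.load_image_file("registered_id_photo.jpg")
candidate = face_recognition.load_image_file("submitted_photo.jpg")

ref_enc = face_recognition.face_encodings(reference)
cand_enc = face_recognition.face_encodings(candidate)

if not ref_enc or not cand_enc:
    print("no face detected: reject as a non-standard image")
else:
    match = face_recognition.compare_faces([ref_enc[0]], cand_enc[0])[0]
    dist = face_recognition.face_distance([ref_enc[0]], cand_enc[0])[0]
    print(f"match: {match}, embedding distance: {dist:.3f}")
```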
Procedia PDF Downloads 162
3835 Solution Growth of Titanium Nitride Nanowires for Implantation Application
Authors: Roaa Sait, Richard Cross
Abstract:
The synthesis and characterization of one-dimensional nanostructures such as nanowires have received considerable attention. Much effort has concentrated on TiN, especially in the biological field, due to its useful and unique properties there. Therefore, for the purpose of this project, the synthesis of titanium nitride (TiN) nanowires (NWs) will be presented. They will be synthesized by growing titanium dioxide (TiO2) NWs in an aqueous solution at low temperature under atmospheric pressure. The grown nanowires will then undergo a nitridation process, which results in the formation of TiN NWs. The structure, morphology, and composition of the grown nanowires will be characterized using Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM), X-ray Diffraction (XRD), and Cyclic Voltammetry (CV). Obtaining TiN NWs is a challenging task, since it has not been accomplished before, as far as we are aware. This may be because nitriding Ti NWs is difficult in terms of optimizing the experimental parameters.
Keywords: nanowires, dissolution-growth, nucleation, PECVD, deposition, spin coating, scanning electron microscopic analysis, cyclic voltammetry analysis
Procedia PDF Downloads 363
3834 Development of a Forecasting System and Reliable Sensors for River Bed Degradation and Bridge Pier Scouring
Authors: Fong-Zuo Lee, Jihn-Sung Lai, Yung-Bin Lin, Xiaoqin Liu, Kuo-Chun Chang, Zhi-Xian Yang, Wen-Dar Guo, Jian-Hao Hong
Abstract:
In recent years, climate change has been a major factor increasing rainfall intensity and the frequency of extreme rainfall. Increased rainfall intensity and extreme rainfall frequency raise the probability of flash floods with abundant sediment transport in a river basin. Floods caused by heavy rainfall may damage bridges, embankments, and hydraulic works, among other disasters. Foundation scouring of bridge piers, embankments, and spur dikes caused by floods has therefore been a severe problem worldwide. It is particularly severe in East Asian countries such as Taiwan and Japan, because these areas suffer typhoons, earthquakes, and flood events every year. Because river morphology results from the complex interaction between the flow patterns induced by hydraulic works and sediment transport, it is extremely difficult to develop a reliable and durable sensor to measure river bed degradation and bridge pier scouring. Therefore, an innovative scour monitoring sensor using vibration-based Micro-Electro-Mechanical Systems (MEMS) was developed. This vibration-based MEMS sensor was packaged inside a stainless-steel sphere fully filled with resin for protection, and it measures free vibration signals to detect scouring/deposition processes at the bridge pier. In addition, a user-friendly operational system is developed that includes a rainfall-runoff model, one-dimensional and two-dimensional numerical models, and an assessment of the applicability of sediment transport equations and local scour formulas for bridge piers. The operational system produces simulation results for flood events, including the elevation changes of river bed erosion near the specified bridge pier and the erosion depth around the piers. The system is built with easy operation and an integrated interface, so users can calibrate and verify the numerical models and display simulation results through the interface for comparison with the scour monitoring sensors. To forecast the erosion depth of the river bed and the main bridge pier in the study area, the system also ingests rainfall forecast data from the Taiwan Typhoon and Flood Research Institute. The results provide advance information for the management units responsible for river and bridge engineering.
Keywords: flash flood, river bed degradation, bridge pier scouring, friendly operational system
Procedia PDF Downloads 194
3833 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. EEG signals were recorded from 51 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, from which maximum PSD values were extracted as features for the model. There are three indexes for the balanced brain: index 3, index 4, and index 5. The EEG signals differ significantly across brain balancing index (BBI) levels. The alpha (8-13 Hz) and beta (13-30 Hz) bands were used as input signals for the classification model. The k-NN classification result is 88.46% accuracy. These results show that k-NN can be used to predict the brain balancing application.
Keywords: power spectral density, 3D EEG model, brain balancing, kNN
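The classification step maps naturally onto scikit-learn; the sketch below uses random stand-ins for the alpha/beta maximum-PSD features (the study reports 88.46% accuracy on its real features).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins for the study's features: maximum PSD values in the
# alpha (8-13 Hz) and beta (13-30 Hz) bands per subject, labels = BBI 3/4/5.
rng = np.random.default_rng(42)
X = rng.normal(size=(51, 2))          # [max alpha PSD, max beta PSD]
y = rng.integers(3, 6, size=51)       # balancing index 3, 4, or 5

knn = KNeighborsClassifier(n_neighbors=5)
acc = cross_val_score(knn, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.3f}")
```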
Procedia PDF Downloads 490
3832 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional unsupervised BIC-based approach is utilized in the first phase to detect speaker changes and train a neural network, while in the second phase, the trained parameters of the neural network are used to predict the next incoming audio stream. Using this approach, accuracy comparable to similar BIC-based approaches is achieved, with a significant improvement in computation time.
Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation
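The BIC-based change detection in the first phase typically rests on a delta-BIC statistic between adjacent feature segments; a standard single-Gaussian formulation (not necessarily the authors' exact variant) is sketched below.

```python
import numpy as np

def delta_bic(X, Y, lam=1.0):
    """Delta-BIC for a hypothesized speaker change between feature segments
    X and Y (rows = frames, columns = acoustic features).  Positive values
    favor a change point; lam weights the model-complexity penalty."""
    Z = np.vstack([X, Y])
    n, d = Z.shape

    def logdet_cov(data):
        return np.linalg.slogdet(np.cov(data, rowvar=False))[1]

    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n) * lam
    return (0.5 * n * logdet_cov(Z)
            - 0.5 * len(X) * logdet_cov(X)
            - 0.5 * len(Y) * logdet_cov(Y)
            - penalty)

rng = np.random.default_rng(0)
same = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(0, 1, (200, 12)))
diff = delta_bic(rng.normal(0, 1, (200, 12)), rng.normal(3, 2, (200, 12)))
print(f"same speaker: {same:.1f}, different speakers: {diff:.1f}")
```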
Procedia PDF Downloads 509
3831 Gimbal Structure for the Design of 3D Flywheel System
Authors: Cheng-En Tsai, Chung-Chun Hsiao, Fu-Yuan Chang, Liang-Lun Lan, Jia-Ying Tu
Abstract:
A new design of a three-dimensional (3D) flywheel system based on gimbal and gyro mechanics is proposed. The 3D flywheel device utilizes the rotational motion of three spherical shells and the conservation of angular momentum to achieve planar locomotion. Actuators mounted to the ring-shaped frames are installed within the system to drive the spherical shells to rotate, for the purpose of steering and stabilization. Similar to the design of a 2D flywheel system, it is expected that the spherical shells may function like a 'flyball' to store and supply mechanical energy; additionally, in comparison with typical single-wheel and spherical robots, the 3D flywheel can be used for developing omnidirectional robotic systems with better mobility. The Lagrangian method is applied to derive the equations of motion of the 3D flywheel system, and simulation studies are presented to verify the proposed design.
Keywords: gimbal, spherical robot, gyroscope, Lagrangian formulation, flyball
Procedia PDF Downloads 631
3830 Unsupervised Learning with Self-Organizing Maps for Named Entity Recognition in the CONLL2003 Dataset
Authors: Assel Jaxylykova, Alexnder Pak
Abstract:
This study utilized a Self-Organizing Map (SOM) for unsupervised learning on the CONLL-2003 dataset for Named Entity Recognition (NER). The process involved encoding words into 300-dimensional vectors using FastText. These vectors were input into a SOM grid, where training adjusted node weights to minimize distances. The SOM provided a topological representation for identifying and clustering named entities, demonstrating its efficacy without labeled examples. Results showed an F1-measure of 0.86, highlighting SOM's viability. Although some methods achieve higher F1 measures, SOM eliminates the need for labeled data, offering a scalable and efficient alternative. The SOM's ability to uncover hidden patterns provides insights that could enhance existing supervised methods. Further investigation into potential limitations and optimization strategies is suggested to maximize benefits.
Keywords: named entity recognition, natural language processing, self-organizing map, CONLL-2003, semantics
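The clustering step can be sketched with the minisom package; random vectors stand in for the 300-dimensional FastText embeddings used in the study.

```python
import numpy as np
from minisom import MiniSom

# Sketch of the clustering step: 300-d word vectors (FastText in the paper;
# random stand-ins here) mapped onto a SOM grid.  Tokens sharing a best-
# matching unit would be treated as one candidate entity cluster.
rng = np.random.default_rng(0)
tokens = ["Germany", "London", "Merkel", "Tuesday", "IBM", "beer"]
vectors = rng.normal(size=(len(tokens), 300))   # replace with FastText lookups

som = MiniSom(10, 10, 300, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(vectors, num_iteration=1000)

for tok, vec in zip(tokens, vectors):
    print(f"{tok:10s} -> best matching unit {som.winner(vec)}")
```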
Procedia PDF Downloads 54
3829 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations while adding no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in Graphics Library Shader Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate voxelizations that are tunnel-free under 26-connectivity. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
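A greatly simplified 2D analogue of scanline traversal conveys the core idea: march equidistant scanlines across a triangle and emit every cell the interior touches, visiting each cell once. The paper's GPU version works on 3D meshes in GLSL and adds gap detection between consecutive scanlines, which this toy omits.

```python
def scanline_cells(tri):
    """2D analogue of equidistant-scanline voxelization: for each horizontal
    scanline crossing the triangle, intersect it with the edges and emit the
    cells between the two crossings.  Simplified sketch only."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    cells = set()
    for y in range(int(min(y0, y1, y2)), int(max(y0, y1, y2)) + 1):
        xs = []
        for (ax, ay), (bx, by) in [((x0, y0), (x1, y1)),
                                   ((x1, y1), (x2, y2)),
                                   ((x2, y2), (x0, y0))]:
            if (ay <= y < by) or (by <= y < ay):   # edge crosses this scanline
                xs.append(ax + (y - ay) * (bx - ax) / (by - ay))
        if len(xs) == 2:
            for x in range(int(min(xs)), int(max(xs)) + 1):
                cells.add((x, y))                  # each cell emitted once
    return cells

print(len(scanline_cells([(0.0, 0.0), (10.0, 2.0), (4.0, 9.0)])))
```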
Procedia PDF Downloads 75
3828 Global Analysis in a Growth Economic Model with Perfect-Substitution Technologies
Authors: Paolo Russu
Abstract:
The purpose of the present paper is to highlight some features of an economic growth model with environmental negative externalities, giving rise to a three-dimensional dynamic system. In particular, we show that the economy, which is based on a perfect-substitution technology production function, exhibits neither indeterminacy nor a poverty trap. This implies that the equilibrium selected by the economy depends on its history (the initial values of the state variables) rather than on the expectations of economic agents. Moreover, we prove that the basins of attraction of locally stable equilibrium points may be very large, as they can extend up to the boundary of the system's phase space. The infinite-horizon optimal control problem has the purpose of maximizing the representative agent's instantaneous utility function, which depends on leisure and consumption.
Keywords: Hopf bifurcation, open-access natural resources, optimal control, perfect-substitution technologies, Poincaré compactification
Procedia PDF Downloads 175
3827 Optimal Relaxation Parameters for Obtaining Efficient Iterative Methods for the Solution of Electromagnetic Scattering Problems
Authors: Nadaniela Egidi, Pierluigi Maponi
Abstract:
The approximate solution of a time-harmonic electromagnetic scattering problem for inhomogeneous media is required in several application contexts, and its two-dimensional formulation is a Fredholm integral equation of the second kind. This integral equation provides a formulation for the direct scattering problem, but it also has to be solved several times in the numerical solution of the corresponding inverse scattering problem. The discretization of this Fredholm equation produces large and dense linear systems that are usually solved by iterative methods. To improve the efficiency of these iterative methods, we use symmetric SOR (SSOR) preconditioning, and we propose an algorithm for the evaluation of the associated relaxation parameter. We show the efficiency of the proposed algorithm in several numerical experiments, where we use two Krylov subspace methods, i.e., Bi-CGSTAB and GMRES.
Keywords: Fredholm integral equation, iterative method, preconditioning, scattering problem
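The relaxation-parameter algorithm itself is not given in the abstract, but the surrounding machinery is standard; below is a sketch of SSOR-preconditioned GMRES with SciPy on a stand-in sparse system, using an arbitrary omega where the paper would use the optimized value.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ssor_preconditioner(A, omega=1.2):
    """Apply M^-1 for the SSOR preconditioner
    M = (w/(2-w)) (D/w + L) D^-1 (D/w + U), with A = L + D + U."""
    D = sp.diags(A.diagonal())
    M1 = (D / omega + sp.tril(A, k=-1)).tocsr()   # lower triangular factor
    M2 = (D / omega + sp.triu(A, k=1)).tocsr()    # upper triangular factor
    scale = omega / (2.0 - omega)
    diag = A.diagonal()

    def apply(x):
        y = spla.spsolve_triangular(M1, x, lower=True)
        y = diag * y / scale
        return spla.spsolve_triangular(M2, y, lower=False)

    return spla.LinearOperator(A.shape, matvec=apply)

n = 200
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x, info = spla.gmres(A, b, M=ssor_preconditioner(A))
print("converged" if info == 0 else f"info={info}",
      "residual:", np.linalg.norm(A @ x - b))
```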
Procedia PDF Downloads 108
3826 A Corpus-Based Discourse Analysis of the Disappearance of MH370 in Malaysia and United Kingdom Newspapers: A Pilot Study
Authors: Theng Theng Ong
Abstract:
This pilot study adopts a corpus-based discourse analysis to explore the construction of the Malaysia Airlines MH370 tragedy in selected Malaysian and United Kingdom (UK) newspapers. Fairclough's three-dimensional model is adopted to support the corpus-based analysis. The analysis aims to determine the ways in which the MH370 tragedy is linguistically defined and constructed in terms of keywords and collocation. The study also seeks to identify the types of discourse presented in the news articles. In addition, the differences and similarities in the keywords, topics, and issues covered by the selected Malaysian and UK news media are examined.
Keywords: corpus, CDA, newspapers, airline tragedies
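Collocation extraction of the kind described is routinely done with NLTK; a sketch on a toy corpus (the study's actual newspaper corpora are not reproduced here):

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Toy corpus standing in for the newspaper articles.
text = ("search teams resumed the search for flight mh370 on monday as "
        "families of flight mh370 passengers demanded answers about the "
        "missing flight and the search operation in the southern ocean")
tokens = text.split()

finder = BigramCollocationFinder.from_words(tokens)
measures = BigramAssocMeasures()
# rank candidate collocates by log-likelihood ratio, a common choice in
# corpus-based keyword/collocation analysis
for pair in finder.nbest(measures.likelihood_ratio, 5):
    print(pair)
```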
Procedia PDF Downloads 303
3825 Calibration of a Large Standard Step Height with Low Sampled Coherence Scanning Interferometry
Authors: Dahi Ghareab Abdelsalam Ibrahim
Abstract:
Scanning interferometry is commonly used for measuring the three-dimensional profile of surfaces. Here, we used a scanning stage, calibrated with standard gauge blocks, to measure a standard step height of 200 μm. The stage precisely measures the envelope of interference at the platen and at the surface of the step height; from the difference between the two envelopes, we measure the step height of the sample. Experimental measurements show that the measured value matches the nominal value of the step height well. A light beam at 532 nm from a tungsten lamp is collimated and incident on the interferometer. By scanning, two envelopes are produced. The envelope at the platen surface and the envelope at the object surface were determined precisely by a purpose-written program, and the difference between them was then measured from the calibrated scanning stage. The difference was estimated to be 198 ± 2 μm.
Keywords: optical metrology, digital holography, interferometry, phase unwrapping
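The envelope-difference idea can be sketched with synthetic white-light interferograms: locate each coherence-envelope peak via the analytic signal and subtract the peak positions. All numbers are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_peak(z, signal):
    """Locate the coherence-envelope peak of an interferogram via the
    analytic-signal magnitude (Hilbert transform)."""
    env = np.abs(hilbert(signal - signal.mean()))
    return z[np.argmax(env)]

# Synthetic interferograms: fringe packets centered at the platen (z = 0)
# and at the top of a 200 um step; fringe period lambda/2 = 0.266 um.
z = np.linspace(-50.0, 250.0, 4000)   # scan position, um
packet = lambda z0: (np.exp(-((z - z0) / 8.0) ** 2)
                     * np.cos(2 * np.pi * z / 0.266))
step = envelope_peak(z, packet(200.0)) - envelope_peak(z, packet(0.0))
print(f"measured step height ~ {step:.1f} um")
```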
Procedia PDF Downloads 79
3824 Complex Dynamics of a Four Species Food-Web Model: An Analysis through Beddington-DeAngelis Functional Response in the Presence of Additional Food
Authors: Surbhi Rani, Sunita Gakkhar
Abstract:
A four-dimensional food web system consisting of two prey species for a generalist middle predator and a top predator is proposed and investigated. The middle predator preys on both prey species with a modified Holling type-II functional response. The food web model is found to be well-posed, bounded, and dissipative. The essential dynamical features of the proposed model are studied in terms of local stability. The survival of all four species is explored, and persistence conditions are established. Numerical simulations reveal persistence in the form of a chaotic attractor or a stable focus. The conclusion is that providing additional food to the middle predator may help to control the chaos in the food chain.
Keywords: predator-prey model, existence of equilibrium points, local stability, chaos, numerical simulations
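A generic four-species food web of this shape is easy to integrate numerically; the functional forms and parameters below are illustrative, not those fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def food_web(t, u, r1=1.0, r2=1.2, a1=1.0, a2=1.0, b=0.5,
             c=0.4, d=0.1, e=0.3, m=0.2):
    """Generic 4-species food web: two logistic prey (x1, x2), a generalist
    middle predator y with Holling type-II responses, and a top predator z.
    Parameter values and functional forms are illustrative only."""
    x1, x2, y, z = u
    f1 = a1 * x1 / (1 + b * x1)   # Holling II predation on prey 1
    f2 = a2 * x2 / (1 + b * x2)   # Holling II predation on prey 2
    g = c * y / (1 + e * y)       # top predator's response
    return [r1 * x1 * (1 - x1) - f1 * y,
            r2 * x2 * (1 - x2) - f2 * y,
            d * (f1 + f2) * y - g * z - m * y,
            0.3 * g * z - 0.1 * z]

sol = solve_ivp(food_web, (0, 500), [0.5, 0.4, 0.3, 0.2])
print("final state:", np.round(sol.y[:, -1], 3))
```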
Procedia PDF Downloads 114
3823 Feature Extraction Technique for Predicting the Antigenic Variants of the Influenza Virus
Authors: Majid Forghani, Michael Khachay
Abstract:
In genetics, the impact of neighboring amino acids on a target site is referred to as the nearest-neighbor effect, or simply the neighbor effect. In this paper, a new method called wavelet particle decomposition, representing the one-dimensional neighbor effect using wavelet packet decomposition, is proposed. The main idea lies in the known dependence of wavelet packet sub-bands on the location and order of neighboring samples. The method decomposes the value of a signal sample into small values, called particles, that represent a part of the neighbor-effect information. The results show that the information obtained from the particle decomposition can be used to create better model variables or features. As an example, the approach has been applied to improve the correlation of test and reference sequence distance with titer in the hemagglutination inhibition assay.
Keywords: antigenic variants, neighbor effect, wavelet packet, wavelet particle decomposition
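Wavelet packet decomposition of a numerically encoded sequence, the operation underlying the proposed particle decomposition, can be sketched with PyWavelets; the encoding and wavelet choice here are illustrative.

```python
import numpy as np
import pywt

# Decompose a 1-D numeric encoding of an amino-acid sequence into wavelet-
# packet sub-bands, whose coefficients depend on the location and order of
# neighboring samples.  Hydrophobicity-style values stand in for a real
# encoding; 'db1' is an arbitrary wavelet choice.
sequence = np.array([1.8, -4.5, 2.5, -3.5, -3.5, 2.8, -0.4, -3.2])

wp = pywt.WaveletPacket(data=sequence, wavelet="db1", maxlevel=2)
for node in wp.get_level(2, order="natural"):
    print(node.path, np.round(node.data, 3))   # per-sub-band "particles"
```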
Procedia PDF Downloads 162
3822 Quantum Algebra from Generalized Q-Algebra
Authors: Muna Tabuni
Abstract:
The paper contains an investigation of the notion of Q-algebras. A brief introduction to quantum mechanics is given; in such systems, the state is defined by a vector in a complex vector space H with a Hermitian inner product, where H may be finite- or infinite-dimensional. In quantum mechanics, operators must be Hermitian. This property is preserved by Lie algebra operators but not by those of quantum algebras. A Hilbert space H consists of a set of vectors and a set of scalars. A Lie group is a differentiable topological space with group laws given by differentiable maps, and the corresponding Lie algebra is introduced. Q-algebras are then defined, followed by a brief introduction to BCI-algebras and BCI-subalgebras, and to BCK/BCH-algebras; every BCI-algebra is a BCH-algebra. The notion of a homomorphism is introduced, and homomorphisms between two BCK-algebras are defined. The mathematical formulation of quantum mechanics can be expressed using the theory of unitary group representations. A generalization of Q-algebras is introduced and its properties are considered; the Q-quantum algebra is studied, and various examples are given.
Keywords: Q-algebras, BCI, BCK, BCH-algebra, quantum mechanics
Procedia PDF Downloads 205
3821 Analysis of Expression Data Using Unsupervised Techniques
Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe
Abstract:
This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying subtypes of cancer helps to improve the efficacy and reduce the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important, since genomic data are high-dimensional, with a large number of features compared to samples. Hierarchical clustering and k-means are often used in the analysis of gene expression data. Several cluster validation techniques are used to validate the clusters. Heatmaps are an effective external validation method that allows the identified classes to be compared with clinical variables and visually analyzed.
Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation
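The clustering step can be sketched with SciPy's hierarchical clustering on a synthetic (samples x genes) matrix standing in for a real expression dataset:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic "subtypes" of 20 samples each, 100 genes per sample.
rng = np.random.default_rng(0)
subtype_a = rng.normal(0.0, 1.0, size=(20, 100))
subtype_b = rng.normal(2.0, 1.0, size=(20, 100))
expression = np.vstack([subtype_a, subtype_b])

Z = linkage(expression, method="ward")          # agglomerative, Ward linkage
labels = fcluster(Z, t=2, criterion="maxclust") # cut the tree into 2 classes
print("cluster sizes:", np.bincount(labels)[1:])  # expect two groups of 20
```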
Procedia PDF Downloads 152
3820 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise
Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim
Abstract:
Previous work on leisure activities has categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g., active-passive, individual-group). To overcome these problems, this study proposed a Korean leisure activities matrix model that considers the multidimensional features of leisure activities, comprising three main factors and six sub-factors: (a) Active (physical, mental), (b) Relational (quantity, quality), and (c) Expert (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated these dimensions for a total of 78 leisure activities recently enjoyed by most Koreans (e.g., watching movies, taking a walk, watching media). The sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to people in their seventies. The study gathered 60 responses for each leisure activity, a total of 4,860 data points, which were used for statistical analysis. First, this study compared the fit of a 3-factor model (Activity, Relation, Expertise) with that of a 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model was the better fit for the data. This result indicates that enough dimensions of leisure activities (six in our study) should be taken into account to apprehend each leisure attribute specifically. In addition, the 78 leisure activities were cluster-analyzed using scores calculated from the 6-factor model, which resulted in 8 leisure activity groups. Cluster 1 (e.g., group sports, group musical activity) and Cluster 5 (e.g., individual sports) generally had higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g., SNS, shopping) and Cluster 6 (e.g., playing a lottery, taking a nap) had low scores on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g., machine operating, handwork/invention) required high expertise and mental activity but low physical activity. Cluster 4 indicated high mental activity and relational quantity despite low expertise. Cluster 7 (e.g., touring, joining festivals) required moderate degrees of physical activity and relation but low expertise. Lastly, Cluster 8 (e.g., meditation, information searching) was characterized by high mental activity. Even though the clusters in our study had a few similarities with preexisting taxonomies of leisure activities, they were clearly distinct. Unlike preexisting taxonomies, which had been created subjectively, we sorted the 78 leisure activities based on objective scores on the six dimensions. We also found that some leisure activities that used to belong to the same leisure group were included in different clusters (e.g., field ball sports, net sports) because of their different features. In other words, the results provide a different perspective on leisure activities research and can be helpful for understanding the varied characteristics of leisure participants.
Keywords: leisure, dimensional model, activity, relationship, expertise
Procedia PDF Downloads 313
3819 Adhesion of Sputtered Copper Thin Films Deposited on Flexible Substrates
Authors: Rwei-Ching Chang, Bo-Yu Su
Abstract:
The adhesion of copper thin films deposited on polyethylene terephthalate substrates by direct current sputtering is discussed in this work for different sputtering parameters. The effects of plasma treatment for 0, 5, and 10 minutes on the thin film properties are investigated first. Argon flow rates of 40, 50, and 60 standard cubic centimeters per minute (sccm), deposition powers of 30, 40, and 50 W, and film thicknesses of 100, 200, and 300 nm are also discussed. A 3-dimensional surface profilometer, a micro-scratch machine, and an optical microscope are used to characterize the thin film properties. The results show that increasing the plasma treatment time of the polyethylene terephthalate surface affects the roughness and critical load of the films; the critical load increases as the plasma treatment time increases. When the plasma treatment time was extended from 5 minutes to 10 minutes, the adhesion increased from 8.20 mN to 13.67 mN. When the argon flow rate was decreased from 60 sccm to 40 sccm, the adhesion increased from 8.27 mN to 13.67 mN. The adhesion was also increased under higher power: it rose from 13.67 mN to 25.07 mN as the power increased from 30 W to 50 W. The adhesion of the film increased from 13.67 mN to 21.41 mN as the film thickness increased from 100 nm to 300 nm. Comparing all the deposition parameters indicates that changes in power and thickness improve film adhesion the most.
Keywords: flexible substrate, sputtering, adhesion, copper thin film
Procedia PDF Downloads 133
3818 3D Object Model Reconstruction Based on Polywogs Wavelet Network Parametrization
Authors: Mohamed Othmani, Yassine Khlifi
Abstract:
This paper presents a technique for compact three-dimensional (3D) object model reconstruction using wavelet networks. It transforms input surface vertices into signals and uses wavelet network parameters for signal approximation. To this end, we use a wavelet network architecture based on several mother wavelet families. POLYnomials WindOwed with Gaussians (POLYWOG) wavelet families are used to maximize the probability of selecting the best wavelets, which ensures good generalization of the network. To achieve a better reconstruction, the network is trained over several iterations to optimize the wavelet network parameters until the error criterion is small enough. Experimental results show that the proposed technique can effectively reconstruct irregular 3D object models when using the optimized wavelet network parameters. We also show that an accurate reconstruction depends on the choice of the mother wavelets.
Keywords: 3D object, optimization, parametrization, POLYWOG wavelets, reconstruction, wavelet networks
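The linear part of such a wavelet network can be sketched directly: fix the translations and dilations of a POLYWOG-type mother wavelet (psi(x) = x exp(-x^2/2) is one common member of the family; the labeling is assumed) and solve the output weights by least squares. The paper also optimizes the nonlinear parameters, which this sketch omits.

```python
import numpy as np

def polywog1(x):
    """POLYWOG-type mother wavelet, psi(x) = x * exp(-x^2 / 2); one common
    member of the polynomial-windowed-Gaussian family (labeling assumed)."""
    return x * np.exp(-0.5 * x ** 2)

def wavelet_network_fit(t, signal, n_units=15):
    """Fit the output weights of a single-layer wavelet network by least
    squares, with fixed, evenly spaced translations and a fixed dilation."""
    centers = np.linspace(t.min(), t.max(), n_units)
    scale = (t.max() - t.min()) / n_units
    Phi = polywog1((t[:, None] - centers[None, :]) / scale)
    w, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
    return Phi @ w

t = np.linspace(-1, 1, 400)
signal = np.sin(4 * np.pi * t) * np.exp(-t ** 2)   # stand-in vertex signal
approx = wavelet_network_fit(t, signal)
print(f"RMS error: {np.sqrt(np.mean((approx - signal) ** 2)):.4f}")
```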
Procedia PDF Downloads 288