Search results for: computational accuracy
3592 Compensatory Neuro-Fuzzy Inference (CNFI) Controller for Bilateral Teleoperation
Abstract:
This paper presents a new adaptive neuro-fuzzy controller equipped with compensatory fuzzy control (CNFI) that not only adjusts membership functions but also optimizes the adaptive reasoning via a compensatory learning algorithm. The proposed control structure includes two CNFI controllers: one controls the master robot in force, and the other controls the slave robot in position. The experimental results show fairly high accuracy in position and force tracking under both free-space motion and hard-contact motion, which highlights the effectiveness of the proposed controllers.
Keywords: compensatory fuzzy, neuro-fuzzy, adaptive control, teleoperation
Procedia PDF Downloads 325
3591 Design Aspects for Developing a Microfluidics Diagnostics Device Used for Low-Cost Water Quality Monitoring
Authors: Wenyu Guo, Malachy O’Rourke, Mark Bowkett, Michael Gilchrist
Abstract:
Many devices for real-time monitoring of surface water have been developed in the past few years to provide early warning of pollution and so efficiently decrease environmental risk. One of the most common detection methodologies is a colorimetric process, in which a fixed-volume container is filled with target ions and reagents that combine into a colorimetric dye. The coloured product sensitively absorbs a specific-wavelength radiation beam, and its absorbance is proportional to the concentration of the fully developed product, indicating the concentration of target nutrients in the pre-mixed water samples. To achieve precise and rapid detection, channels with dimensions on the order of micrometers, i.e., microfluidic systems, have been developed and introduced into these diagnostic studies. Microfluidics technology greatly increases the surface-to-volume ratio and decreases sample/reagent consumption significantly. However, species transport in such miniaturized channels is limited by the low Reynolds numbers in these regimes. The flow is thus strongly laminar, and diffusion is the dominant mass transport process throughout the microfluidic channels. The objective of the present work has been to analyse the mixing effect and chemical kinetics in a stop-flow microfluidic device measuring nitrite concentrations in fresh water samples. In order to improve the temporal resolution of the nitrite microfluidic sensor, we have used computational fluid dynamics to investigate the influence that the effectiveness of the mixing process between the sample and reagent within a microfluidic device exerts on the time to completion of the resulting chemical reaction. This computational approach has been complemented by physical experiments.
The kinetics of the Griess reaction, involving the conversion of sulphanilic acid to a diazonium salt by reaction with nitrite in acidic solution, is set as a laminar finite-rate chemical reaction in the model. Initially, a methodology was developed to assess the degree of mixing of the sample and reagent within the device. This enabled different designs of the mixing channel to be compared, such as straight, square-wave and serpentine geometries. Thereafter, the time to completion of the Griess reaction within a straight mixing channel device was modeled and the reaction time validated with experimental data. Further simulations compared the reaction time to effective mixing within straight, square-wave and serpentine geometries. Results show that square-wave channels significantly improve the mixing effect and provide low standard deviations of the concentrations of nitrite and reagent, while for straight-channel microfluidic patterns the corresponding values are 2-3 orders of magnitude greater, and consequently the species are less efficiently mixed. This has allowed us to design novel channel patterns of micro-mixers with more effective mixing that can be used to detect and monitor levels of nutrients present in water samples, in particular nitrite. Future generations of water quality monitoring and diagnostic devices will readily exploit this technology.
Keywords: nitrite detection, computational fluid dynamics, chemical kinetics, mixing effect
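The mixing comparison above rests on the standard deviation of the concentration field across the channel. A minimal sketch of such a uniformity metric (the coefficient of variation, with purely illustrative sample values, not the paper's CFD data) could look like:

```python
import statistics

def mixing_cov(concentrations):
    """Coefficient of variation (std/mean) of concentration samples taken
    across a channel cross-section: lower values indicate better mixing."""
    return statistics.pstdev(concentrations) / statistics.fmean(concentrations)

# Hypothetical normalised-concentration samples (not the paper's CFD data):
square_wave = [0.49, 0.51, 0.50, 0.50, 0.49, 0.51]   # nearly uniform
straight    = [0.05, 0.20, 0.50, 0.80, 0.95, 0.50]   # strongly stratified

print(mixing_cov(square_wave) < mixing_cov(straight))  # True
```

A lower coefficient of variation for the square-wave samples reflects the 2-3 orders of magnitude difference in standard deviation reported in the abstract.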
Procedia PDF Downloads 203
3590 Real-time Rate and Rhythms Feedback Control System in Patients with Atrial Fibrillation
Authors: Mohammad A. Obeidat, Ayman M. Mansour
Abstract:
Capturing the dynamic behavior of the heart to improve control performance, enhance robustness, and support diagnosis is very important in establishing real-time models of the heart. Control techniques and strategies have been utilized to improve system cost, reliability, and estimation accuracy for different types of systems, such as biomedical and industrial systems, that require tuning of the input/output relation and/or monitoring. Simulations are performed to illustrate potential applications of the technology. In this research, a new control technology scheme is used to enhance the performance of the AF system and meet the design specifications.
Keywords: atrial fibrillation, dynamic behavior, closed loop, signal, filter
Procedia PDF Downloads 421
3589 Computational Simulations and Assessment of the Application of Non-Circular TAVI Devices
Authors: Jonathon Bailey, Neil Bressloff, Nick Curzen
Abstract:
Transcatheter Aortic Valve Implantation (TAVI) devices are stent-like frames with prosthetic leaflets on the inside, which are percutaneously implanted. The device, in a crimped state, is fed through the arteries to the aortic root, where the frame is opened through either self-expansion or balloon expansion, revealing the prosthetic valve within. The frequency at which TAVI is used to treat aortic stenosis is rapidly increasing, and in time TAVI is likely to become the favoured treatment over Surgical Valve Replacement (SVR). Mortality after TAVI has been associated with severe Paravalvular Aortic Regurgitation (PAR). PAR occurs when the frame of the TAVI device does not make an effective seal against the internal surface of the aortic root, allowing blood to flow backwards around the valve. PAR is common and has been reported to some degree in as many as 76% of cases. Severe PAR (grade 3 or 4) has been reported in approximately 17% of TAVI patients, increasing post-procedural mortality from 6.7% to 16.5%. TAVI devices, like SVR devices, are circular in cross-section, as the aortic root is often considered to be approximately circular in shape. In reality, however, the aortic root is often non-circular: the ascending aorta, aortic sinotubular junction, aortic annulus and left ventricular outflow tract have average ellipticity ratios of 1.07, 1.09, 1.29, and 1.49, respectively. An elliptical aortic root does not severely affect SVR, as the native leaflets are completely removed during the surgical procedure. However, an elliptical aortic root can inhibit the ability of circular Balloon-Expandable (BE) TAVI devices to conform to the interior of the aortic root wall, which increases the risk of PAR.
Self-Expanding (SE) TAVI devices are considered better at conforming to elliptical aortic roots; however, the valve leaflets were not designed for elliptical function, and the incidence of PAR is greater in SE devices than in BE devices (19.8% vs. 12.2%, respectively). If a patient's aortic root is too severely elliptical, they will not be suitable for TAVI, narrowing the treatment options to SVR. It therefore follows that, in order to increase the population who can undergo TAVI and reduce the risk associated with TAVI, non-circular devices should be developed. Computational simulations were employed to further advance our understanding of non-circular TAVI devices. Radial stiffness of the TAVI devices in multiple directions, frame bending stiffness and resistance to balloon-induced expansion are all computationally simulated. Finally, a simulation has been developed that demonstrates the expansion of TAVI devices into a non-circular patient-specific aortic root model in order to assess the alterations in deployment dynamics, PAR and the stresses induced in the aortic root.
Keywords: TAVI, TAVR, FEA, PAR, FEM
Procedia PDF Downloads 439
3588 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study Utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. In this study, we extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS, for a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish causality. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
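The feature-selection step described above, keeping the top 10% of features ranked by XGBoost importance, can be sketched as follows; the comorbidity names and scores are illustrative placeholders, not values from the UK BioBank study:

```python
def top_fraction_features(importances, fraction=0.10):
    """Keep the top `fraction` of features ranked by importance score,
    mirroring the abstract's use of XGBoost feature importance to retain
    the top 10% of comorbidity codes."""
    k = max(1, int(len(importances) * fraction))
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked[:k]]

# Illustrative comorbidity codes and importance scores (not study values):
scores = {"haemorrhoids": 0.31, "asthma": 0.22, "anxiety": 0.19,
          "diverticular_disease": 0.15, "depression": 0.08, "migraine": 0.07,
          "back_pain": 0.05, "insomnia": 0.04, "eczema": 0.03,
          "hypertension": 0.02}

print(top_fraction_features(scores))        # top 10% of 10 -> 1 feature
print(top_fraction_features(scores, 0.30))  # top 30% -> 3 features
```

The selected subset would then be fed to each classifier, as in the study's comparison of Gradient Boosting, Logistic Regression and XGBoost.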
Procedia PDF Downloads 119
3587 Size Reduction of Images Using Constraint Optimization Approach for Machine Communications
Authors: Chee Sun Won
Abstract:
This paper presents the size reduction of images for machine-to-machine communications. Here, the salient image regions to be preserved include the image patches around key-points such as corners and blobs. Based on a saliency map built from the key-points and their image patches, an axis-aligned grid-size optimization is proposed for the reduction of image size. To increase the size-reduction efficiency, the aspect-ratio constraint is relaxed in the constraint optimization framework. The proposed method yields higher matching accuracy after size reduction than conventional content-aware image size-reduction methods.
Keywords: image compression, image matching, key-point detection and description, machine-to-machine communication
Procedia PDF Downloads 419
3586 Numerical Study of a 6080HP Open Drip Proof (ODP) Motor
Authors: Feng-Hisang Lai
Abstract:
CFD (Computational Fluid Dynamics) is conducted to numerically study the flow and heat transfer features of a two-pole, 6,080HP, 60Hz, 3,150V open drip-proof (ODP) motor. The stator and rotor cores in this high-voltage induction motor are segmented with the use of spacers for cooling purposes, which leads to difficulties in meshing when the entire system is to be simulated. The system is therefore divided into four parts, meshed separately, and then combined using interfaces. The deviation between the CFD and experimental results in temperature and flow rate is less than 10%. The internal flow is further examined, and a final design is proposed to reduce the winding temperature by 10 degrees.
Keywords: CFD, open drip proof, induction motor, cooling
Procedia PDF Downloads 197
3585 In silico Model of Transamination Reaction Mechanism
Authors: Sang-Woo Han, Jong-Shik Shin
Abstract:
ω-Transaminase (ω-TA) is broadly used for synthesizing chiral amines with high enantiopurity. However, the reaction mechanism of ω-TA has not been well studied, in contrast to α-transaminases (α-TA) such as AspTA. Here, we propose an in silico model of the reaction mechanism of ω-TA. The modeling results showed large free energy gaps between the external aldimine and the quinonoid on deamination (or between the ketimine and the quinonoid on amination); abstraction of the Cα-H therefore appears to be the critical step determining the reaction rate in both the amination and deamination directions, which is consistent with previous research. Hyperconjugation was also observed in both the external aldimine and the ketimine, weakening the Cα-H bond and facilitating Cα-H abstraction.
Keywords: computational modeling, reaction intermediates, ω-transaminase, in silico model
Procedia PDF Downloads 547
3584 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength
Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong
Abstract:
This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as additions to the computer vision system, to further develop this non-destructive and near-instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). A previous study found that the ANN model coupled with the apparent electrical resistivity (ρ) can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three measurements were targeted for addition to the computer vision scheme: the apparent electrical resistivity of soil (ρ), measured using a set of four probes arranged in Wenner's array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification
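For reference, the apparent resistivity from a Wenner four-probe array follows the standard relation ρ = 2πa·V/I; a minimal sketch with illustrative readings (not the study's measurements):

```python
import math

def wenner_resistivity(spacing_m, voltage_v, current_a):
    """Apparent soil resistivity (ohm*m) from a Wenner four-probe array:
    rho = 2 * pi * a * V / I, where a is the equal probe spacing."""
    return 2 * math.pi * spacing_m * voltage_v / current_a

# Illustrative readings (not from the study):
rho = wenner_resistivity(spacing_m=0.1, voltage_v=2.0, current_a=0.05)
print(round(rho, 2))  # about 25.13 ohm*m
```

The resulting ρ is one of the three inputs analyzed alongside w and the CPT readings.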
Procedia PDF Downloads 240
3583 [Keynote Talk]: The Intoxicated Eyewitness: Effect of Alcohol Consumption on Identification Accuracy in Lineup
Authors: Vikas S. Minchekar
Abstract:
The eyewitness is a crucial source of evidence in the criminal judicial system. However, relying on the recollection of an eyewitness, especially an intoxicated eyewitness, is not always judicious and might lead to serious consequences. Alcohol-related crimes and criminal incidents in bars, nightclubs, and restaurants are increasing rapidly, and tackling such cases is very complicated for any investigating officer. The people in these incidents are affected by alcohol consumption; hence, their ability to identify the suspects or recall the events is impaired. Studies on the effects of alcohol consumption on motor activities such as driving and surgery have received much attention. However, the effect of alcohol intoxication on memory has received little attention from psychology, law, forensic and criminology scholars across the world; in the Indian context, virtually nothing has been published on this issue to date. This field experiment investigated the effect of alcohol consumption on identification accuracy in lineups. Forty adult social drinkers and twenty sober adults were randomly recruited for the study. The sober adults were assigned to a 'placebo' beverage group, while the social drinkers were divided into two groups, a 'low dose' of alcohol (0.2 g/kg) and a 'high dose' of alcohol (0.8 g/kg), such that their levels of blood-alcohol concentration (BAC) would differ. After administering the beverages to the placebo group and liquor to the social drinkers over a period of 40 to 50 minutes, a five-minute video clip of a mock crime was shown to all participants in groups of four to five members. After the video clip, subjects were given 10 portraits and asked to identify whether each person was involved in the mock crime or not. Moreover, they were also asked to describe the incident.
The subjects were given two opportunities to recognize the portraits and describe the events: the first immediately after the video clip, and the second 24 hours later. The obtained data were analyzed by one-way ANOVA and Scheffé's post hoc multiple comparison test. The results indicated that the 'high dose' group differed remarkably from the 'placebo' and 'low dose' groups, while the 'placebo' and 'low dose' groups performed equally. Subjects in the 'high dose' group recognized only 20% of the faces correctly, while subjects in the 'placebo' and 'low dose' groups recognized 90%. This study implies that intoxicated witnesses are less accurate in recognizing suspects and also less capable of describing the incidents where a crime has taken place. However, this study does not assert that intoxicated eyewitnesses are generally less trustworthy than their sober counterparts.
Keywords: intoxicated eyewitness, memory, social drinkers, lineups
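The one-way ANOVA used above can be sketched in a few lines; the per-group recognition scores below are illustrative, not the study's data:

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across the given groups."""
    all_obs = [x for g in groups for x in g]
    grand = statistics.fmean(all_obs)
    k, n = len(groups), len(all_obs)
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative recognition scores (out of 10) per group, not the study's data:
placebo, low_dose, high_dose = [9, 8, 9, 9], [9, 8, 8, 9], [2, 3, 2, 1]
f_stat = one_way_anova_f(placebo, low_dose, high_dose)
```

A large F statistic, as here, would then be followed by a post hoc test such as Scheffé's to locate which groups differ.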
Procedia PDF Downloads 268
3582 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable when building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested Neural Networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime in the visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study with higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
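The lambda/50 diffraction-limit criterion quoted above amounts to comparing the RMS wavefront error against one fiftieth of the wavelength; a minimal sketch with illustrative per-segment residuals (not the paper's measurements, and a 550 nm visible wavelength assumed):

```python
import math

def rms_wfe(residuals_nm):
    """Root-mean-square wavefront error from per-segment phasing residuals (nm)."""
    return math.sqrt(sum(r * r for r in residuals_nm) / len(residuals_nm))

def diffraction_limited(residuals_nm, wavelength_nm=550.0):
    """Check the lambda/50 criterion quoted in the abstract; 550 nm is an
    assumed visible wavelength, not a value from the paper."""
    return rms_wfe(residuals_nm) < wavelength_nm / 50.0

# Illustrative per-segment piston residuals in nm (not the paper's results):
print(diffraction_limited([2.0, -1.5, 1.0, -2.5]))   # True
print(diffraction_limited([40.0, -60.0, 80.0]))      # False
```

The ~2 nm RMS residual reported in the abstract comfortably satisfies this criterion at visible wavelengths.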
Procedia PDF Downloads 106
3581 Extraction of Urban Building Damage Using Spectral, Height and Corner Information
Authors: X. Wang
Abstract:
Timely and accurate information on urban building damage caused by an earthquake is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery containing abundant fine-scale information offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and by errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining these two results.
Finally, a post-processing procedure was adopted to refine the initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse, which is also applicable to relevant applications.
Keywords: building damage, corner, earthquake, height, very high resolution (VHR)
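The height-difference stage of the method can be sketched as a simple per-cell threshold test; the grid values and the 3 m threshold are illustrative assumptions, not the study's parameters:

```python
def height_damage_mask(pre_heights, post_heights, threshold_m=3.0):
    """Flag grid cells whose pre- to post-event height drop exceeds a
    threshold, mirroring the height-difference stage of the method.
    The threshold is an illustrative assumption."""
    return [pre - post > threshold_m
            for pre, post in zip(pre_heights, post_heights)]

# Illustrative per-cell building heights in metres (not the Haiti data):
pre  = [12.0, 9.5, 15.0, 6.0]
post = [11.8, 2.0, 14.6, 5.9]
print(height_damage_mask(pre, post))  # [False, True, False, False]
```

In the full method this mask would be combined with the corner-density difference to form the initial damage result.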
Procedia PDF Downloads 213
3580 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS which reflects relations between the primary tumor and the secondary distant metastases; 3) analyzing the scope of application of CoMPaS; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, which is described by determinate nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows calculation of the different growth periods of the primary tumor and the secondary distant metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for the secondary distant metastases; 3) the 'visible period' for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, whereas the others are based on additional statistical data.
The CoMPaS model and predictive software: a) fit clinical trial data; b) detect the different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period in which the secondary distant metastases appear; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the 'non-visible' and 'visible' growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of the secondary distant metastases. CoMPaS enables, for the first time, prediction of the 'whole natural history' of the primary tumor and the secondary distant metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor sizes. Summarizing: a) CoMPaS correctly describes the primary tumor growth of IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) it facilitates understanding of the appearance period and inception of the secondary distant metastases.
Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival
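Under the exponential growth model at the core of CoMPaS, the number of doublings between two volumes is the base-2 logarithm of their ratio. A minimal sketch of that relation (function names and values are illustrative, not CoMPaS itself):

```python
import math

def doublings_to_reach(v0_mm3, v_target_mm3):
    """Number of volume doublings to grow from v0 to v_target under the
    exponential model V(t) = V0 * 2 ** (t / Td)."""
    return math.log2(v_target_mm3 / v0_mm3)

def time_to_reach(v0_mm3, v_target_mm3, doubling_time_days):
    """Days to reach the target volume for a given doubling time Td."""
    return doublings_to_reach(v0_mm3, v_target_mm3) * doubling_time_days

# Illustrative values (not CoMPaS parameters): 1 mm^3 -> 8 mm^3 is 3 doublings.
print(doublings_to_reach(1.0, 8.0))    # 3.0
print(time_to_reach(1.0, 8.0, 100.0))  # 300.0
```

This is the arithmetic behind reporting the number of doublings for the 'non-visible' and 'visible' growth periods.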
Procedia PDF Downloads 341
3579 Sensitivity Analysis during the Optimization Process Using Genetic Algorithms
Authors: M. A. Rubio, A. Urquia
Abstract:
Genetic algorithms (GA) are applied to the solution of high-dimensional optimization problems. Additionally, sensitivity analysis (SA) is usually carried out to determine the effect of changes in the parameter values of the objective function on the optimal solutions. These two analyses (i.e., optimization and sensitivity analysis) are computationally intensive when applied to high-dimensional functions. The approach presented in this paper consists of performing the SA during the GA execution, by statistically analyzing the data obtained from running the GA. The advantage is that in this case the SA does not involve making additional evaluations of the objective function and, consequently, the proposed approach requires less computational effort than conducting optimization and SA in two consecutive steps.
Keywords: optimization, sensitivity, genetic algorithms, model calibration
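The idea of extracting SA from data the GA has already produced can be sketched by correlating each parameter with fitness over the GA's evaluation history; this is an illustrative reading of the approach, not the paper's exact statistical procedure:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def sensitivity_from_history(history):
    """Rank parameters by |correlation with fitness| using only the
    (params, fitness) records the GA already produced -- no additional
    objective-function evaluations are required."""
    fitness = [f for _, f in history]
    n_params = len(history[0][0])
    return {i: abs(pearson([p[i] for p, _ in history], fitness))
            for i in range(n_params)}

# Toy GA history: fitness tracks parameter 0 and largely ignores parameter 1.
history = [((0.1, 0.9), 0.12), ((0.4, 0.2), 0.41),
           ((0.7, 0.5), 0.69), ((0.9, 0.1), 0.88)]
ranking = sensitivity_from_history(history)
```

Because the statistics are computed from stored evaluations, no extra calls to the objective function are made, which is the stated advantage of the approach.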
Procedia PDF Downloads 436
3578 A Metaheuristic for the Layout and Scheduling Problem in a Job Shop Environment
Authors: Hernández Eva Selene, Reyna Mary Carmen, Rivera Héctor, Barragán Irving
Abstract:
We propose an approach that jointly addresses the layout of a facility and the scheduling of a sequence of jobs. In real production, these two problems are interrelated; however, they are treated separately in the literature. Our approach is an extension of the job shop problem with transportation delay, where the location of each machine is selected among possible sites. The model minimizes the makespan using the shortest processing time rule with two algorithms: the first considers all permutations for the location of machines, and the second uses a heuristic to select specific permutations, which reduces computational time. Several instances are solved and compared with the literature.
Keywords: layout problem, job shop scheduling problem, concurrent scheduling and layout problem, metaheuristic
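The shortest-processing-time dispatch step can be sketched as follows; this simplified single-machine version omits the transportation delays and machine-location choices central to the full model, and the job data are illustrative:

```python
def spt_schedule(jobs):
    """Sequence jobs on one machine by the shortest-processing-time rule
    and return the sequence with each job's completion time. Transport
    delays between machine sites (part of the paper's model) are omitted."""
    order = sorted(jobs, key=jobs.get)
    t, completions = 0, {}
    for name in order:
        t += jobs[name]
        completions[name] = t
    return order, completions

# Illustrative processing times (not an instance from the paper):
order, completions = spt_schedule({"J1": 5, "J2": 2, "J3": 8})
print(order)  # ['J2', 'J1', 'J3']
```

In the joint problem, this dispatch would be evaluated once per candidate machine-location permutation, with transport delays added between consecutive operations.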
Procedia PDF Downloads 610
3577 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules
Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel
Abstract:
The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. This research looks at how procedural modelling can be used to define such common characteristics and to understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. 'Procedural modelling' is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as in fields such as industrial design and architecture. Such a design method entails the creation of a parametric 'ruleset' that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool; Nicholas Webb and Andre Brown; and, most notably, Mark Burry. What these precedents all share is that the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects' geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then culminating all the findings into a single 3D procedural asset within the software 'Houdini'.
The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves the efficiency and adjustability of digital modelling compared to current practices in digital heritage reconstruction. It also gives greater accuracy to speculative design, as a lack of evidence for lost structures can be compensated for using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.
Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling
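The parametric-ruleset idea above can be sketched in a few lines: one rule, many variations. The curve and parameter names below are purely illustrative (a simplified profile, not a true two-centred Gothic arch, and not taken from the authors’ Houdini asset).

```python
import math

def pointed_arch(width, rise, segments=8):
    """Parametric profile of a pointed-arch-like curve: the left half rises from
    the springing point (-width/2, 0) to the apex (0, rise), then mirrors."""
    left = []
    for i in range(segments + 1):
        t = i / segments
        x = -width / 2 * (1 - t)
        y = rise * math.sin(math.pi * t / 2)  # simplified curve, not a true two-centred arc
        left.append((x, y))
    # Mirror the left half for the right half, excluding the duplicate apex.
    right = [(-x, y) for x, y in reversed(left[:-1])]
    return left + right

# One ruleset, many variations: adjusting the parameters regenerates the geometry.
standard = pointed_arch(width=2.0, rise=3.0)
narrow = pointed_arch(width=1.2, rise=3.5)
```

Adjusting `width` and `rise` regenerates the whole profile, which is the essential difference from modelling a single fixed geometry in traditional CAD.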
Procedia PDF Downloads 133
3576 Modeling False Statements in Texts
Authors: Francielle A. Vargas, Thiago A. S. Pardo
Abstract:
According to the standard philosophical definition, lying is saying something that you believe to be false with the intent to deceive. For deception detection, the FBI trains its agents in a technique named statement analysis, which attempts to detect deception based on parts of speech (i.e., linguistic style). This method is employed in interrogations, where suspects are first asked to make a written statement. In this poster, we model false statements using linguistic style. To achieve this, we methodically analyze linguistic features in a corpus of fake news in the Portuguese language. The results show that false statements present substantial lexical, syntactic, and semantic variations, as well as punctuation and emotion distinctions.
Keywords: deception detection, linguistics style, computational linguistics, natural language processing
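As a rough illustration of linguistics-style analysis, the sketch below counts a few shallow stylometric features of a statement. The feature set is hypothetical and chosen for illustration only; it is not the feature set analyzed in the paper’s Portuguese corpus.

```python
import re
from collections import Counter

# Illustrative features often discussed in deception-detection work
# (pronoun use, negations, punctuation); not the paper's actual features.
FIRST_PERSON = {"i", "me", "my", "mine", "we", "our"}

def stylometric_features(text):
    words = re.findall(r"[a-zA-Z']+", text.lower())
    counts = Counter(words)
    return {
        "n_words": len(words),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "first_person": sum(counts[w] for w in FIRST_PERSON),
        "exclamations": text.count("!"),
        "negations": counts["not"] + counts["never"] + counts["no"],
    }

feats = stylometric_features("I never said that! We did not do it.")
```

Feature vectors of this kind are what a classifier would then be trained on to separate true from false statements.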
Procedia PDF Downloads 218
3575 Predicting the Success of Bank Telemarketing Using Artificial Neural Network
Authors: Mokrane Selma
Abstract:
The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way in which consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an Artificial Neural Network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use bank marketing data comprising 41,188 phone calls. The ANN attains an accuracy of 98.93%, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network
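A minimal sketch of the kind of network involved might look as follows. The synthetic data, architecture, and training settings below are illustrative stand-ins, not the study’s model or its 41,188-call dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for call features and binary outcomes (deposit yes/no).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer, sigmoid activations, full-batch gradient descent
# on binary cross-entropy.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

losses, lr = [], 1.0
for _ in range(500):
    h, p = forward(X)
    pc = np.clip(p, 1e-9, 1 - 1e-9)  # avoid log(0) in the recorded loss
    losses.append(float(-np.mean(y * np.log(pc) + (1 - y) * np.log(1 - pc))))
    dz2 = (p - y) / len(X)           # sigmoid + cross-entropy gradient simplification
    dW2, db2_ = h.T @ dz2, dz2.sum(0)
    dz1 = dz2 @ W2.T * h * (1 - h)   # backpropagate through the hidden layer
    dW1, db1_ = X.T @ dz1, dz1.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2_
    W1 -= lr * dW1; b1 -= lr * db1_

accuracy = float(((forward(X)[1] > 0.5) == y).mean())
```

On real campaign data one would of course hold out a test set and compare against the conventional classifiers the abstract mentions.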
Procedia PDF Downloads 160
3574 Ophthalmic Ultrasound in the Diagnosis of Retinoblastoma
Authors: Abdulrahman Algaeed
Abstract:
Ophthalmic ultrasound is the easiest method for the early diagnosis of retinoblastoma after clinical examination. It can be done with ease and without sedation. King Khaled Eye Specialist Hospital is a tertiary care center where retinoblastoma patients are often seen and treated. The first modality used to rule out the disease is ophthalmic ultrasound. Classic retinoblastoma is easily diagnosed using the conventional 10 MHz ophthalmic ultrasound probe in the regular clinic setup: a retinal lesion with multiple, very highly reflective surfaces within the lesion, typical of calcium deposits. The use of standardized A-scan is very useful, where the internal reflectivity is classified as very high. Color Doppler is extremely useful as well to show the blood flow within the lesion(s). In conclusion, ophthalmic ultrasound should be the first tool used to diagnose retinoblastoma after clinical examination. The accuracy of the exam is very high.
Keywords: doppler, retinoblastoma, reflectivity, ultrasound
Procedia PDF Downloads 113
3573 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag situations: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural network (ANN) models were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods to compensate for missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (> 95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It was also shown that the technique used to compensate for missing values in the dataset and the implementation of time lags can have a significant influence on model performance and require further investigation. 
The best-performing models varied for each area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the best stable performance overall, with accuracies > 95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data-imputing methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
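The missing-value strategies and lag construction compared in the study can be illustrated with a minimal, dependency-free sketch; the quarterly series below is hypothetical.

```python
def backfill(series):
    """Fill None gaps with the next known value (e.g. quarterly -> monthly)."""
    out, nxt = list(series), None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = nxt
        else:
            nxt = out[i]
    return out

def interpolate(series):
    """Linearly fill None gaps; assumes the first and last values are known."""
    out, i = list(series), 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:
                j += 1
            lo, hi = out[i - 1], out[j]
            for k in range(i, j):
                out[k] = lo + (hi - lo) * (k - i + 1) / (j - i + 1)
            i = j
        i += 1
    return out

def lagged(series, lag):
    """Pair each target value with the feature value `lag` steps earlier."""
    return [(series[t - lag], series[t]) for t in range(lag, len(series))]

quarterly_gdp = [100, None, None, 106, None, None, 112]  # hypothetical values
```

With monthly targets, `lagged(filled_series, 12)` would produce the 12-month-lag feature/target pairs the study compares against the no-lag and 6-month-lag setups.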
Procedia PDF Downloads 103
3572 A Fast and Robust Protocol for Reconstruction and Re-Enactment of Historical Sites
Authors: Sanaa I. Abu Alasal, Madleen M. Esbeih, Eman R. Fayyad, Rami S. Gharaibeh, Mostafa Z. Ali, Ahmed A. Freewan, Monther M. Jamhawi
Abstract:
This research proposes a novel reconstruction protocol for restoring missing surfaces and low-quality edges and shapes in photos of artifacts at historical sites. The protocol starts with the extraction of a cloud of points. This extraction process is based on four subordinate algorithms, which differ in robustness and in the amount of resulting data. Moreover, they apply different, but complementary, levels of accuracy to related features and to the way they build a quality mesh. The performance of our proposed protocol is compared with other state-of-the-art algorithms and toolkits. The statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the resulting object files used to reconstruct the desired model.
Keywords: meshes, point clouds, surface reconstruction protocols, 3D reconstruction
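One common way to compare reconstructed point clouds, sketched below, is a symmetric chamfer-style distance. This is an illustrative metric for cloud-to-cloud comparison, not necessarily the statistic used in the protocol’s evaluation.

```python
import math

def chamfer(A, B):
    """Symmetric average nearest-neighbour distance between two point clouds
    (brute force; fine for small illustrative clouds)."""
    def one_way(P, Q):
        return sum(min(math.dist(p, q) for q in Q) for p in P) / len(P)
    return one_way(A, B) + one_way(B, A)

cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]          # reference extraction
noisy = [(0, 0, 0.1), (1, 0, 0), (0, 1.1, 0)]      # a degraded extraction
```

A lower chamfer value against a trusted reference would indicate a higher-quality extraction, which is the kind of comparison the statistical analysis in the abstract performs across algorithms.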
Procedia PDF Downloads 457
3571 Efficient Signcryption Scheme with Provable Security for Smart Card
Authors: Jayaprakash Kar, Daniyal M. Alghazzawi
Abstract:
The article proposes a novel construction of a signcryption scheme with provable security that is well suited to implementation on a smart card. It is secure in the random oracle model, and its security relies on the Decisional Bilinear Diffie-Hellman Problem. The proposed scheme is secure against adaptive chosen ciphertext attack (indistinguishability) and adaptive chosen message attack (unforgeability). It is also inspired by zero-knowledge proofs. The two most important security goals for smart cards are confidentiality and authenticity; the proposed scheme performs both functions in one logical step at low computational cost.
Keywords: random oracle, provable security, unforgeability, smart card
Procedia PDF Downloads 593
3570 Cement Mortar Lining as a Potential Source of Water Contamination
Authors: M. Zielina, W. Dabrowski, E. Radziszewska-Zielina
Abstract:
Several different cements have been tested to evaluate their potential to leach calcium, chromium, and aluminum ions in a soft water environment. The research allows different cements to be compared with respect to their potential risk of water contamination; this can be done only in the same environment. To reach results within reasonably short time intervals and to measure heavy metals with high accuracy, demineralized water was used. In this case the experimental conditions are far from water supply practice, but short-time experiments and measurably high concentrations of elements in the water solution are an important advantage. Moreover, leaching mechanisms can be recognized; the experiments reported here refer to this kind of cement evaluation.
Keywords: concrete corrosion, hydrogen sulfide, odors, reinforced concrete sewers, sewerage
Procedia PDF Downloads 209
3569 Concentrated Whey Protein Drink with Orange Flavor: Protein Modification and Formulation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The application of whey protein in the drink industry to enhance the nutritional value of products is important. However, the gelation of the protein during thermal treatment and shelf life places some limitations on its application. The main goal of this research is therefore the manufacture of a highly concentrated whey protein orange drink with an appropriate shelf life. To this end, whey protein was hydrolyzed from 5 to 30% (in 5% intervals, at six stages), and then the thermal stability of samples with a 10% protein concentration was tested under acidic conditions (T = 90 °C, pH = 4.2, 5 minutes) and neutral conditions (T = 120 °C, pH = 6.7, 20 minutes). Furthermore, to study the shelf life of the heat-treated samples over 4 months at 4 and 24 °C, time-sweep rheological tests were performed. Under neutral conditions, the 5 to 20% hydrolyzed samples showed gelation during thermal treatment, whereas under acidic conditions this happened only in the 5 to 10% hydrolyzed samples. This phenomenon could be related to the difference in hydrodynamic radius and zeta potential of samples with different levels of hydrolysis under acidic and neutral conditions. To study the gelation of the heat-resistant protein solutions during shelf life, time-sweep analyses were performed for 4 months at 7-day intervals. Crossover was observed for all heat-resistant neutral samples at both storage temperatures, while in the heat-resistant acidic samples with degrees of hydrolysis of 25 and 30% at 4 and 20 °C it was not seen. It could be concluded that these acidic samples were stable during heat treatment and 4 months of storage, which makes them a good choice for manufacturing high-protein drinks. The Scheffé polynomial model and numerical optimization were employed for modeling and optimizing the high-protein orange drink formula. The Scheffé model significantly predicted the overall acceptance index (p-value < 0.05) of the sensory analysis. 
The coefficient of determination (R²) of 0.94, the adjusted coefficient of determination (R²adj) of 0.90, the insignificance of the lack-of-fit test, and an F value of 64.21 showed the accuracy of the model. Moreover, the coefficient of variation (C.V.) was 6.8%, which suggested the replicability of the experimental data. The desirability function value achieved was 0.89, which indicates the high accuracy of the optimization. The optimum formulation was found to be: modified whey protein solution (65.30%), natural orange juice (33.50%), stevia sweetener (0.05%), orange peel oil (0.15%), and citric acid (1%). It is worth mentioning that this study produced an appropriate model for the application of whey protein in the drink industry without bitter flavor or gelation during heat treatment and shelf life.
Keywords: crossover, orange beverage, protein modification, optimization
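A second-order Scheffé polynomial for mixtures can be fitted by ordinary least squares, as the minimal sketch below shows for a hypothetical three-component mixture (the study’s formulation has five components; the data here are synthetic).

```python
import numpy as np

# Second-order Scheffé polynomial for a 3-component mixture (x1 + x2 + x3 = 1):
#   y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
def design_matrix(X):
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

rng = np.random.default_rng(1)
raw = rng.random((40, 3))
X = raw / raw.sum(axis=1, keepdims=True)   # proportions summing to 1
true_beta = np.array([5.0, 3.0, 1.0, 4.0, -2.0, 0.5])  # hypothetical coefficients
y = design_matrix(X) @ true_beta           # noise-free synthetic responses

beta_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
residuals = y - design_matrix(X) @ beta_hat
r2 = 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
```

On real sensory data the fit would be imperfect and the R², lack-of-fit, and C.V. diagnostics reported in the abstract would judge its adequacy.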
Procedia PDF Downloads 62
3568 A New Analytic Solution for the Heat Conduction with Time-Dependent Heat Transfer Coefficient
Authors: Te Wen Tu, Sen Yung Lee
Abstract:
An alternative approach is proposed to develop the analytic solution for one-dimensional heat conduction with one mixed-type boundary condition and a general time-dependent heat transfer coefficient. In this study, the physical meaning of the solution procedure is revealed. It is shown that the shifting function takes the physical meaning of the reciprocal of the Biot function at the initial time. Numerical results show the accuracy of this study. Compared with those given in the existing literature, the difference is less than 0.3%.
Keywords: analytic solution, heat transfer coefficient, shifting function method, time-dependent boundary condition
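The shifting-function idea can be outlined as follows; the symbols and the decomposition below are a generic sketch of the method, not the paper’s exact formulation.

```latex
% Split the temperature field into a transformed variable and a shifting term
% that absorbs the time-dependent boundary data f(t):
T(x,t) \;=\; v(x,t) \;+\; s(x)\, f(t)
```

Here $s(x)$ is chosen so that $v(x,t)$ satisfies homogeneous boundary conditions, after which $v$ can be obtained by a standard eigenfunction expansion; the abstract’s observation is that $s$ carries the physical meaning of the reciprocal of the Biot function at the initial time.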
Procedia PDF Downloads 432
3567 Deepfake Detection System through Collective Intelligence in Public Blockchain Environment
Authors: Mustafa Zemin
Abstract:
The increasing popularity of deepfake technology poses a growing threat to information integrity and security. This paper presents a deepfake detection system designed to leverage public blockchain and collective intelligence as solutions to address this issue. Utilizing smart contracts on the Ethereum blockchain ensures secure, decentralized media content verification, creating an auditable and tamper-resistant framework. The approach integrates concepts from electronic voting, allowing a network of participants to assess content authenticity collectively through consensus mechanisms. This decentralized, community-driven model enhances detection accuracy while preventing single points of failure. Experimental analysis demonstrates the system’s robustness, reliability, and scalability in deepfake detection, offering a sustainable approach to combat digital misinformation. The proposed solution advances deepfake detection capabilities and provides a framework for applying blockchain-based collective intelligence to other domains facing similar verification challenges, thereby contributing to the fight against digital misinformation in a secure, trustless environment. The limitations and challenges identified in this work can be addressed by enhancing user participation, particularly through more informed and conscious engagement. One potential avenue is to involve users in developing deep learning models, which could contribute to the voting system. However, for such participation to be incentivized, a reward mechanism must be implemented. A viable approach to this is through a credibility-based reward system, where users who actively participate in voting are compensated with tokens. This system would serve not only as a motivational factor but also as a mechanism for ensuring higher-quality participation over time. Each participant is assigned a Credibility Score, which is dynamically adjusted based on the accuracy of their votes. 
The credibility score increases when their decisions align with the majority consensus and decreases when their votes are incorrect. This incentivizes accurate decision-making and ensures that more reliable participants gain influence in the system. The credibility scores are designed to increase progressively for users with more correct votes. In contrast, penalties for incorrect voting are more severe than the rewards for correct decisions, emphasizing the importance of voting accuracy. As users’ Credibility Scores increase over time, successful voters will be less reliant on lower-scoring participants, thereby fostering an environment where high-quality contributions are valued. Furthermore, tokenization plays a critical role in enhancing the decentralization of the system. Users can participate without uploading videos, by receiving tokens through an airdrop mechanism once they surpass a predefined credibility threshold. This process effectively decentralizes decision-making and incentivizes participation from a broader user base. The integration of tokenization would allow users to interact with the smart contract in a more seamless manner, replacing the use of test tokens with the system’s own tokens. Voters with high credibility scores would be rewarded with tokens. The distribution model is designed to reflect the gradual increase in token value over time, similar to the evolution of Bitcoin's reward system, where early participants earn higher rewards, but as the system matures, the token value appreciates, and rewards decrease.
Keywords: deepfake detection, public blockchain, electronic voting, collective intelligence, Ethereum
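The asymmetric reward/penalty rule described above can be sketched as follows; the specific weightings and the airdrop threshold are illustrative design parameters, not values from the paper.

```python
class Voter:
    """Sketch of the asymmetric credibility update: correct votes earn a small,
    progressive gain; incorrect votes incur a larger penalty and reset the streak."""
    def __init__(self, score=1.0):
        self.score = score
        self.correct_streak = 0

    def record_vote(self, agreed_with_consensus):
        if agreed_with_consensus:
            self.correct_streak += 1
            self.score += 0.1 * self.correct_streak   # progressive reward
        else:
            self.correct_streak = 0
            self.score = max(0.0, self.score - 0.3)   # penalty harsher than reward

AIRDROP_THRESHOLD = 2.0  # hypothetical credibility needed to receive tokens

def eligible_for_airdrop(voter):
    return voter.score >= AIRDROP_THRESHOLD
```

In the full system this update would run inside the smart contract after each consensus round, with token rewards distributed to voters above the threshold.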
Procedia PDF Downloads 7
3566 Model Development for Real-Time Human Sitting Posture Detection Using a Camera
Authors: Jheanel E. Estrada, Larry A. Vea
Abstract:
This study developed a model to detect proper/improper sitting posture using a built-in web camera, which detects the locations of, and distances between, upper-body points (chin, manubrium, and acromion process). It also established relationships between human body frames and proper sitting posture. The models were developed by training several well-known classifiers, namely KNN, SVM, MLP, and Decision Tree, using data collected from 60 students of different body frames. The Decision Tree classifier demonstrated the most promising model performance, with an accuracy of 95.35% and a kappa of 0.907 for head and shoulder posture. Results also showed that there were relationships between body frame and posture through the Body Mass Index.
Keywords: posture, spinal points, gyroscope, image processing, ergonomics
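The kappa statistic reported above can be computed from predicted versus true posture labels as sketched below; the labels are toy data, not the study’s 60-student dataset.

```python
from collections import Counter

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    t_counts, p_counts = Counter(y_true), Counter(y_pred)
    expected = sum(t_counts[c] * p_counts[c] for c in t_counts) / n ** 2
    return (observed - expected) / (1 - expected)

true_labels = ["proper"] * 6 + ["improper"] * 4
pred_labels = ["proper"] * 5 + ["improper"] * 5   # one misclassification
```

Unlike raw accuracy, kappa discounts agreement that would occur by chance, which is why the paper reports both (95.35% accuracy, kappa 0.907).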
Procedia PDF Downloads 329
3565 Virtual Screening and in Silico Toxicity Property Prediction of Compounds against Mycobacterium tuberculosis Lipoate Protein Ligase B (LipB)
Authors: Junie B. Billones, Maria Constancia O. Carrillo, Voltaire G. Organo, Stephani Joy Y. Macalino, Inno A. Emnacen, Jamie Bernadette A. Sy
Abstract:
The drug discovery and development process is generally known to be a very lengthy and labor-intensive process. Therefore, in order to be able to deliver prompt and effective responses to cure certain diseases, there is an urgent need to reduce the time and resources needed to design, develop, and optimize potential drugs. Computer-aided drug design (CADD) is able to alleviate this issue by applying computational power in order to streamline the whole drug discovery process, starting from target identification to lead optimization. This drug design approach can be predominantly applied to diseases that cause major public health concerns, such as tuberculosis. Hitherto, there has been no concrete cure for this disease, especially with the continuing emergence of drug resistant strains. In this study, CADD is employed for tuberculosis by first identifying a key enzyme in the mycobacterium’s metabolic pathway that would make a good drug target. One such potential target is the lipoate protein ligase B enzyme (LipB), which is a key enzyme in the M. tuberculosis metabolic pathway involved in the biosynthesis of the lipoic acid cofactor. Its expression is considerably up-regulated in patients with multi-drug resistant tuberculosis (MDR-TB) and it has no known back-up mechanism that can take over its function when inhibited, making it an extremely attractive target. Using cutting-edge computational methods, compounds from AnalytiCon Discovery Natural Derivatives database were screened and docked against the LipB enzyme in order to rank them based on their binding affinities. Compounds which have better binding affinities than LipB’s known inhibitor, decanoic acid, were subjected to in silico toxicity evaluation using the ADMET and TOPKAT protocols. Out of the 31,692 compounds in the database, 112 of these showed better binding energies than decanoic acid. Furthermore, 12 out of the 112 compounds showed highly promising ADMET and TOPKAT properties. 
Future studies involving in vitro or in vivo bioassays may be done to further confirm the therapeutic efficacy of these 12 compounds, which may eventually lead to a novel class of anti-tuberculosis drugs.
Keywords: pharmacophore, molecular docking, lipoate protein ligase B (LipB), ADMET, TOPKAT
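The filter-and-rank step of the screening workflow can be sketched as follows; the compound names, docking scores, and ADMET flags below are entirely hypothetical, with decanoic acid as the reference inhibitor as in the study.

```python
# Hypothetical docking scores (kcal/mol; more negative = stronger binding).
scores = {
    "compound_A": -9.1,
    "compound_B": -6.2,
    "compound_C": -8.4,
    "decanoic_acid": -7.0,   # known LipB inhibitor used as the reference
}
passes_admet = {"compound_A": True, "compound_B": True, "compound_C": False}

reference = scores["decanoic_acid"]
# Keep compounds binding more strongly than the reference that also pass
# the ADMET/toxicity filter, ranked best (most negative) first.
hits = sorted(
    (name for name, s in scores.items()
     if name != "decanoic_acid" and s < reference and passes_admet[name]),
    key=scores.get,
)
```

At scale, the same two-stage filter (binding affinity versus the reference, then ADMET/TOPKAT screening) reduced the 31,692-compound database to 12 candidates.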
Procedia PDF Downloads 425
3564 Optimized Algorithm for Particle Swarm Optimization
Authors: Fuzhang Zhao
Abstract:
Particle swarm optimization (PSO) is becoming one of the most important swarm intelligence paradigms for solving global optimization problems. Although some progress has been made to improve PSO algorithms over the last two decades, additional work is still needed to balance parameters to achieve better numerical properties of accuracy, efficiency, and stability. In the optimal PSO algorithm, the optimal weightings of (√5 − 1)/2 and (3 − √5)/2 are used for the cognitive factor and the social factor, respectively. By the same token, the same optimal weightings have been applied for intensification searches and diversification searches, respectively. Perturbation and constriction effects are optimally balanced. Simulations of the de Jong, Rosenbrock, and Griewank functions show that the optimal PSO algorithm indeed achieves better numerical properties and outperforms the canonical PSO algorithm.
Keywords: diversification search, intensification search, optimal weighting, particle swarm optimization
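The golden-ratio weightings described above can be dropped into a standard PSO update, as in the sketch below. The inertia weight, bounds, and iteration budget are illustrative choices (not the paper’s full algorithm); the test function is de Jong’s sphere.

```python
import numpy as np

rng = np.random.default_rng(42)
dim, n_particles, iters = 2, 20, 200
c1 = (np.sqrt(5) - 1) / 2   # cognitive (personal-best) weighting, ~0.618
c2 = (3 - np.sqrt(5)) / 2   # social (global-best) weighting, ~0.382
w = 0.7                     # inertia weight, an illustrative choice

sphere = lambda X: np.sum(X ** 2, axis=1)   # de Jong's first function

X = rng.uniform(-5, 5, (n_particles, dim))
V = np.zeros_like(X)
pbest, pbest_val = X.copy(), sphere(X)
g = pbest[np.argmin(pbest_val)]             # global best position

initial_best = pbest_val.min()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    # Canonical velocity update with the golden-ratio cognitive/social weights.
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
    X = X + V
    vals = sphere(X)
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    g = pbest[np.argmin(pbest_val)]

final_best = pbest_val.min()
```

Note that c1 + c2 = 1 with these weightings, which is one way of reading the "optimally balanced" perturbation claim; swapping in the Rosenbrock or Griewank functions reproduces the abstract’s other benchmarks.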
Procedia PDF Downloads 583
3563 Bug Localization on Single-Line Bugs of Apache Commons Math Library
Authors: Cherry Oo, Hnin Min Oo
Abstract:
Software bug localization is one of the most costly tasks in program repair. Therefore, there is high demand for automated bug localization techniques that can guide programmers to the locations of bugs with little human intervention. Spectrum-based bug localization aims to help software developers discover bugs rapidly by investigating abstractions of program traces to produce a ranking list of the most probably buggy modules. Using the Apache Commons Math library project, we study the diagnostic accuracy of our spectrum-based bug localization metric. Our results show that the higher performance of a specific similarity coefficient, used to inspect the program spectra, is most effective in localizing single-line bugs.
Keywords: software testing, bug localization, program spectra, bug
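Spectrum-based ranking can be sketched with the Ochiai coefficient, one widely used similarity coefficient (the abstract does not name the paper’s specific coefficient); the spectra below are a toy example with a planted single-line fault.

```python
import math

# Toy program spectra: the statements each test executes, plus the verdict.
# Statement "s3" is the planted single-line fault.
coverage = {
    "t1": ({"s1", "s2", "s3"}, "fail"),
    "t2": ({"s1", "s2"}, "pass"),
    "t3": ({"s1", "s3"}, "fail"),
    "t4": ({"s2"}, "pass"),
}

def ochiai(coverage):
    """Ochiai suspiciousness per statement: ef / sqrt(total_failed * (ef + ep)),
    where ef/ep count failing/passing tests that execute the statement."""
    total_failed = sum(1 for _, verdict in coverage.values() if verdict == "fail")
    statements = set().union(*(covered for covered, _ in coverage.values()))
    scores = {}
    for s in statements:
        ef = sum(1 for cov, v in coverage.values() if s in cov and v == "fail")
        ep = sum(1 for cov, v in coverage.values() if s in cov and v == "pass")
        denom = math.sqrt(total_failed * (ef + ep))
        scores[s] = ef / denom if denom else 0.0
    return scores

scores = ochiai(coverage)
ranking = sorted(scores, key=scores.get, reverse=True)  # most suspicious first
```

The faulty statement, executed by every failing test and no passing test, scores 1.0 and tops the ranking list that a developer would inspect first.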
Procedia PDF Downloads 143