Search results for: Berlekamp-Massey algorithm
2013 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy economies such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law, and for the purpose of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural network methods and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, the key focus of this paper is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
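To make the evolution analogy concrete, the following is a minimal, illustrative sketch of the genetic breeding (evolutionary) loop the abstract refers to; the bit-string genome, all parameters, and the one-max fitness function are hypothetical choices, not taken from the paper.

import random

def evolve(fitness, n_genes=8, pop_size=30, generations=50,
           mutation_rate=0.1):
    # Random initial population of bit-string "genomes".
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Crossover and mutation produce the next generation.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical fitness: maximize the number of 1-bits ("one-max").
print(evolve(fitness=sum))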
Procedia PDF Downloads 110
2012 Evaluating the Impact of Expansion on Urban Thermal Surroundings: A Case Study of Lahore Metropolitan City, Pakistan
Authors: Usman Ahmed Khan
Abstract:
Urbanization directly affects the existing infrastructure, landscape modification, environmental contamination, and traffic pollution, especially if there is a lack of urban planning. Recently, rapid urban sprawl has resulted in less developed green areas and has had devastating environmental consequences. This study aimed to examine past urban expansion rates and to measure LST from satellite data. The land use land cover (LULC) maps of the years 1996, 2010, 2013, and 2017 were generated using Landsat satellite images. Four main classes, i.e., water, urban, bare land, and vegetation, were identified using unsupervised classification with the iterative self-organizing data analysis (ISODATA) technique. LST can be derived from satellite thermal data through several procedures: atmospheric and radiometric calibration, surface emissivity correction, and classification of spatial variability in land cover. Different methods and formulas were used in the algorithm to retrieve the land surface temperature and study the thermal environment of the ground surface. To verify the algorithm, the land surface temperature and the near-air temperature were compared. The results showed that, from 1996 to 2017, urban areas increased by about 48%. A few areas of the city also showed a reduction in LST from 1996 to 2017; these were areas that had just begun their transitional phase from rural to urban LULC. The mean temperature of the city in the month of October increased by about 1 ºC per year on average. Green and vegetated areas decreased in extent, while the number of pixels in the urban class increased.
Keywords: LST, LULC, ISODATA, urbanization
Procedia PDF Downloads 101
2011 Remote Sensing and GIS-Based Environmental Monitoring by Extracting Land Surface Temperature of Abbottabad, Pakistan
Authors: Malik Abid Hussain Khokhar, Muhammad Adnan Tahir, Hisham Bin Hafeez Awan
Abstract:
Continuous environmental deterioration and global climatic change due to increasing land surface temperature (LST) have become vital phenomena nowadays. LST is rising because of increasing greenhouse gases in the atmosphere, which results in the melting of ice caps, ice sheets, and glaciers. It not only has adverse effects on the vegetation and water bodies of the region but also has severe impacts on monsoon areas in the form of capricious rainfall, monsoon failure, and extensive precipitation. The environment can be monitored with the help of various geographic information system (GIS) based algorithms, i.e., SC (single channel), DA (dual angle), Mao, Sobrino, and SW (split window). Estimation of LST is very much possible from digital image processing of satellite imagery. This paper encompasses the extraction of the LST of Abbottabad using the SW technique of GIS and remote sensing over the last ten years by means of Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat 8, via their Thermal Infrared Sensor (TIRS) and Operational Land Imager (OLI, on Landsat 8 only), having 100 m TIR resolution and 30 m spectral resolution. Landsat 8 provides two TIR bands (10 and 11), while Landsat 7 ETM+ provides one (band 6); their emissivity and spectral radiance will be used as input statistics in the SW algorithm for LST extraction. Emissivity will be derived from Normalized Difference Vegetation Index (NDVI) threshold methods using OLI bands 2-5 with the help of eCognition software, and spectral radiance will be extracted from the TIR bands (bands 10-11 of Landsat 8 and band 6 of Landsat 7 ETM+). The accuracy of the results will be evaluated against weather data as well. The ensuing research will have a significant role for all tiers of governing bodies related to climate change departments.
Keywords: environment, Landsat 8, SW algorithm, TIR
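For reference, a split-window retrieval of the kind named above typically has the quadratic form sketched below. This is a minimal illustration; the coefficient values shown are one published-style calibration recalled for illustration only and should be verified against the literature before any real use.

def split_window_lst(t10, t11, emis_mean, emis_diff, water_vapour):
    """Generic split-window form: LST from the brightness temperatures (K)
    of two adjacent thermal bands, the mean emissivity, the emissivity
    difference, and the total column water vapour (g/cm^2). The
    coefficients c0..c6 are illustrative placeholders."""
    c0, c1, c2 = -0.268, 1.378, 0.183
    c3, c4, c5, c6 = 54.30, -2.238, -129.20, 16.40
    dt = t10 - t11
    return (t10 + c1 * dt + c2 * dt ** 2 + c0
            + (c3 + c4 * water_vapour) * (1.0 - emis_mean)
            + (c5 + c6 * water_vapour) * emis_diff)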
Procedia PDF Downloads 355
2010 Adverse Reactions from Contrast Media in Patients Who Underwent Computed Tomography at the Department of Radiology, Srinagarind Hospital
Authors: Pranee Suecharoen, Jaturat Kanpittaya
Abstract:
Background: The incidence of adverse reactions to iodinated contrast media has risen. The dearth of reports on reactions to the administration of iso- and low-osmolar contrast media should be addressed. We, therefore, studied the profile of adverse reactions to iodinated contrast media; viz., (a) the body systems affected, (b) causality, (c) severity, and (d) preventability. Objective: To study adverse reactions (causes and severity) to iodinated contrast media at Srinagarind Hospital. Method: Between March and July 2015, 1,101 patients from the Department of Radiology were observed and interviewed for the occurrence of adverse reactions. The reactions were classified per Naranjo's algorithm and through use of an adverse reactions questionnaire. Results: A total of 105 cases (9.5%) reported adverse reactions (57% male; 43% female); 2% of the reactions involved iso-osmolar media vs. 98% low-osmolar media. Diagnoses included hepatoma and cholangiocarcinoma (24.8%), colorectal cancer (9.5%), breast cancer (5.7%), cervical cancer (3.8%), lung cancer (2.9%), bone cancer (1.9%), and others (51.5%). Underlying diseases included hypertension and diabetes mellitus type 2. Mild, moderate, and severe adverse reactions accounted for 92%, 5%, and 3%, respectively. The respective groups of escalating symptoms included (a) mild urticaria, itching, rash, nausea, vomiting, dizziness, and headache; (b) moderate hypertension, hypotension, dyspnea, tachycardia, and bronchospasm; and (c) severe laryngeal edema, profound hypotension, and convulsions. All reactions could be anticipated per Naranjo's algorithm. Conclusion: Mild to moderate adverse reactions to low-osmolar contrast media were most common, and these occurred immediately after administration. For patient safety and better outcomes, improving the identification of patients likely to have an adverse reaction is essential.
Keywords: adverse reactions, contrast media, computed tomography, iodinated contrast agents
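Naranjo's algorithm sums the answers to ten weighted questions and maps the total to a causality category; the standard cut-offs can be sketched as follows (only the category mapping is shown, not the questionnaire itself).

def naranjo_category(total_score):
    # Standard Naranjo cut-offs for adverse drug reaction causality.
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"

# Example: the summed score of the 10 weighted questions is classified.
print(naranjo_category(6))  # -> "probable"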
Procedia PDF Downloads 362
2009 Housing Prices and Travel Costs: Insights from Origin-Destination Demand Estimation in Taiwan’s Science Parks
Authors: Kai-Wei Ji, Dung-Ying Lin
Abstract:
This study investigates the impact of transportation on housing prices in regions surrounding Taiwan's science parks. As these parks evolve into crucial economic and population growth centers, they attract an increasing number of residents and workers, significantly influencing local housing markets. This demographic shift raises important questions about the role of transportation in shaping real estate values. Our research examines four major science parks in Taiwan, providing a comparative analysis of how transportation conditions and population dynamics interact to affect housing price premiums. We employ an origin-destination (OD) matrix derived from pervasive traffic data to model travel patterns and their effects on real estate values. The methodology utilizes a bi-level framework: a genetic algorithm optimizes OD demand estimation at the upper level, while a user equilibrium (UE) model simulates traffic flow at the lower level. This approach enables a nuanced exploration of how population growth impacts transportation conditions and housing price premiums. By analyzing the interplay between travel costs based on OD demand estimation and housing prices, we offer valuable insights for urban planners and policymakers. These findings are crucial for informed decision-making in rapidly developing areas, where understanding the relationship between mobility and real estate values is essential for sustainable urban development.
Keywords: demand estimation, genetic algorithm, housing price, transportation
Procedia PDF Downloads 24
2008 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video
Authors: Nidhal Azawi
Abstract:
The colonoscopy procedure is widely used around the world to detect abnormalities. Early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot inspect the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos to provide better information for physicians by removing some artifacts. This work complements a series of three previously published papers. In this paper, optic flow is used for motion compensation, and then consecutive images are aligned/registered so that their information can be integrated into a new image that reveals more than the original one. Colon images were classified into informative and noninformative images by using a deep neural network. Then, two different strategies were used to treat informative and noninformative images. Informative images were treated by using Lucas-Kanade (LK) with an adaptive temporal mean/median filter, whereas noninformative images were treated by using Lucas-Kanade with a derivative of Gaussian (LKDOG) with an adaptive temporal median filter. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The proposed algorithm reduced the alignment error by about a factor of 0.3 with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting the noninformative images (those with few or no details because of blur, defocus, or specular highlights dominating a significant amount of the image) into informative images.
Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter
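A minimal sketch of the register-then-integrate idea described above, using OpenCV's pyramidal Lucas-Kanade tracker followed by a temporal median; the feature-tracker parameters and the similarity-transform choice are assumptions for illustration, not the paper's exact pipeline.

import cv2
import numpy as np

def align_and_denoise(frames):
    """Register consecutive grayscale frames to the first one with
    pyramidal Lucas-Kanade tracking, then take a temporal median."""
    ref = frames[0]
    pts = cv2.goodFeaturesToTrack(ref, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    aligned = [ref]
    for frame in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref, frame, pts, None)
        good = status.ravel() == 1
        # Fit a similarity transform from the tracked correspondences.
        m, _ = cv2.estimateAffinePartial2D(nxt[good], pts[good])
        h, w = ref.shape
        aligned.append(cv2.warpAffine(frame, m, (w, h)))
    # The temporal median integrates information across registered frames.
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)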
Procedia PDF Downloads 116
2007 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
The recent evolution of innovation processes and of the intrinsic tendencies of the product development process leads to new considerations on the design flow. The instability and complexity that describe contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously puts us in front of new paradigms regarding design as a social activity. From the point of view of application, the totality of these technologies describes a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of some provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. The contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, and the focus then turns to specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. The trajectory that is described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 120
2006 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal road and roof boundaries is one such use, offering applications in disaster risk reduction, mitigation, and development. The study uses low-density LiDAR data and high-resolution aerial imagery in an object-oriented approach, subjecting the data analysis to a machine learning algorithm to minimize the constraints of feature extraction. Since classification requires separating one class from another in distinct regions of a multi-dimensional feature space, non-trivial distribution-fitting computations were implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested in different random locations of Misamis Oriental province in the Philippines, demonstrating robust performance, with overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital resource for decision makers, urban planners, and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
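A minimal sketch of the kind of object-based SVM classification described; the synthetic features standing in for per-object LiDAR/imagery attributes and the RBF-kernel settings are assumptions for illustration.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for per-object hybrid features (e.g., mean LiDAR
# height, intensity, NDVI, texture); classes such as roof/road/other.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))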
Procedia PDF Downloads 363
2005 Refining Scheme Using Amphibious Epistemologies
Authors: David Blaine, George Raschbaum
Abstract:
The evaluation of DHCP has synthesized SCSI disks, and current trends suggest that the exploration of e-business that would allow for further study into robots will soon emerge. Given the current status of embedded algorithms, hackers worldwide obviously desire the exploration of replication, which embodies the confusing principles of programming languages. In our research we concentrate our efforts on arguing that erasure coding can be made "fuzzy", encrypted, and game-theoretic.
Keywords: SCSI disks, robot, algorithm, hacking, programming language
Procedia PDF Downloads 431
2004 Diabetes Mellitus and Blood Glucose Variability Increase the 30-Day Readmission Rate after Kidney Transplantation
Authors: Harini Chakkera
Abstract:
Background: Inpatient hyperglycemia is an established independent risk factor for hospital readmission in several patient cohorts. This has not been studied after kidney transplantation. Nearly one-third of patients who have undergone a kidney transplant reportedly experience 30-day readmission. Methods: Data on first-time solitary kidney transplantations were retrieved for the period from September 2015 to December 2018. The information was linked to the electronic health record to determine diagnoses of diabetes mellitus and to extract glucometric and insulin therapy data. Univariate logistic regression analysis and the XGBoost algorithm were used to predict 30-day readmission. We report the average performance of the models on the testing set over five bootstrapped partitions of the data to ensure statistical significance. Results: The cohort included 1036 patients who received kidney transplantation, of whom 224 (22%) experienced 30-day readmission. The machine learning algorithm was able to predict 30-day readmission with an average AUC of 77.3% (95% CI 75.3-79.3%). We observed statistically significant differences in the presence of pretransplant diabetes, inpatient hyperglycemia, inpatient hypoglycemia, and minimum and maximum glucose values among those with higher 30-day readmission rates. The XGBoost model identified the index admission length of stay, the presence of hyper- and hypoglycemia, and the recipient and donor BMI values as the most predictive risk factors of 30-day readmission. Additionally, significant variations in the therapeutic management of blood glucose by providers were observed. Conclusions: Suboptimal glucose metrics during hospitalization after kidney transplantation are associated with an increased risk of 30-day hospital readmission. Optimizing hospital blood glucose management, a modifiable factor, after kidney transplantation may reduce the risk of 30-day readmission.
Keywords: kidney, transplant, diabetes, insulin
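A minimal sketch of the kind of XGBoost readmission model described, evaluated by AUC; the synthetic features stand in for the paper's predictors (length of stay, hyper-/hypoglycemia flags, min/max glucose, recipient/donor BMI), and all hyperparameters are illustrative.

from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# 1036 patients, ~22% positives, 8 hypothetical stand-in features.
X, y = make_classification(n_samples=1036, n_features=8, weights=[0.78],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                      eval_metric="auc")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))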
Procedia PDF Downloads 93
2003 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model
Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li
Abstract:
Compared with terrestrial networks, the traffic of a spatial information network has both self-similar (long-range correlation) and short-range correlation characteristics. By studying its traffic prediction methods, the resource utilization of a spatial information network can be improved, and such methods can provide an important basis for the traffic planning of a spatial information network. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximation component with long-range correlation and detail components with short-range correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the spatial network traffic. Firstly, the original traffic data are decomposed into approximation and detail components by using a wavelet decomposition algorithm. According to the tailing and truncation behavior of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximation component can be modeled by an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method not only considers the self-similarity of a spatial information network but also takes into account the short-range correlation caused by bursty network information, which is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with a typical time series model, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root-mean-square error, which makes it more suitable for a spatial information network.
Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model
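The decompose-model-recombine flow can be sketched as follows; this is a minimal illustration in which the wavelet family, level, and fixed ARIMA orders are assumptions (the paper selects the model type of each component from its ACF/PACF behavior).

import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

def wavelet_components(x, wavelet="db4", level=3):
    """Split x into full-length components (one approximation plus one
    per detail level) that sum back to the original signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps

def hybrid_forecast(x, steps=10):
    # One model per component; the forecasts are summed because the
    # components sum to the original series. The (2, 0, 1) orders are
    # placeholders: in practice they come from each component's ACF/PACF.
    total = np.zeros(steps)
    for comp in wavelet_components(np.asarray(x, dtype=float)):
        fit = ARIMA(comp, order=(2, 0, 1)).fit()
        total += fit.forecast(steps)
    return total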
Procedia PDF Downloads 150
2002 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than to the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IoV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet traffic, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influences of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov decision process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep Q-network
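A deep Q-network update of the kind such an algorithm builds on can be sketched as below; the state/action sizes, network shape, and reward design are hypothetical stand-ins, since the paper's exact JCOTM formulation is not reproduced here.

import copy
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 16, 5, 0.95  # illustrative sizes

# The Q-network maps an observation (task queue, resources, bandwidth)
# to one value per discrete offloading decision.
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, N_ACTIONS))
target_net = copy.deepcopy(q_net)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_step(s, a, r, s2, done):
    """One gradient step on a replayed minibatch of transitions."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(dim=1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()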
Procedia PDF Downloads 120
2001 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This calls for software capable of quickly processing and reliably visualizing diffusion data, equipped with tools for analyzing them in terms of different tasks. We are developing the «MRDiffusionImaging» software in standard C++. The domain logic has been moved into separate class libraries and can be used on various platforms. The user interface is built on Windows WPF (Windows Presentation Foundation), a technology for managing Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the use of a declarative markup language, XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI), allowing images to be loaded and viewed sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues unrelated to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary brain volume, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF). A tool for calculating the mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The latter tests candidate curves in each voxel, assigning to each a score computed from the diffusion data, and then selects the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the irradiation volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and evaluation of its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
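The two scalar maps mentioned (mean diffusivity and fractional anisotropy) follow directly from the eigenvalues of the 3x3 diffusion tensor; a minimal sketch, with example eigenvalues chosen for illustration:

import numpy as np

def md_and_fa(tensor):
    """Mean diffusivity and fractional anisotropy of a symmetric 3x3
    diffusion tensor (e.g., in mm^2/s)."""
    ev = np.linalg.eigvalsh(tensor)  # eigenvalues, ascending
    md = ev.mean()
    fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))
    return md, fa

# Isotropic diffusion gives FA = 0; strongly linear diffusion, FA near 1.
print(md_and_fa(np.diag([1.7e-3, 0.3e-3, 0.3e-3])))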
Procedia PDF Downloads 87
2000 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging
Authors: Jiangbo Li, Wenqian Huang
Abstract:
Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process. The automatic and nondestructive detection of infected fruits is still a challenge for the citrus industry. At present, the visual inspection of rotten citrus fruits is commonly performed by workers, either through ultraviolet-induced fluorescence technology or by manual sorting in citrus packinghouses, to remove fruit with fungal infection. However, the former entails a number of problems, because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. Oranges were used as the research object. This study focuses on this problem and proposes an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. In this work, three normalization approaches were applied prior to analysis to reduce the effect of sample curvature on spectral profiles, and mean normalization was found to be the most effective pretreatment for decreasing spectral variability due to curvature. Then, principal component analysis (PCA) was applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and observe the ability of Vis-NIR hyperspectra to discriminate between the two classes. It was observed that normal and decayed spectra were separable along the resulting first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as the characteristic wavelengths by analyzing the loadings of PC1. A multispectral combination image was generated from the five selected characteristic wavelength images. Based on this combination image, the intensity-slicing pseudocolor image processing method was used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for the detection of decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For the 238 independent test samples, comprising fruits infected by Penicillium digitatum and normal fruits, the success rates were 100% and 97.5%, respectively; the proposed algorithm also identified oranges infected by Penicillium italicum with 100% accuracy, indicating that the proposed multispectral algorithm is effective and has the potential to be applied in the citrus industry.
Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging
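The normalization, PCA, and loading-based band selection steps can be sketched as follows; the random spectra and the pick of the five largest-magnitude PC1 loadings are illustrative stand-ins for the paper's data and peak analysis.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: rows are mean spectra of tissue regions, columns are
# bands across 400-1000 nm.
rng = np.random.default_rng(0)
spectra = rng.random((100, 215))
wavelengths = np.linspace(400, 1000, 215)

spectra = spectra / spectra.mean(axis=1, keepdims=True)  # mean normalization
pca = PCA(n_components=3).fit(spectra)
loadings = pca.components_[0]                # PC1 loadings per band
top5 = np.argsort(np.abs(loadings))[-5:]     # most influential bands
print(sorted(wavelengths[top5].round(1)))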
Procedia PDF Downloads 304
1999 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population
Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath
Abstract:
Compared with other cancer types, gastric cancer is predominantly caused by demographic and dietary factors. The aim of the study is to predict early gastric cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases were selected who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. The supervised machine learning algorithms Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer Perceptron, and Random Forest were used to analyze the dataset in Python Jupyter Notebook Version 3. The classification results were evaluated using the metric parameters minimum_false_positives, brier_score, accuracy, precision, recall, F1_score, and the Receiver Operating Characteristic (ROC) curve. The analysis showed Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer Perceptron - 72, 0.27, with respect to accuracy (in percent) and brier_score. The Naive Bayes algorithm outperforms the others, with a very low false positive rate, a low Brier score, and good accuracy. The Naive Bayes classification results in predicting EGC were very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public, so that the mortality of gastric cancer can be reduced or avoided through this knowledge mining work.
Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics
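A minimal sketch of the winning model and the two headline metrics, using synthetic stand-ins for the 11 diet/lifestyle features (the real dataset is not public here):

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# 240 subjects (roughly 160 controls, 80 cases), 11 risk-factor features.
X, y = make_classification(n_samples=240, n_features=11, n_informative=6,
                           weights=[2 / 3], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
nb = GaussianNB().fit(X_tr, y_tr)
prob = nb.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, nb.predict(X_te)))
print("Brier score:", brier_score_loss(y_te, prob))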
Procedia PDF Downloads 164
1998 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
In this paper, we mathematically present the basis of a new algorithmic trend that treats a historical reason for continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix-operator filter bank in which the matrix-operator analysis filter bank H and the matrix-operator sampling filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix-operator synthesis filter bank Z that minimizes, at the same time, all the worst-case measures of the matrix-operator error signals E(ω) = F(ω) − Y(ω) between the matrix-operator input signals F(ω) and the matrix-operator output signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge in signal prediction. Secondly, the concept of category from the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of the recognition of humans. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization
Procedia PDF Downloads 146
1997 Optimizing Volume Fraction Variation Profile of Bidirectional Functionally Graded Circular Plate under Mechanical Loading to Minimize Its Stresses
Authors: Javad Jamali Khouei, Mohammadreza Khoshravan
Abstract:
Considering that the application of functionally graded materials is increasing in most industries, it seems necessary to present a methodology for designing the optimal profile of widely used structures, such as plates under mechanical loading. Therefore, the volume fraction variation profile of a bidirectional functionally graded circular plate is optimized so that the stress of the structure is minimized. For this purpose, the equilibrium equations of the bidirectional functionally graded circular plate are solved by applying a semi-analytical-numerical method under mechanical loading and support conditions. By solving the equilibrium equations, deflections and stresses are obtained in terms of the control variables of the volume fraction variation profile. As a result, the problem can be formulated as an optimization problem aiming at the minimization of the critical von Mises stress under constraints on deflections and stress, plus a physical constraint relating to the structure of the material. The resulting problem can then be solved with the help of a metaheuristic algorithm such as the genetic algorithm. Results of the optimization for the applied model under the given constraints, loadings, and boundary conditions show that the functionally graded plate should be graded only in the radial direction, and there is no need for volume fraction variation of the constituent particles in the thickness direction. To validate the results, the optimal values of the obtained design variables are evaluated graphically.
Keywords: two-directional functionally graded material, single objective optimization, semi analytical-numerical solution, genetic algorithm, graphical solution with contour
Procedia PDF Downloads 280
1996 Application of Particle Swarm Optimization to Thermal Sensor Placement for Smart Grid
Authors: Hung-Shuo Wu, Huan-Chieh Chiu, Xiang-Yao Zheng, Yu-Cheng Yang, Chien-Hao Wang, Jen-Cheng Wang, Chwan-Lu Tseng, Joe-Air Jiang
Abstract:
Dynamic Thermal Rating (DTR) provides crucial information by estimating the ampacity of transmission lines to improve power dispatching efficiency. To perform DTR, it is necessary to install on-line thermal sensors to monitor the conductor temperature and weather variables. A simple and intuitive strategy is to allocate a thermal sensor to every span of the transmission lines, but the cost of the sensors might be too high to bear. To deal with the cost issue, a thermal sensor placement problem must be solved. This research proposes and implements a hybrid algorithm that combines proper orthogonal decomposition (POD) with particle swarm optimization (PSO). The proposed hybrid algorithm solves a multi-objective optimization problem that seeks both the minimum number of sensors and the minimum error in conductor temperature, and the optimal sensor placement is determined simultaneously. The data of 345 kV transmission lines and hourly weather data from the Taiwan Power Company and the Central Weather Bureau (CWB), respectively, are used by the proposed method. The simulation results indicate that the number of sensors can be reduced using the optimal placement method proposed by the study while achieving an acceptable error in conductor temperature. This study provides power companies with a reliable reference for efficiently monitoring and managing their power grids.
Keywords: dynamic thermal rating, proper orthogonal decomposition, particle swarm optimization, sensor placement, smart grid
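A compact PSO loop of the kind used here can be sketched as follows; the sphere objective stands in for the real placement cost (a weighted combination of sensor count and POD reconstruction error of conductor temperature), and all hyperparameters are illustrative.

import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia plus cognitive and social pulls.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(objective, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Stand-in objective: minimize the sum of squares (the "sphere" function).
print(pso(lambda z: float(np.sum(z ** 2)), dim=5))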
Procedia PDF Downloads 434
1995 Airborne SAR Data Analysis for Impact of Doppler Centroid on Image Quality and Registration Accuracy
Authors: Chhabi Nigam, S. Ramakrishnan
Abstract:
This paper presents an analysis of airborne Synthetic Aperture Radar (SAR) data to study the impact of the Doppler centroid on image quality and geocoding accuracy from the perspective of Stripmap-mode data acquisition. Although in Stripmap mode the radar beam points at 90 degrees broadside (side-looking), a shift in the Doppler centroid is inevitable due to platform motion. Inaccurate estimation of the Doppler centroid leads to poor image quality and image mis-registration. The effect of the Doppler centroid is analyzed in this paper using multiple data sets collected from an airborne platform. Occurrences of ghost (ambiguous) targets and their power levels have been analyzed, as these impact the appropriate choice of PRF. The effect of aircraft attitudes (roll, pitch, and yaw) on the Doppler centroid is also analyzed with the collected data sets. The various stages of the Range Doppler Algorithm (RDA) used for image formation in Stripmap mode (range compression, Doppler centroid estimation, azimuth compression, and range cell migration correction) are analyzed to find the performance limits and the dependence of the final image on the imaging geometry. The ability of Doppler centroid estimation to enhance the imaging accuracy for registration is also illustrated in this paper. The paper also addresses the processing of low-squint SAR data and the challenges and performance limits imposed by the imaging geometry and the platform dynamics on the final image quality metrics. Finally, the effect on various terrain types, including land, water, and bright scatterers, is also presented.
Keywords: ambiguous target, Doppler centroid, image registration, airborne SAR
Procedia PDF Downloads 218
1994 Optimal Power Distribution and Power Trading Control among Loads in a Smart Grid Operated Industry
Authors: Vivek Upadhayay, Siddharth Deshmukh
Abstract:
In recent years, the utilization of renewable energy sources has increased greatly because of growing global warming concerns. Organizations these days are generally operated by a microgrid or smart grid on a small scale. Power optimization and optimal load tripping are possible in a smart-grid-operated industry. In any plant or industry, loads can be divided into different categories based on their importance to the plant and their power requirement patterns on working days. Dividing loads into such categories and providing a different power management algorithm for each category of load can reduce the power cost and come in handy for balancing the stability and reliability of power. An objective function is defined over the variable that we are supposed to minimize. Constraint equations are formed by taking the difference between the power usage pattern of the present day and that of the same day of the previous week. With the objectives of minimal load tripping and optimal power distribution, the proposed formulation is a multi-objective optimization problem. Through normalization of each objective function, the multi-objective optimization is transformed into a single-objective optimization. As a result, we obtain the optimized values of the power required by each load for the present day by using the past values of the required power for the same day of the previous week; this is essentially demand-response scheduling of power. These minimized values are then distributed to each load through an algorithm that optimizes the power distribution in greater depth. If the stored power exceeds the power requirement, profit can be made by selling the excess power to the main grid.
Keywords: power flow optimization, power trading enhancement, smart grid, multi-objective optimization
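The normalization step described here is commonly written as a weighted sum of scaled objectives; a sketch of that standard form follows, where the weights and the identification of the objectives with load tripping and power distribution are illustrative assumptions:

\[
\min_{x}\; J(x) \;=\; \sum_{i=1}^{k} w_i \,
\frac{f_i(x) - f_i^{\min}}{f_i^{\max} - f_i^{\min}},
\qquad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1,
\]

where \(f_1\) penalizes load tripping, \(f_2\) measures deviation from the optimal power distribution, and \(f_i^{\min}\), \(f_i^{\max}\) are the per-objective extremes used for normalization.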
Procedia PDF Downloads 527
1993 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption
Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu
Abstract:
By using smart digital tools such as generative design (GD) and digital fabrication (DF), highly topical problems concerning resource optimization (materials, energy, time) can be solved, and free-form applications or products can be created. In the new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, by using the parametric design methodology and by exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying the parameter values, and the relationships between the forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms, or swarm intelligence, each of which has limitations that make it applicable only in certain cases. In the paper, the stages of the design process and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic onto different input geometry. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected. The endless generative capacity of the codes and algorithms used in digital design offers varied conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned in order to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process), and unique geometric models of high performance.
Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture
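Of the generative procedures named above, a Lindenmayer system is the easiest to sketch; the rewriting rules below are hypothetical placeholders, since real form-finding grammars rewrite geometric operations rather than characters.

def l_system(axiom, rules, iterations):
    """Iteratively rewrite every symbol by its rule (identity if absent)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical two-symbol grammar; in a form-finding pipeline each
# symbol would map to a modeling operation (extrude, branch, rotate).
print(l_system("A", {"A": "AB", "B": "A"}, 5))  # -> ABAABABAABAAB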
Procedia PDF Downloads 379
1992 Optimizing CNC Production Line Efficiency Using NSGA-II: Adaptive Layout and Operational Sequence for Enhanced Manufacturing Flexibility
Authors: Yi-Ling Chen, Dung-Ying Lin
Abstract:
In the manufacturing process, computer numerical control (CNC) machining plays a crucial role. CNC enables precise machinery control through computer programs, achieving automation in the production process and significantly enhancing production efficiency. However, traditional CNC production lines often require manual intervention for loading and unloading operations, which limits the production line's operational efficiency and production capacity. Additionally, existing CNC automation systems frequently lack sufficient intelligence and fail to achieve optimal configuration efficiency, resulting in the need for substantial time to reconfigure production lines when producing different products, thereby impacting overall production efficiency. Using the NSGA-II algorithm, we generate production line layout configurations that consider field constraints and select robotic arm specifications from an arm list. This allows us to calculate loading and unloading times for each job order, perform demand allocation, and assign processing sequences. The NSGA-II algorithm is further employed to determine the optimal processing sequence, with the aim of minimizing demand completion time and maximizing average machine utilization. These objectives are used to evaluate the performance of each layout, ultimately determining the optimal layout configuration. By employing this method, we enhance the configuration efficiency of CNC production lines and establish an adaptive capability that allows the production line to respond promptly to changes in demand. This will minimize production losses caused by the need to reconfigure the layout, ensuring that the CNC production line can maintain optimal efficiency even when adjustments are required due to fluctuating demands.
Keywords: evolutionary algorithms, multi-objective optimization, Pareto optimality, layout optimization, operations sequence
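At the heart of NSGA-II is non-dominated sorting over the two objectives named above; a minimal sketch, where the layout tuples are hypothetical and pair completion time with negated utilization so that both entries are minimized:

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (completion time, -average machine utilization) per candidate layout.
layouts = [(120, -0.71), (135, -0.80), (150, -0.78), (110, -0.60)]
print(pareto_front(layouts))  # the non-dominated layouts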
Procedia PDF Downloads 25
1991 Reinforcement-Learning Based Handover Optimization for Cellular Unmanned Aerial Vehicles Connectivity
Authors: Mahmoud Almasri, Xavier Marjou, Fanny Parzysz
Abstract:
The demand for services provided by Unmanned Aerial Vehicles (UAVs) is increasing pervasively across several sectors, including public safety, economic, and delivery services. As the number of applications using UAVs grows rapidly, ever more powerful, quality-of-service-aware, and power-efficient computing units are necessary. Recently, cellular technology has drawn more attention as a means of connectivity that can ensure reliable and flexible communication services for UAVs. In cellular networks, flying at high speed and altitude is subject to several key challenges, such as frequent handovers (HOs), high interference levels, connectivity coverage holes, etc. Additional HOs may lead to "ping-pong" between the UAVs and the serving cells, resulting in a decrease in quality of service and an increase in energy consumption. In order to optimize the number of HOs, we develop in this paper a Q-learning-based algorithm. While existing works focus on adjusting the number of HOs in a static network topology, we take into account the impact of cell deployment for three different simulation scenarios (rural, semi-rural, and urban areas). We also consider the impact of the decision distance, at which the drone can make a switching decision, on the number of HOs. Our results show that a Q-learning-based algorithm allows a significant reduction in the average number of HOs compared to a baseline case where the drone always selects the cell with the highest received signal. Moreover, we also identify which hyperparameters have the largest impact on the number of HOs in the three tested environments, i.e., rural, semi-rural, and urban.
Keywords: drones connectivity, reinforcement learning, handovers optimization, decision distance
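The tabular Q-learning core of such an approach fits in a few lines; the states, actions, and reward shaping (penalizing each handover, rewarding signal quality) are hypothetical here.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative hyperparameters
q = defaultdict(float)                 # (state, action) -> value

def choose_cell(state, actions):
    """Epsilon-greedy choice among candidate cells (stay or switch)."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state, actions):
    # The reward could, e.g., penalize each handover and reward signal
    # quality, so fewer unnecessary switches accumulate more value.
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - q[(state, action)])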
Procedia PDF Downloads 110
1990 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a fully scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task through Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding the convolved outputs of the inception unit to its input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (consisting primarily of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation, with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
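A minimal sketch of the transfer-learning backbone for tile-level classification; the input size, frozen-backbone choice, and binary head are illustrative assumptions, not the paper's exact configuration.

import tensorflow as tf

# ImageNet-pretrained Inception-ResNetV2 backbone with a fresh binary
# head (tumor vs. normal tile).
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3),
    pooling="avg")
base.trainable = False  # freeze the backbone for the first stage
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])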
Procedia PDF Downloads 132
1989 Artificial Intelligence Based Online Monitoring System for Cardiac Patient
Authors: Syed Qasim Gilani, Muhammad Umair, Muhammad Noman, Syed Bilawal Shah, Aqib Abbasi, Muhammad Waheed
Abstract:
Cardiovascular diseases (CVDs) are the major cause of death in the world. The main reason for these deaths is the unavailability of first aid for heart failure. In many cases, patients die before reaching the hospital. In this paper, we present an innovative online health service for cardiac patients. The proposed online health system has two ends. Through a device developed by us, users can communicate with their doctor through a mobile application. This interface provides them with first aid. By using this service, they also have an easy interface with their doctors for obtaining medical advice. Within the proposed system, we developed a device called Cardiac Care, a portable device that a patient can use at home for monitoring their heart condition. When a patient checks his/her heart condition, the electrocardiogram (ECG), blood pressure (BP), and temperature are sent to the central database. The severity of the patient's condition is checked at the database using an artificial intelligence algorithm. If the patient is suffering from a minor problem, our algorithm suggests a prescription. But if the patient's condition is severe, the patient's record is sent to a doctor through the Android mobile application. After reviewing the patient's condition, the doctor suggests the next step. If the doctor identifies the patient's condition as critical, a message is sent to the central database to dispatch an ambulance, which then brings the patient to the hospital. We have implemented this model at the prototype level. This model will be life-saving for millions of people around the globe, since under the proposed model patients are in contact with their doctors at all times.
Keywords: cardiovascular disease, classification, electrocardiogram, blood pressure
Procedia PDF Downloads 185
1988 Improvement of Data Transfer over Simple Object Access Protocol (SOAP)
Authors: Khaled Ahmed Kadouh, Kamal Ali Albashiri
Abstract:
This paper presents an algorithm designed to improve the transfer of data over the Simple Object Access Protocol (SOAP). The aim of this work is to establish whether using SOAP for exchanging XML messages has any added advantages. The results showed that XML messages without SOAP take longer and consume more memory, especially with binary data.
Keywords: JAX-WS, SMTP, SOAP, web service, XML
Procedia PDF Downloads 496
1987 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm
Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi
Abstract:
To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some of the degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so this method determines damage from changes in the structural stiffness parameters. The changes in static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems. Here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (single damage and multiple damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that detection in multiple-damage scenarios is also efficient. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm
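The objective the GA minimizes can be sketched as a load-vector residual; assemble_K, the per-element stiffness reduction factors alpha, and the DOF bookkeeping below are hypothetical names introduced for illustration.

import numpy as np

def damage_objective(alpha, assemble_K, f, u_measured, free):
    """alpha[i] in [0, 1] scales element i's stiffness (1 = healthy).
    assemble_K(alpha) returns the global stiffness matrix of the
    candidate damaged model; f and u_measured are the applied static
    loads and the measured displacements."""
    K = assemble_K(alpha)[np.ix_(free, free)]
    # Residual between the loads the candidate model would need to
    # produce the measured displacements and the actually applied loads.
    return np.linalg.norm(K @ u_measured[free] - f[free])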
Procedia PDF Downloads 239
1986 Advanced Mouse Cursor Control and Speech Recognition Module
Authors: Prasad Kalagura, B. Veeresh kumar
Abstract:
We constructed an interface system that allows a paralyzed user to interact with a computer with almost full functional capability. A real-time tracking algorithm is implemented based on adaptive skin detection and motion analysis. Mouse clicking is activated by the user's eye blinking, detected through a sensor. The keyboard function is implemented by a voice recognition kit.
Keywords: embedded ARM7 processor, mouse pointer control, voice recognition
Procedia PDF Downloads 579
1985 Hardware-In-The-Loop Relative Motion Control: Theory, Simulation and Experimentation
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper presents a Guidance and Control (G&C) strategy to address the spacecraft maneuvering problem for future Rendezvous and Docking (RVD) missions. The proposed strategy allows safe and propellant-efficient trajectories for space servicing missions, including tasks such as approaching, inspecting, and capturing. This work provides the validation test results of the G&C laws using a Hardware-In-the-Loop (HIL) setup with two robotic mockups representing the chaser and the target spacecraft. In this paper, the challenges of relative motion control in space are first summarized, in particular the constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Second, the proposed algorithm is introduced by presenting the formulation of constrained Model Predictive Control (MPC), which optimizes fuel consumption and explicitly handles the physical and geometric constraints in the system, e.g., thruster or Line-Of-Sight (LOS) constraints. Additionally, the coupling between translational and rotational motion is addressed via a dual-quaternion-based kinematic description and explained accordingly. The resulting convex optimization problem allows real-time implementation, as supported by a detailed discussion of the computational time requirements and the obtained results with respect to the onboard computer and future trends in space processor capabilities. Finally, the performance of the algorithm is presented in the scope of a potential future mission and of the available equipment. The results also include a comparison of the proposed algorithm with a linear-quadratic regulator (LQR) based control law to highlight the clear advantages of the MPC formulation.
Keywords: autonomous vehicles, embedded optimization, real-time experiment, rendezvous and docking, space robotics
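A constrained-MPC step of the kind described can be sketched on a single-axis double-integrator relative-motion model; the dynamics, horizon, weights, and thrust limit below are illustrative assumptions, not the paper's dual-quaternion formulation.

import cvxpy as cp
import numpy as np

dt, N = 1.0, 20                       # sample time and horizon
A = np.array([[1.0, dt], [0.0, 1.0]])  # [position; velocity] dynamics
B = np.array([[0.5 * dt ** 2], [dt]])
x0 = np.array([100.0, 0.0])           # 100 m behind the target, at rest

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
# Quadratic state cost plus an L1 control term as a fuel proxy.
cost = cp.sum_squares(x[:, 1:]) + 10 * cp.sum(cp.abs(u))
cons = [x[:, 0] == x0]
for k in range(N):
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 0.5]  # thruster saturation
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("first commanded acceleration:", u.value[0, 0])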
Procedia PDF Downloads 125
1984 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches: the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN). We also consider five machine learning algorithms: Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), the K-Nearest Neighbors (KNN) algorithm, and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, which involved selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify this performance. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms and to discover the most effective of them with respect to the confusion matrix, accuracy, and precision. We found that the CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment based on the Python programming language.
Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN
Procedia PDF Downloads 79