Search results for: game-learning approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4965

405 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonia may be viral, bacterial, or COVID-19 induced. Early prediction and classification of such lung diseases help reduce the mortality rate. In this paper, we propose an automatic Computer-Aided Diagnosis (CAD) system based on a deep learning approach. The proposed CAD system takes raw computed tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. A robust 2D Convolutional Neural Network (CNN) model extracts features from each pre-processed CT image, yielding an effective 1D feature vector per input image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second stage of the proposed hybrid model covers training and classification using different classifiers. Simulation results on a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
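
A minimal sketch of such a two-stage pipeline (the layer sizes and the SVM second stage are illustrative assumptions, not the authors' exact HDLA configuration):

```python
import numpy as np
from tensorflow.keras import layers, models
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# 2D CNN backbone that maps a CT slice to a 1D feature vector
# (architecture is a hypothetical stand-in for the paper's HDLA)
cnn = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),          # -> 1D feature vector
])

# X: pre-processed CT slices, y: labels (viral / bacterial / COVID-19)
X = np.random.rand(100, 128, 128, 1)          # placeholder data
y = np.random.randint(0, 3, size=100)

features = MinMaxScaler().fit_transform(cnn.predict(X, verbose=0))
clf = SVC().fit(features, y)                  # second-stage classifier
```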

Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.

404 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many environments frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desired Type-I error for models estimated using PQL2, but failed for almost all combinations of MQL. The power of the test was adequate for most combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
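
As an illustration of the data structure involved, a minimal simulation of a two-level binary dataset with random cluster intercepts (a sketch with assumed parameter values; the study's simulations were run in MLwiN):

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, cluster_size = 50, 30
sigma_u = 1.0                                   # between-cluster SD

# latent-scale intra-cluster correlation for a logistic model:
# ICC = sigma_u^2 / (sigma_u^2 + pi^2 / 3)
icc = sigma_u**2 / (sigma_u**2 + np.pi**2 / 3)

u = rng.normal(0.0, sigma_u, n_clusters)        # random intercepts
x = rng.normal(size=(n_clusters, cluster_size)) # level-1 covariate
eta = -0.5 + 0.8 * x + u[:, None]               # linear predictor
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta))) # binary responses

print(f"ICC = {icc:.3f}, overall event rate = {y.mean():.3f}")
```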

Keywords: Goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, type-I error, penalized quasi-likelihood, power, quasi-likelihood.

403 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation, where a person proceeds sequentially through a list of images, labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues iteratively. The algorithm used for image selection is a deep learning algorithm, based on a U-shaped architecture, which quantifies the presence of unseen data in each image in order to find the images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data curated by this algorithm resulted in a model with better performance than a model produced by sequentially labeling the same amount of data. Performance similar to that of a model trained on exhaustive labeling of the whole dataset is also achieved. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
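
A sketch of the iterative selection loop (the novelty model is replaced by a random stand-in so the loop is runnable; in the paper a U-shaped network supplies the per-image novelty maps):

```python
import numpy as np

rng = np.random.default_rng(0)

def novelty_scores(images):
    """Stand-in for the U-shaped novelty network: the real model outputs
    a per-pixel map quantifying unseen data; random maps keep this runnable."""
    maps = rng.random((len(images), 32, 32))     # hypothetical novelty maps
    return maps.reshape(len(images), -1).mean(axis=1)

def select_next_batch(unlabeled_images, batch_size=20):
    # rank unlabeled images by novelty and pick the most novel for labeling
    scores = novelty_scores(unlabeled_images)
    return np.argsort(scores)[::-1][:batch_size]

unlabeled_pool = np.zeros((500, 256, 256))       # placeholder wafer images
to_label = select_next_batch(unlabeled_pool)
print("label these next:", to_label)
# loop: label the selected images, retrain detector + novelty model, repeat
```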

Keywords: Computer vision, deep learning, object detection, semiconductor.

402 Forest Risk and Vulnerability Assessment: A Case Study from East Bokaro Coal Mining Area in India

Authors: Sujata Upgupta, Prasoon Kumar Singh

Abstract:

The expansion of large-scale coal mining into forest areas is a potential hazard for local biodiversity and wildlife. The objective of this study is to provide a picture of the threat that coal mining poses to the forests of the East Bokaro landscape. The vulnerable forest areas at risk have been assessed and the priority areas for conservation have been presented. The forested areas at risk in the current scenario have been assessed and compared with past conditions using a classification and buffer-based overlay approach. Forest vulnerability has been assessed using an analytical framework based on systematic indicators and composite vulnerability index values. The results indicate that more than 4 km² of forest has been lost from 1973 to 2016. Large patches of forest have been diverted to coal mining projects. Forests in the northern part of the coalfield, within a 1-3 km radius around the coal mines, are at immediate risk. The original contiguous forests have been converted into fragmented and degraded forest patches. Most of the collieries are located within or very close to the forests, thus threatening the biodiversity and hydrology of the surrounding regions. Based on the estimated vulnerability values, it was concluded that more than 90% of the forested grids in East Bokaro are highly vulnerable to mining. The forests in the sub-districts of Bermo and Chandrapura have been identified as the most vulnerable to coal mining activities. This case study would add to the capacity of forest managers and mine managers to address the risk and vulnerability of forests at a small landscape level in order to achieve sustainable development.
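
A toy sketch of the buffer-based overlay idea: classify forest locations by distance to the nearest mine (the coordinates, buffer classes and thresholds below are illustrative assumptions, not the study's GIS layers):

```python
import numpy as np

forest_xy = np.random.uniform(0, 30, size=(1000, 2))   # forest pixels, km
mines_xy = np.array([[10.0, 12.0], [18.0, 7.5]])       # mine locations, km

# distance from each forest pixel to the nearest mine
d = np.min(np.linalg.norm(forest_xy[:, None] - mines_xy[None], axis=2), axis=1)

# risk buffers: <1 km immediate, 1-3 km high, >3 km lower
risk = np.select([d < 1.0, d < 3.0], ["immediate", "high"], default="lower")
print({r: int((risk == r).sum()) for r in np.unique(risk)})
```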

Keywords: Coal mining, forest, indicators, vulnerability.

401 Emerging Technologies in European Aeronautics: How Collaborative Innovation Efforts are Shaping the Industry

Authors: Nikola Radovanovic, Petros Gkotsis, Mathieu Doussineau

Abstract:

Aeronautics is regarded as a strategically important sector for European competitiveness, and it has been at the heart of European entrepreneurial development since the industry was born. Currently, the EU is the world leader in the production of civil aircraft, including helicopters, aircraft engines, parts, and components, and it records a trade surplus in aerospace products, which are exported all over the globe. The industry also shows above-average investment in research and development, as demonstrated by the patent activity in this area. The post-pandemic recovery of the industry will partly depend on the possibilities to streamline collaboration in further research and innovation activities. Aeronautics features as one of the often-selected priority domains in smart specialisation, which represents the main regional and national approach to developing and implementing innovation policies in Europe. The basis for the selection of priority domains for smart specialisation lies in the mapping of innovative potential, with research and patent activities being among the key elements of this analysis. This research is aimed at identifying characteristics of the trends in research and patent activities in the regions and countries that base their competitiveness on the aeronautics sector. It is also aimed at determining the scope and patterns of collaboration in aeronautics between innovators from the European regions, focusing on revealing new technology areas that emerge from these collaborations. For this purpose, we developed a methodology based on desk research and on analysis of the PATSTAT patent database as well as the databases of R&I framework programmes.

Keywords: aeronautics, smart specialisation, innovation policy, regional policy.

400 A Survey of WhatsApp as a Tool for Instructor-Learner Dialogue, Learner-Content Dialogue, and Learner-Learner Dialogue

Authors: Ebrahim Panah, Muhammad Yasir Babar

Abstract:

Thanks to the development of online technology and social networks, people are able to communicate as well as learn. WhatsApp is a social network that is steadily gaining popularity. The app can be used for communication as well as education: it supports instructor-learner, learner-learner, and learner-content interactions; however, very little is known about these potentials of WhatsApp. The current study was undertaken to investigate university students' perceptions of WhatsApp used as a tool for instructor-learner dialogue, learner-content dialogue, and learner-learner dialogue. The study adopted a survey approach and distributed a questionnaire, developed in Google Forms, to 54 university students (11 males and 43 females). The obtained data were analyzed using SPSS version 20. The results of the data analysis indicate that students have positive attitudes towards WhatsApp as a tool for instructor-learner dialogue: it is easy to reach the lecturer (4.07), the instructor gives me valuable feedback on my assignment (4.02), and the instructor is supportive during course discussion and offers continuous support to the class (4.00); for learner-content dialogue: WhatsApp allows me to academically engage with lecturers anytime, anywhere (4.00), it helps to send graphics such as pictures or charts directly to the students (3.98), and it also provides out-of-class, extra learning materials and homework (3.96); and for learner-learner dialogue: WhatsApp is a good tool for sharing knowledge with others (4.09), WhatsApp allows me to academically engage with peers anytime, anywhere (4.07), and we can interact with others through group discussion (4.02). It was also found that there are significant positive correlations between students' perceptions of Instructor-Learner Dialogue (ILD), Learner-Content Dialogue (LCD), Learner-Learner Dialogue (LLD) and the WhatsApp application in the classroom. The findings of the study have implications for lecturers, policy makers and curriculum developers.

Keywords: Instructor-learner dialogue, learners-contents dialogue, learner-learner dialogue, WhatsApp.

399 Support Vector Regression for Retrieval of Soil Moisture Using Bistatic Scatterometer Data at X-Band

Authors: Dileep Kumar Gupta, Rajendra Prasad, Pradeep Kumar, Varun Narayan Mishra, Ajeet Kumar Vishwakarma, Prashant Kumar Srivastava

Abstract:

An approach was evaluated for the retrieval of the soil moisture of a bare soil surface using bistatic scatterometer data in the angular range of 20° to 70° at VV- and HH-polarization. The microwave data were acquired by a specially designed X-band (10 GHz) bistatic scatterometer. A linear regression analysis was performed between the scattering coefficients and the soil moisture content to select the incidence angle most suitable for retrieval of soil moisture content; the 25° incidence angle was found to be the most suitable. Support vector regression was used to approximate the function described by the input-output relationship between the scattering coefficient and the corresponding measured values of soil moisture content. The performance of the support vector regression algorithm was evaluated by comparing the observed and estimated soil moisture content using the statistical performance indices %Bias, root mean squared error (RMSE) and Nash-Sutcliffe Efficiency (NSE). At HH-polarization, the values of %Bias, RMSE and NSE were 2.9451, 1.0986 and 0.9214, respectively; at VV-polarization, they were 3.6186, 0.9373 and 0.9428.
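
A minimal sketch of the SVR step and the three performance indices (the data are synthetic stand-ins; the kernel and C are assumptions):

```python
import numpy as np
from sklearn.svm import SVR

# hypothetical data: scattering coefficients (dB) -> soil moisture (%)
sigma0 = np.random.uniform(-25, -5, 60).reshape(-1, 1)
moisture = 0.9 * sigma0.ravel() + 30 + np.random.normal(0, 1, 60)

model = SVR(kernel="rbf", C=10.0).fit(sigma0, moisture)
pred = model.predict(sigma0)

# performance indices used in the paper
bias_pct = 100 * np.sum(pred - moisture) / np.sum(moisture)   # %Bias
rmse = np.sqrt(np.mean((pred - moisture) ** 2))               # RMSE
nse = 1 - np.sum((moisture - pred) ** 2) / np.sum((moisture - moisture.mean()) ** 2)
print(f"%Bias={bias_pct:.3f}  RMSE={rmse:.3f}  NSE={nse:.3f}")
```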

Keywords: Bistatic scatterometer, soil moisture, support vector regression, RMSE, %Bias, NSE.

398 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach

Authors: Rajvir Kaur, Jeewani Anupama Ginige

Abstract:

With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from clinician-oriented to technology-oriented practice. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree) and Artificial Neural Networks (ANN). The evaluation is carried out using the standard evaluation metrics Precision (P), Recall (R), F1-score and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when these ML classifiers were tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) than the other classifiers.
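
A minimal sketch of such a comparison using scikit-learn (the built-in Wisconsin breast cancer dataset stands in for the paper's datasets, and hyperparameters are defaults rather than the study's settings):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

classifiers = {
    "DT": DecisionTreeClassifier(), "SVM": SVC(),
    "k-NN": KNeighborsClassifier(), "LogReg": LogisticRegression(max_iter=5000),
    "Bagged Tree": BaggingClassifier(), "ANN": MLPClassifier(max_iter=2000),
}
for name, clf in classifiers.items():
    p = clf.fit(Xtr, ytr).predict(Xte)
    print(f"{name:12s} P={precision_score(yte, p):.3f} R={recall_score(yte, p):.3f} "
          f"F1={f1_score(yte, p):.3f} Acc={accuracy_score(yte, p):.3f}")
```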

Keywords: Artificial neural networks, breast cancer, cancer dataset, classifiers, cervical cancer, F-score, logistic regression, machine learning, precision, recall, support vector machine.

397 The Problems of Legal Regulation of Intellectual Property Rights in Innovation Activities in Russia (Institutional Approach)

Authors: Zhanna Mingaleva, Irina Mirskikh

Abstract:

Part IV of the Civil Code of the Russian Federation, dedicated to the legal regulation of intellectual property rights, came into force in 2008. It is the first attempt at codification in the intellectual property sphere in Russia, which is why a lot of new norms appeared. The main problem of the Russian Civil Code (Part IV) is that many rules (norms of law) contradict the norms of international intellectual property law (i.e., protection of inventions, creations, ideas, know-how, trade secrets, and innovations). Intellectual property rights protect innovations and creations and reward innovative and creative activity. Intellectual property rights are international in character, and in that respect they fit rather well with the economic reality of the global economy. Inventors prefer not to take out patents for inventions because it is a very difficult procedure that takes a lot of time and is very expensive. That is why they try to protect their inventions as ideas, know-how, and confidential information. An idea is the main element of any object of intellectual property (creation, invention, innovation, know-how, etc.), but ideas are not protected by the Civil Code of the Russian Federation. The aim of the paper is to reveal the main problems of the legal regulation of intellectual property in Russia and to suggest possible solutions. The authors have raised these essential issues through different activities: through a panel survey and questionnaires spread among the participants in intellectual activities, the main problems of implementing innovations and protecting ideas and know-how were identified. The implementation of the research results will help to solve economic and legal problems of innovation, the transfer of innovations and intellectual property.

Keywords: Innovation activities, intellectual property rights, know-how, patents, indicators of innovation activities.

396 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work, the concentration of steepwater from the corn starch industry is monitored using an ultrafiltration membrane. The aim was to examine the conditions of ultrafiltration of steepwater using a 2.5 nm membrane. The parameters varied during ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steepwater obtained from the starch wet milling plant "Jabuka", Pancevo. Ultrafiltration was carried out on a single-channel membrane of 250 mm length, with an inner diameter of 6.8 mm and an outer diameter of 10 mm. The membrane is made of α-Al2O3 with a TiO2 layer and was obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 L h-1 and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the change in permeate flux, the dry matter content of permeate and retentate, and the absorbance changes of the permeate and retentate were monitored. The experimental results showed that the flux reaches a maximum of about 40 L m-2 h-1. For the responses obtained, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values well, with a coefficient of determination of 0.96 for flux. The dry matter content of the retentate increased by about 6%, while the dry matter content of the permeate was reduced by about 35-40%; overall, during steepwater ultrafiltration the permeate contains 40% less dry matter than the feed.
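
A minimal sketch of fitting the second-degree (response-surface) polynomial to flux data (the synthetic measurements below are stand-ins for the experimental points):

```python
import numpy as np

# hypothetical flux measurements over the experimental ranges
P = np.random.uniform(1, 3, 20)        # transmembrane pressure, bar
Q = np.random.uniform(100, 200, 20)    # flow rate, L/h
flux = 5 + 8*P + 0.1*Q - 1.2*P**2 - 1e-4*Q**2 + 0.01*P*Q \
       + np.random.normal(0, 0.5, 20)  # L m^-2 h^-1

# full second-degree response-surface model:
# flux ~ b0 + b1*P + b2*Q + b3*P^2 + b4*Q^2 + b5*P*Q
A = np.column_stack([np.ones_like(P), P, Q, P**2, Q**2, P*Q])
coef, *_ = np.linalg.lstsq(A, flux, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((flux - pred)**2) / np.sum((flux - flux.mean())**2)
print("coefficients:", np.round(coef, 4), " R^2 =", round(r2, 3))
```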

Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.

395 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam

Authors: N. Chegeni, M. J. Tahmasebi Birgani

Abstract:

Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use and low-cost dose distribution algorithms are desirable for testing and verifying the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and the depth dose on the central axis based on the equivalent square concept. This method was then generalized to calculate profiles at any depth and for several field shapes, regular or irregular, under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for the irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies and for 45° wedges, which were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields, and the calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points; however, the maximum γ observed was about 1.10, occurring in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm to calculate the dose distribution for some treatment fields in conventional radiotherapy.
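
A minimal 1D sketch of the gamma-index comparison with the 3%/3 mm criteria (global dose normalization and the toy profiles are assumptions; clinical gamma tools operate on 2D/3D dose grids):

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
    """1D gamma index: dd is the fractional dose criterion (3%),
    dta the distance-to-agreement in mm; dose normalized globally."""
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dose_term = (dose_eval - di) / (dd * dose_ref.max())
        dist_term = (x - xi) / dta
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas  # points pass where gamma <= 1

x = np.linspace(-50, 50, 101)                  # positions, mm
ref = np.exp(-(x / 30) ** 4)                   # toy measured profile
calc = np.exp(-((x - 1) / 30) ** 4) * 1.01     # toy calculated profile
print("pass rate:", (gamma_index(ref, calc, x) <= 1).mean())
```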

Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.

394 Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision

Authors: Nastaran Hajiheydari, Mohammad Soltani Delgosha

Abstract:

Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are provided in order to rank and choose the most desirable suppliers. In the real business environment, our attitudes, and the choices made in uncertain and indecisive situations, cannot be expressed in a crisp framework; intuitionistic fuzzy sets (IFSs) can handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which is used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend the intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem in an IF GDM environment. The present study develops an IF VIKOR method for a GDM situation. A model is presented to calculate the criterion weights based on an entropy measure, and the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
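
A crisp (non-fuzzy) VIKOR sketch that shows the ranking logic the paper extends to intuitionistic fuzzy numbers (the matrix, the weights and v = 0.5 are illustrative assumptions):

```python
import numpy as np

X = np.array([[7, 8, 5],      # vendors x criteria (benefit criteria)
              [6, 9, 7],
              [8, 6, 6]], float)
w = np.array([0.4, 0.35, 0.25])          # criterion weights

f_best, f_worst = X.max(0), X.min(0)     # ideal / anti-ideal values
norm = w * (f_best - X) / (f_best - f_worst)
S, R = norm.sum(1), norm.max(1)          # group utility / individual regret

v = 0.5                                  # weight of the majority strategy
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("VIKOR ranking (best first):", np.argsort(Q))
```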

Keywords: Group decision making, intuitionistic fuzzy entropy measure, intuitionistic fuzzy set, vendor selection VIKOR.

393 An Embedded System for Artificial Intelligence Applications

Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou

Abstract:

Conventional approaches to implementing logic programming applications on embedded systems are solely of a software nature. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. Conversely, hardware implementations that are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximately 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
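
For orientation, the unification step that the extended RISC core performs in hardware can be sketched in a few lines of software (a minimal first-order unification without an occurs-check; the variable naming convention is an assumption):

```python
def unify(a, b, subst=None):
    """Minimal first-order unification: variables are strings starting
    with '?', terms are atoms or tuples. Illustrative only -- the paper
    performs this step in the extended RISC hardware."""
    if subst is None:
        subst = {}
    a, b = subst.get(a, a), subst.get(b, b)   # dereference bindings
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith("?"):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None                    # propagate failure
        return subst
    return None                                # clash: unification fails

print(unify(("parent", "?x", "mary"), ("parent", "john", "?y")))
# -> {'?x': 'john', '?y': 'mary'}
```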

Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.

392 Auditor with the Javanese Characters and Non Javanese in Audit Firm: Conflict of Interest

Authors: Krisna Damayanti, Lilis Ardini

Abstract:

Many issues are often heard about the relationship between auditors and their stakeholders in auditing practice. They appear in perspectives that bring out a variety of phenomena arising from audit practice: greed and a lack of appreciation for the independence of the audit profession and its professional code of ethics. This is a logical consequence of practicing capitalism in accounting. The main purpose of this article is to uncover existing auditing practices in Indonesia, especially in Java, associated with the strong influence of Javanese culture: reluctant/"shy", polite, "legowo" (gratefully accepting), "ngemong" (friendly), "not mentholo" (lenient), "tepo seliro" (tolerant), "ngajeni" (respectful), and "acquiescent". It also reveals the relationship with non-Javanese culture in facing conflicts of interest in the practical auditing world. The method used is an interpretive approach that emphasizes the role of language, interpretation and understanding, and sees social reality as something other than a label, name or concept. Global auditing practices in each country carry particular cultures that affect the standards set, and those regulatory standards result from the adaptation of the IAS. The majority of regulators in Indonesia are of Javanese origin, so Javanese culture is embedded in audit practice there, and these conditions lead auditors in Java to behave similarly; sometimes this interferes with the standard code of conduct that an auditor must execute. Auditors who live in Java have characteristics of Javanese culture that are hard to avoid in audit practice. In practice, however, the auditors remain relevant in their profession.

Keywords: Auditors, Java, character, profession, code of ethics, client.

391 Human Trafficking: The Kosovar Perspective of Fighting the Phenomena through Police and Civil Society Cooperation

Authors: Samedin Mehmeti

Abstract:

The rationale behind this study is to consider combating and preventing the phenomenon of trafficking in human beings from a multidisciplinary perspective that involves many layers of society. Trafficking in human beings is an abhorrent phenomenon that severely affects the victims and their families in both human and material aspects, sometimes causing irreversible damage. In countries with weak economic development and an extremely young and dynamic population, such as Kosovo, the longer-term effects of this phenomenon, without proper measures to prevent and control it, can cause tremendous damage to society. Given that complete eradication of this phenomenon is almost impossible, efforts should be concentrated at least on prevention and control. Treating trafficking in human beings with traditional police tactics, methods and proceedings cannot bring satisfactory results. There is no doubt that a multidisciplinary approach is an irreplaceable requirement; in other words, a combination of authentic and functional proactive and reactive methods, techniques and tactics. Obviously, police must exercise their role in preventing and combating trafficking in human beings, a role sanctioned by law; however, the police role and contribution cannot by any means be considered complete if all segments of society are not included in these efforts. Naturally, civil society should have an important share in these collaborative and interactive efforts, especially in preventive activities such as raising awareness of trafficking risks and damages, proactive engagement in drafting appropriate legislation and strategies, monitoring law enforcement, and direct or indirect involvement in protective and supporting activities that benefit the victims of trafficking.

Keywords: Civil society, cooperation, police, trafficking in human beings.

390 Membrane Distillation Process Modeling: Dynamical Approach

Authors: Fadi Eleiwi, Taous Meriem Laleg-Kirati

Abstract:

This paper presents a complete dynamic model of a membrane distillation process. The model contains two consistent dynamic sub-models: a 2D advection-diffusion equation modeling the whole process, and a modified heat equation modeling the membrane itself. The complete model describes the temperature diffusion phenomenon across the feed, the membrane, the permeate containers and the boundary layers of the membrane. It gives an online and complete temperature profile for each point in the domain, explains the heat conduction and convection mechanisms that take place inside the process in terms of mathematical parameters, and accounts for process behavior during the transient and steady-state phases. The process can be monitored for any sudden change in performance at any instant of time. In addition, the model assists in maintaining production rates as desired and provides recommendations during membrane fabrication stages. System performance and parameters can be optimized and controlled using this complete dynamic model. The evolution of the membrane boundary temperature with time, the vapor mass transfer along the process, and the temperature difference between the membrane boundary layers are depicted and included. Simulations were performed on the complete model with real membrane specifications. The plots show consistency between the 2D advection-diffusion model and the expected behavior of the system as well as the literature. The evolution of heat inside the membrane, from the transient response until the steady-state response, is illustrated for fixed and varying times.
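
For orientation, a generic 2D advection-diffusion equation for a temperature field T(x, y, t) has the form below (a hedged sketch: the paper's exact coefficients, source terms and boundary conditions are not reproduced here):

```latex
\frac{\partial T}{\partial t}
  + u_x \frac{\partial T}{\partial x}
  + u_y \frac{\partial T}{\partial y}
  = \alpha \left( \frac{\partial^2 T}{\partial x^2}
                + \frac{\partial^2 T}{\partial y^2} \right)
```

Here (u_x, u_y) is the flow velocity, carrying the convection mechanism, and α is the thermal diffusivity, carrying the conduction mechanism.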

Keywords: Membrane distillation, Dynamical modeling, Advection-diffusion equation, Thermal equilibrium, Heat equation.

389 Cyber Warriors for Cyber Security and Information Assurance- An Academic Perspective

Authors: Ronald F. Gonzales, Gordon W. Romney, Pradip Peter Dey, Mohammad Amin, Bhaskar Raj Sinha

Abstract:

A virtualized and virtual approach is presented for academically preparing students to engage successfully, at a strategic level, with both structured and unstructured concerns and measures in the area of cyber security and information assurance. The Master of Science in Cyber Security and Information Assurance (MSCSIA) is a professional degree for those who endeavor, through technical and managerial measures, to ensure the security, confidentiality, integrity, authenticity, control, availability and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The emphasis of the MSCSIA program uniquely includes hands-on academic instruction using virtual computers. This past year, 2011, the NU facility became fully operational, using a system architecture that provides a Virtual Education Laboratory (VEL) accessible to both onsite and online students. The first student cohort completed their MSCSIA training on March 2, 2012, after fulfilling 12 courses, for a total of 54 units of college credit. The rapid-pace scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in consideration of the globalization penetration breaches present in today's world of cyber security. In addition, we present current NU practices to mitigate risks.

Keywords: Cyber security, information assurance, mitigate risks, virtual machines, strategic perspective.

388 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (that represent the bare earth) and non-ground points (that represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
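
A 1D cross-section sketch of the weighted smoothing-spline idea: points far above the current fit (likely canopy) are iteratively down-weighted so the spline settles on the terrain. The weight function and thresholds here are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.sort(np.random.uniform(0, 100, 500))        # along-track position, m
ground = 0.05 * x + 2 * np.sin(x / 15)             # true terrain (toy)
z = ground + np.random.choice([0, 12], 500, p=[0.7, 0.3])  # 30% canopy hits

w = np.ones_like(z)
for _ in range(5):                                  # iterative reweighting
    spline = UnivariateSpline(x, z, w=w, s=len(x) * 4.0)
    resid = z - spline(x)
    w = np.where(resid > 0.5, 0.01, 1.0)            # down-weight points above fit

dtm = spline(x)                                     # approximate terrain
print("RMSE vs. true terrain:", np.sqrt(np.mean((dtm - ground) ** 2)).round(2))
```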

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.

387 Study of the Thermal Performance of Bio-Sourced Materials Used as Thermal Insulation in Buildings under Humid Tropical Climate

Authors: Guarry Montrose, Ted Soubdhan

Abstract:

In the fight against climate change, the energy-consuming building sector must also be taken into account. In this context, the thermal insulation of buildings using bio-based materials is an interesting solution, and the thermal performance of several materials of this type has been studied. The advantages of these natural materials of plant origin are multiple: they are biodegradable, low-cost, renewable and readily available. The use of bio-based materials is widespread in the building sector as a way to replace conventional insulation materials with natural ones. Vegetable fibers are very important because they have good thermal behaviour and good insulating properties. The aim of using bio-sourced materials is in line with the logic of energy control and environmental protection; the approach is to make the inhabitants of houses comfortable and to reduce their energy consumption (energy efficiency). In this research, we present the results of studies carried out on the thermal conductivity of banana leaves, latan leaves, vetiver fibers, palm kernel fibers, sargassum, coconut leaves, sawdust and bulk sugarcane leaves. The study of thermal conductivity was carried out in two ways: on the one hand using the flash method, and on the other hand using a so-called hot-box experiment. We discuss and highlight the effect of a number of influential factors, such as moisture and air pockets present in the samples, on the thermophysical properties of these materials, in particular thermal conductivity. Finally, the result of a thermal performance test of banana leaves on a roof in Haiti is also presented in this work.
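
For the flash method, thermal diffusivity follows from Parker's classical relation; a small worked example with assumed sample values:

```python
import numpy as np

# Parker's relation for the flash method:
# alpha = 1.38 * L^2 / (pi^2 * t_half)
L = 5e-3          # sample thickness, m (assumed)
t_half = 12.0     # time to half of max rear-face temperature rise, s (assumed)
alpha = 1.38 * L**2 / (np.pi**2 * t_half)   # thermal diffusivity, m^2/s

rho, cp = 250.0, 1500.0    # assumed density (kg/m^3) and heat capacity (J/kg.K)
k = alpha * rho * cp       # thermal conductivity, W/(m.K)
print(f"alpha = {alpha:.2e} m^2/s, k = {k:.3f} W/(m.K)")
```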

Keywords: Buildings, insulating properties, natural materials of plant origin, thermal performance.

386 A Specification-Based Approach for Retrieval of Reusable Business Component for Software Reuse

Authors: Meng Fanchao, Zhan Dechen, Xu Xiaofei

Abstract:

Software reuse can be considered the most realistic and promising way to improve software engineering productivity and quality. Automated assistance for software reuse involves the representation, classification, retrieval and adaptation of components. The representation and retrieval of components are important to software reuse in Component-Based Software Development (CBSD). However, current industrial component models mainly focus on implementation techniques and ignore the semantic information about components, so it is difficult to retrieve components that satisfy users' requirements. This paper presents a method of business component retrieval based on specification matching to support software reuse in enterprise information systems. First, a business component model oriented to reuse is proposed. In our model, a business data type is represented as a sign data type based on XML, which can express the variable business data types that describe the variety of business operations. Based on this model, we propose specification match relationships at two levels: the business operation level and the business component level. At the business operation level, we use the input business data types, the output business data types and the taxonomy of business operations to evaluate the similarity between business operations. At the business component level, we propose five specification matches between business components. To retrieve reusable business components, we propose similarity-degree measures to calculate the similarities between business components. Finally, an SQL-like business component retrieval command is proposed to help users retrieve approximate business components from a component repository.
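
A toy sketch of the operation-level similarity idea (the weighting and the Jaccard measure are illustrative assumptions; the paper additionally uses a taxonomy of business operations, omitted here):

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def operation_similarity(op1, op2, w_in=0.5, w_out=0.5):
    """Hypothetical similarity between two business operations based on
    their input/output business-data-type sets."""
    return w_in * jaccard(op1["in"], op2["in"]) + \
           w_out * jaccard(op1["out"], op2["out"])

create_order = {"in": {"Customer", "Product"}, "out": {"Order"}}
create_quote = {"in": {"Customer", "Product"}, "out": {"Quote"}}
print(operation_similarity(create_order, create_quote))  # -> 0.5
```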

Keywords: Business component, business operation, business data type, specification matching.

385 Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph

Authors: A. Badoud, M. Khemliche, S. Latreche

Abstract:

The main objective developed in this paper is to find a graphic technique for the modeling, simulation and diagnosis of industrial systems. This is especially important for a complex system such as a pressurized water nuclear reactor, with its various non-linearities and time scales, where the analytical approach is heavy and does not give a quick idea of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphic model, and the simulation software SYMBOLS 2000, specific to Bond Graphs, made it possible to validate the model and obtain the results given by the technical specifications. We introduce the analysis of the problems involved in fault localization and identification in complex industrial processes. We propose a fault detection method applied to diagnosis that determines the gravity of a detected fault, and we show the possibilities of applying the new diagnosis approaches to complex system control. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very complex as soon as the systems considered are no longer elementary. Faced with this complexity, we chose to resort to the Fault Detection and Isolation (FDI) method, analyzing the problem of its control, and to design a reliable diagnosis system capable of apprehending complex, spatially distributed dynamic systems, applied to a standard pressurized water nuclear reactor.
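
For reference, FDI with analytical redundancy relations boils down to comparing measurements against model predictions; a toy residual sketch under an assumed first-order model (all values illustrative, unrelated to the reactor model itself):

```python
import numpy as np

dt, tau, gain = 0.1, 5.0, 2.0
t = np.arange(0, 60, dt)
u = np.ones_like(t)                       # known input

x = np.zeros_like(t)                      # model prediction (Euler step)
for k in range(1, len(t)):
    x[k] = x[k-1] + dt * (-x[k-1] + gain * u[k-1]) / tau

y = x + np.random.normal(0, 0.02, len(t)) # measurement with sensor noise
y[t > 40] += 0.5                          # inject a sensor fault at t = 40 s

residual = y - x                          # analytical redundancy relation
fault = np.abs(residual) > 0.1            # detection threshold
print("fault first detected at t =", t[fault][0], "s")
```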

Keywords: Bond Graph, Modeling, Simulation, Monitoring, Analytical Redundancy Relations, Pressurized Water Reactor, Directed Graph.

384 Genetic Algorithm Based Approach for Actuator Saturation Effect on Nonlinear Controllers

Authors: M. Mohebbi, K. Shakeri

Abstract:

In real applications of active control systems to mitigate the response of structures subjected to severe external excitations, such as earthquake and wind-induced vibrations, the capacity of actuators is limited and the actuators therefore saturate. Hence, in designing controllers for linear and nonlinear structures under severe earthquakes, actuator saturation should be considered as a constraint. In this paper, the optimal design of active controllers for nonlinear structures considering actuator saturation has been studied. To this end, a method has been proposed based on defining an optimization problem that takes minimizing the maximum displacement of the structure as the objective and the limited capacity of the actuator as a constraint. To evaluate the effectiveness of the proposed method, a single-degree-of-freedom (SDF) structure with bilinear hysteretic behavior has been simulated under white-noise ground accelerations of different amplitudes. An active tendon control mechanism, comprising pre-stressed tendons and an actuator, and an instantaneous optimal control algorithm based on the extended nonlinear Newmark method have been used as the active control mechanism and algorithm. To enhance the efficiency of the controllers, the weights corresponding to displacement, velocity, acceleration and control force in the performance index have been found using a Distributed Genetic Algorithm (DGA). According to the results, the proposed method is effective in accounting for actuator saturation when designing optimal controllers for nonlinear frames. It has also been shown that actuator capacity and the average value of the required control force are two important factors in designing nonlinear controllers that account for actuator saturation.
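
A toy genetic-algorithm sketch of the weight search (the fitness function below is a placeholder for simulating the controlled structure, and all numbers are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_displacement(weights, force_cap=1.0):
    # stand-in for simulating the controlled nonlinear SDF structure:
    # penalize weight vectors whose implied control demand exceeds the cap
    demand = 0.5 * weights.sum()
    saturation_penalty = max(0.0, demand - force_cap)
    return 1.0 / (1.0 + weights @ np.array([3, 2, 1, 1])) + saturation_penalty

pop = rng.random((40, 4))                       # 40 candidate weight vectors
for gen in range(100):
    fitness = np.array([peak_displacement(w) for w in pop])
    parents = pop[np.argsort(fitness)[:20]]     # selection (minimization)
    children = (parents[rng.integers(0, 20, 20)] +
                parents[rng.integers(0, 20, 20)]) / 2   # crossover
    children += rng.normal(0, 0.05, children.shape)     # mutation
    pop = np.vstack([parents, np.clip(children, 0, 1)])

best = pop[np.argmin([peak_displacement(w) for w in pop])]
print("best weights:", best.round(3))
```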

Keywords: Active control, Actuator Saturation, Nonlinear, Optimization.

383 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. Yet electricity prices have a volatile but non-random character, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Both the shallow-ANN and DNN approaches can give successful results in forecasting problems. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. Forecasting studies have been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with respect to their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. DNN models give better forecasting performance compared to shallow-ANN models. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
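
A minimal sketch of the shallow-versus-deep comparison (synthetic stand-in data and layer sizes are assumptions; the study used actual Turkish day-ahead market data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

# hypothetical hourly features: [load, lagged price, temperature] -> price
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = 40 + 5*X[:, 0] + 3*X[:, 1] - 2*X[:, 2] + rng.normal(0, 1, 2000)

Xtr, Xte, ytr, yte = X[:1600], X[1600:], y[:1600], y[1600:]

shallow = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)      # 1 hidden layer
deep = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000)  # several layers

for name, model in [("shallow-ANN", shallow), ("DNN", deep)]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    print(f"{name}: MAE={mean_absolute_error(yte, pred):.3f} "
          f"MSE={mean_squared_error(yte, pred):.3f}")
```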

Keywords: Deep learning, artificial neural networks, energy price forecasting, Turkey.

382 Process Optimization and Automation of Information Technology Services in a Heterogenic Digital Environment

Authors: Tasneem Halawani, Yamen Khateeb

Abstract:

With customers' ever-increasing expectations of fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the most frequently requested IT services in a heterogenic digital environment with a widely spread customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating or automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, optimizing IT service request processes in a heterogenic digital environment with a widely spread customer base is challenging, yet achievable without compromising service quality or customers' added value. Further studies can focus on measuring the value of the eliminated or automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Keywords: Automation, customer value, heterogenic, integration, IT services, optimization, processes.

381 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for under-determined blind source separation, and the method is divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals under the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation, but also applies to any mixing matrix.
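
A toy sketch of the clustering idea behind mixing matrix estimation for sparse sources: observation vectors concentrate along the mixing-column directions, and the peaks of an angular histogram (playing the role of the potential function here) estimate those directions. All signals are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# true 2x4 mixing matrix: four source directions in the sensor plane
angles = np.deg2rad([15, 45, 75, 105])
A = np.vstack([np.cos(angles), np.sin(angles)])        # 2 x 4

# sparse sources: at each instant only some sources are active
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) < 0.25)
X = A @ S                                              # 2 x N observations

# keep high-energy samples, map each to an angle in [0, pi)
mask = np.linalg.norm(X, axis=0) > 0.5
theta = np.mod(np.arctan2(X[1, mask], X[0, mask]), np.pi)

# histogram peaks approximate the mixing directions
hist, edges = np.histogram(theta, bins=90, range=(0, np.pi))
top = np.sort(np.argsort(hist)[-4:])
centers = np.rad2deg((edges[top] + edges[top + 1]) / 2)
print("estimated mixing directions (deg):", centers.round(1))
```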

Keywords: Clustering algorithm, potential function, speech signal, the UBSS model.

380 Deployment of Beyond 4G Wireless Communication Networks with Carrier Aggregation

Authors: Bahram Khan, Anderson Rocha Ramos, Rui R. Paulo, Fernando J. Velez

Abstract:

With the growing demand for a new blend of applications, users' dependency on the internet is increasing day by day. Mobile internet users are paying more attention to their own experience, especially in terms of communication reliability, high data rates and service stability on the move. This increase in demand is causing saturation of the existing radio frequency bands. To address these challenges, researchers are investigating the best approaches; Carrier Aggregation (CA) is one of the newest innovations, which seems able to fulfill the demands of the future spectrum, and it is one of the most important features of Long Term Evolution-Advanced (LTE-Advanced). To reach the International Mobile Telecommunications-Advanced (IMT-Advanced) mobile requirements (1 Gb/s peak data rate), the CA scheme was presented by 3GPP; it can sustain a high data rate using widespread frequency bandwidth of up to 100 MHz. Technical issues such as the aggregation structure, its implementations, deployment scenarios, control signaling techniques, and challenges for the CA technique in LTE-Advanced, with consideration of backward compatibility, are highlighted in this paper. A performance evaluation in macro-cellular scenarios through a simulation approach is also presented, showing the benefits of applying CA and low-complexity multi-band schedulers to service quality and system capacity enhancement; it is concluded that the enhanced multi-band scheduler is less complex than the general multi-band scheduler and performs better for cell radii longer than 1800 m (and a PLR threshold of 2%).

Keywords: Component carrier, carrier aggregation, LTE-Advanced, scheduling, spectrum management.

379 Seismic Vulnerability Assessment of Masonry Buildings in Seismic Prone Regions: The Case of Annaba City, Algeria

Authors: Allaeddine Athmani, Abdelhacine Gouasmia, Tiago Ferreira, Romeu Vicente

Abstract:

Seismic vulnerability assessment of masonry buildings is a fundamental issue even for moderate-to-low seismic hazard regions. This fact is even more important when dealing with old structures such as those located in Annaba city (Algeria), the majority of which date back to the French colonial era from 1830. This category of buildings is at high risk due to its highly degraded state, heterogeneous materials and intrusive modifications to structural and non-structural elements. Furthermore, these buildings usually shelter a dense population, which is exposed to such risk. In order to undertake suitable seismic risk mitigation strategies and reinforcement processes for such structures, it is essential to estimate their seismic resistance capacity at a large scale. In this sense, two seismic vulnerability index methods with damage estimation have been adapted and applied to a pilot-scale building area located in the moderate seismic hazard region of Annaba city: the first based on the EMS-98 building typologies, and the second derived from the Italian GNDT approach. To perform this task, the authors took advantage of an existing data survey previously performed for other purposes. The results obtained from the application of the two methods were integrated and compared using a geographic information system (GIS) tool, with the ultimate goal of supporting the city council of Annaba in the implementation of risk mitigation and emergency planning strategies.
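
A sketch of a GNDT-style vulnerability index as a normalized weighted sum of surveyed parameter scores (the scores and weights below are illustrative placeholders, not the method's official tables):

```python
import numpy as np

# one building: per-parameter survey scores and parameter weights
scores = np.array([20, 5, 20, 45, 5, 25, 0, 25, 0, 0, 45])   # 11 parameters
weights = np.array([1.0, 0.25, 1.5, 0.75, 1.0, 0.5,
                    1.0, 0.25, 1.0, 0.25, 1.0])

iv_raw = scores @ weights
iv = 100 * iv_raw / (np.full(11, 45) @ weights)   # normalize to 0-100
print(f"vulnerability index I_v = {iv:.1f} / 100")
```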

Keywords: Annaba city, EMS98 concept, GNDT method, old city center, seismic vulnerability index, unreinforced masonry buildings.

378 An Approach to Image Extraction and Accurate Skin Detection from Web Pages

Authors: Moheb R. Girgis, Tarek M. Mahmoud, Tarek Abd-El-Hafeez

Abstract:

This paper proposes a system to extract images from web pages and then detect the skin color regions of these images. As part of the proposed system, using the BandObject control, we built a toolbar named 'Filter Tool Bar (FTB)' by modifying the Pavel Zolnikov implementation. The Yahoo! team provides the Yahoo! SDK API, which also supports image search and is really useful. In the proposed system, we introduce three new methods for extracting images from web pages (after loading the web page using the proposed FTB, before loading the web page physically from the localhost, and before loading the web page from any server). These methods overcome the drawback of the regular-expressions method for extracting images suggested by Ilan Assayag. The second part of the proposed system is concerned with detecting the skin color regions of the extracted images. We studied two well-known skin color detection techniques: the first based on the RGB color space, and the second based on the YUV and YIQ color spaces. We modified the second technique to overcome its failure on images with complex backgrounds by using the saturation parameter, obtaining accurate skin detection results. The performance of the proposed system in extracting images before and after loading the web page from the localhost or any server is evaluated in terms of the number of extracted images. Finally, the results of comparing the two skin detection techniques in terms of the number of pixels detected are presented.
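
For reference, one common form of the RGB-space technique is the explicit thresholding rule of Peer et al.; a minimal sketch (the paper's exact thresholds and its saturation-based YUV/YIQ modification may differ):

```python
import numpy as np

def skin_mask_rgb(img):
    """Classic RGB skin-color rule (Peer et al.); img is an HxWx3 uint8 array."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return ((r > 95) & (g > 40) & (b > 20) &
            (img.max(-1).astype(int) - img.min(-1).astype(int) > 15) &
            (np.abs(r - g) > 15) & (r > g) & (r > b))

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # stand-in image
print(skin_mask_rgb(img))
```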

Keywords: Browser Helper Object, Color spaces, Image and URL extraction, Skin detection, Web Browser events.

377 Promoting Local Products through One Village One Product and Customer Satisfaction

Authors: Wardoyo, Humairoh

Abstract:

In today's global competition, the world economy depends heavily upon high-technology and capital-intensive industries that are mainly owned by well-established, developed economies such as the United States of America, the United Kingdom, Japan, and South Korea. Indonesia, as a developing country, is also building its economic activities towards becoming an industrial country, although with a slightly different approach. For example, similar to the one village one product (OVOP) concept implemented in Japan, Indonesia adopted this concept by promoting local traditional products to improve the incomes of village people and to enhance local economic activities. Analyzing how the OVOP program increases local people's income and influences customer satisfaction was the objective of this paper. Behavioral intention to purchase and re-purchase, customer satisfaction and promotion are key factors for local products to play significant roles in improving local income and the economy of the region. The concept of OVOP and the key factors that influence the economic activities of local people and the region are described and explained in the paper. The results of the research, in a case study based on 300 respondents who were customers of a local restaurant in Tangerang City, Banten Province, Indonesia, indicated that local product, service quality and behavioral intention individually have a significant influence on customer satisfaction, whereas simultaneous tests of the variables indicated a positive and significant influence on behavioral intention through customer satisfaction as the intervening variable.

Keywords: Behavioral intention, customer satisfaction, local products, one village one product.

376 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for automatic classification of tumor stage and subtype.
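
A Python analogue of the GLCM step (the study used MATLAB; the ROI here is a random stand-in for a segmented PET tumor region):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# stand-in region of interest quantized to 32 gray levels
roi = np.random.randint(0, 32, (64, 64), dtype=np.uint8)

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi/2],
                    levels=32, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ["contrast", "homogeneity", "energy", "correlation"]}
print(features)
```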

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.
