Search results for: data mining techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29314

27424 Presenting a Knowledge Mapping Model According to a Comparative Study on Applied Models and Approaches to Map Organizational Knowledge

Authors: Ahmad Aslizadeh, Farid Ghaderi

Abstract:

Mapping organizational knowledge is an innovative concept and a useful instrument for representing, capturing, and visualizing implicit and explicit knowledge. Researchers have presented a diversity of methods, instruments, and techniques for mapping organizational knowledge toward particular goals. To apply these methods, it is necessary to know their requirements and the conditions under which each can be used. Integrating the identified knowledge mapping methods and comparing them would help knowledge managers select the appropriate method. This research was conducted to present a model and framework for mapping organizational knowledge. First, knowledge maps, their applications, and their necessity are introduced in order to extract a comparative framework and detect their structure. Next, the knowledge mapping models of researchers such as Eppler, Kim, Egbu, Tandukar, and Ebner are presented and surveyed. Finally, the models are compared and a superior model is introduced.

Keywords: knowledge mapping, knowledge management, comparative study, business and management

Procedia PDF Downloads 396
27423 Virtual Metering and Prediction of Heating, Ventilation, and Air Conditioning Systems Energy Consumption by Using Artificial Intelligence

Authors: Pooria Norouzi, Nicholas Tsang, Adam van der Goes, Joseph Yu, Douglas Zheng, Sirine Maleej

Abstract:

In this study, virtual meters are designed and used for energy balance measurements of an air handling unit (AHU). The method aims to replace traditional physical sensors in heating, ventilation, and air conditioning (HVAC) systems with simulated virtual meters. Because these systems are hard to manage and monitor, many HVAC systems have a high level of inefficiency and energy wastage. Virtual meters are implemented and applied in an actual HVAC system, and the result confirms the practicality of mathematical sensors as an alternative means of energy measurement. Most residential buildings and offices are not equipped with advanced sensors, and adding, exploiting, and monitoring sensors and measurement devices in existing systems can cost thousands of dollars. The first purpose of this study is to provide an energy consumption rate based on available sensors, without any physical energy meters, and to prove the performance of virtual meters in HVAC systems as reliable measurement devices. To demonstrate this concept, mathematical models are created for AHU-07, located in building NE01 of the British Columbia Institute of Technology (BCIT) Burnaby campus. The models are created and integrated with the system’s historical data and physical spot measurements, and the actual measurements are investigated to prove the models' accuracy. Based on preliminary analysis, the resulting mathematical models successfully reproduce energy consumption patterns, and it is concluded confidently that the results of the virtual meter will be close to those a physical meter could achieve. In the second part of this study, the use of virtual meters is further assisted by artificial intelligence (AI) in the HVAC systems of buildings to improve energy management and efficiency. Using a data mining approach, the virtual meters’ readings are recorded as historical data, and HVAC system energy consumption prediction is implemented in order to harness large energy savings and manage the demand and supply chain effectively. Energy prediction can lead to energy-saving strategies and opens a window for predictive control aimed at lower energy consumption. To address these challenges, energy prediction can optimize the HVAC system and automate energy consumption management to capture savings. This study also investigates the possibility of AI solutions for autonomous HVAC efficiency that would allow a quick and efficient response to energy consumption and cost spikes in the energy market.
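
As a rough illustration of the virtual-meter idea, the sketch below fits a regression model on commonly available AHU sensor signals so that it can stand in for a physical energy meter. All sensor names, air properties, and figures are hypothetical placeholders, not details from the study:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical AHU trend logs: supply/return air temperatures (deg C),
# airflow (m^3/s), and spot-measured coil power (kW) as the target.
rng = np.random.default_rng(9)
t_supply = rng.uniform(12, 18, 2000)
t_return = rng.uniform(20, 26, 2000)
airflow = rng.uniform(2, 10, 2000)

# Sensible heat balance: Q = rho * cp * V_dot * dT (assumed air properties).
rho, cp = 1.2, 1.005
power_kw = rho * cp * airflow * (t_return - t_supply) + rng.normal(0, 0.5, 2000)

# The "virtual meter": a model fit on available sensors that can then
# replace the physical energy meter in day-to-day monitoring.
X = np.column_stack([t_supply, t_return, airflow,
                     airflow * (t_return - t_supply)])
meter = LinearRegression().fit(X, power_kw)
print("R^2 on training data:", meter.score(X, power_kw))
```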

Keywords: virtual meters, HVAC, artificial intelligence, energy consumption prediction

Procedia PDF Downloads 100
27422 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear behaviour in experimental data. The method can produce an offline reduced-order model that approximates the experimental data, follows the data and the order of the system, and matches the experimental data over a range of frequency ratios. In this study, the method is compared across different experimental data sets, and the influence of the chosen reduction order on obtaining the best and most sufficient matching condition is investigated for both the imaginary and real parts of the frequency response curve. Finally, the effect of the number of reduced orders, an important parameter for nonlinear experimental data, is explained further.
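
The abstract does not name the specific reduction algorithm, so the following is a minimal sketch of one standard technique, square-root balanced truncation, which reduces a state-space model and lets the full and reduced frequency responses be compared; the example system is randomly generated:

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd
from scipy.signal import StateSpace, freqresp

def balanced_truncation(A, B, C, D, order):
    """Square-root balanced truncation of a stable LTI state-space model."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                      # s = Hankel singular values
    T = Lc @ Vt.T[:, :order] / np.sqrt(s[:order])  # balancing transform
    Ti = (U[:, :order] / np.sqrt(s[:order])).T @ Lo.T
    return StateSpace(Ti @ A @ T, Ti @ B, C @ T, D)

# Random stable 8th-order example, reduced to order 3.
rng = np.random.default_rng(8)
A = rng.normal(size=(8, 8))
A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(8)  # force stability
B, C, D = rng.normal(size=(8, 1)), rng.normal(size=(1, 8)), np.zeros((1, 1))

w, H_full = freqresp(StateSpace(A, B, C, D))
_, H_red = freqresp(balanced_truncation(A, B, C, D, order=3), w=w)
print("max frequency-response error:", np.abs(H_full - H_red).max())
```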

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 396
27421 Rural Water Management Strategies and Irrigation Techniques for Sustainability: A Nigerian Case Study of Kwara State

Authors: Faith Eweluegim Enahoro-Ofagbe

Abstract:

Water is essential for sustaining life. As a limited resource, effective water management is vital. Water scarcity has become more common due to the effects of climate change, land degradation, deforestation, and population growth, especially in rural communities, which are more susceptible to water-related issues such as water shortages and water-borne diseases, due to the unsuccessful implementation of water policies and projects in Nigeria. Since rural communities generate the majority of agricultural products, they significantly impact water management for sustainability. Developing methods to advance this goal for residential and agricultural use, now and in the future, is a challenge for rural residents. This study evaluated rural water supply systems and irrigation management techniques for conserving water in Kwara State, North-Central Nigeria. It suggests measures to conserve water resources for sustainability, off-season farming, and socioeconomic security that would remedy water degradation and unemployment, one of the causes of insecurity in the country, including the use of fabricated or locally made irrigation equipment affordable to rural farmers, among other recommendations. Questionnaires were distributed to respondents in the study area for a quantitative evaluation of irrigation practices. For physicochemical investigation, samples were also gathered from the available water sources. According to the study's findings, 30 percent of farmers adopted intelligent irrigation management techniques to conserve water resources, saving 45% of the water previously used for irrigation, while 70% of farmers practice seasonal farming. Irrigation water is drawn from river channels, streams, and unlined and unprotected wells. 60% of rural residents rely on private boreholes for their water needs, while 40% rely on government-supplied rural water. Therefore, the government must develop additional water projects, raise awareness, and offer irrigation techniques that are simple to adopt for water management, increasing socio-economic productivity, security, and water sustainability.

Keywords: water resource management, sustainability, irrigation, rural water management, irrigation management technique

Procedia PDF Downloads 99
27420 Kinoform Optimisation Using Gerchberg-Saxton Iterative Algorithm

Authors: M. Al-Shamery, R. Young, P. Birch, C. Chatwin

Abstract:

Computer Generated Holography (CGH) is employed to create digitally defined coherent wavefronts. A CGH can be created using different techniques, such as the detour-phase technique or direct phase modulation to create a kinoform. The detour-phase technique was one of the first techniques used to generate holograms digitally. Its disadvantage is that the reconstructed image often has poor quality due to the limited dynamic range that can be recorded using a medium with reasonable spatial resolution. The kinoform (phase-only hologram) is an alternative technique. In this method, the phase of the original wavefront is recorded but the amplitude is constrained to be constant. The original object does not need to exist physically, so the kinoform can be used to reconstruct an almost arbitrary wavefront. However, the image reconstructed by this technique contains high levels of noise and is not identical to the reference image. To improve the reconstruction quality of the kinoform, iterative techniques such as the Gerchberg-Saxton (GS) algorithm are employed. In this paper, the GS algorithm is described for the optimisation of a kinoform used for the reconstruction of a complex wavefront. Iterations of the GS algorithm are applied to determine the phase at a plane (with a known amplitude distribution, often taken as uniform) that satisfies given phase and amplitude constraints in a corresponding Fourier plane. The GS algorithm can be used in this way to enhance the reconstruction quality of the kinoform. Different images are employed as the reference object, and their kinoforms are synthesised using the GS algorithm. The quality of the reconstructed images is quantified to demonstrate the enhanced reconstruction quality achieved by this method.
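
The core GS iteration described here, alternating between the hologram plane (phase-only constraint) and the Fourier plane (target amplitude constraint), can be sketched in a few lines; the target image below is a placeholder:

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=100):
    """Iteratively estimate the phase-only hologram (kinoform) whose
    Fourier transform reproduces the target amplitude."""
    phase = np.random.uniform(0, 2 * np.pi, target_amplitude.shape)
    source_amplitude = np.ones_like(target_amplitude)  # uniform illumination

    for _ in range(iterations):
        # Hologram plane: enforce unit (phase-only) amplitude.
        field = source_amplitude * np.exp(1j * phase)
        # Propagate to the reconstruction (Fourier) plane.
        recon = np.fft.fft2(field)
        # Keep the propagated phase, enforce the target amplitude.
        recon = target_amplitude * np.exp(1j * np.angle(recon))
        # Propagate back and keep only the phase.
        phase = np.angle(np.fft.ifft2(recon))

    return phase  # the optimised kinoform

# Example: a simple bright-square target.
target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0
kinoform = gerchberg_saxton(target)
```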

Keywords: computer generated holography, digital holography, Gerchberg-Saxton algorithm, kinoform

Procedia PDF Downloads 526
27419 An Empirical Study of the Impacts of Big Data on Firm Performance

Authors: Thuan Nguyen

Abstract:

In the present time, data is to a data-driven, knowledge-based economy what oil was to the industrial age hundreds of years ago. Data is everywhere in vast volumes! Big data analytics is expected to help firms not only improve performance efficiently but also completely transform how they run their business. However, employing the emergent technology successfully is not easy, and assessing the role of big data in improving firm performance is even harder. There has been a lack of studies examining the impacts of big data analytics on organizational performance, and this study aimed to fill that gap. The present study suggested using firms’ intellectual capital as a proxy for big data in evaluating its impact on organizational performance. It employed the Value Added Intellectual Coefficient method to measure firm intellectual capital via its three main components: human capital efficiency, structural capital efficiency, and capital employed efficiency, and then used the structural equation modeling technique to model the data and test the models. The financial fundamental and market data of 100 randomly selected publicly listed firms were collected. The results of the tests showed that only human capital efficiency had a significant positive impact on firm profitability, which highlights the prominent human role in the impact of big data technology.
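
The Value Added Intellectual Coefficient components named in the abstract follow Pulic's standard definitions, which a short sketch can make concrete; the financial figures are hypothetical:

```python
# Hypothetical figures; VA = value added, HC = human capital (salaries),
# SC = structural capital (VA - HC), CE = capital employed (book value).
def vaic(value_added, human_capital, capital_employed):
    hce = value_added / human_capital                  # human capital efficiency
    sce = (value_added - human_capital) / value_added  # structural capital efficiency
    cee = value_added / capital_employed               # capital employed efficiency
    return hce + sce + cee, (hce, sce, cee)

coefficient, (hce, sce, cee) = vaic(value_added=5.2e6, human_capital=2.0e6,
                                    capital_employed=8.0e6)
print(f"VAIC = {coefficient:.2f} (HCE={hce:.2f}, SCE={sce:.2f}, CEE={cee:.2f})")
```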

Keywords: big data, big data analytics, intellectual capital, organizational performance, value added intellectual coefficient

Procedia PDF Downloads 236
27418 Automated Test Data Generation for Some Types of Algorithm

Authors: Hitesh Tahbildar

Abstract:

The cost of test data generation for a program is computationally very high. In the general case, no algorithm has been found that can generate test data for all types of algorithms, and the cost of generating test data differs between types of algorithms. To date, work has emphasized generating test data for different types of programming constructs rather than for different types of algorithms. Test data generation methods have been implemented to find heuristics for different types of algorithms. Algorithms including divide and conquer, backtracking, the greedy approach, and dynamic programming have been tested to find the minimum cost of test data generation. Our experimental results suggest that some of these types of algorithms can be used as a necessary condition for selecting heuristics, while programming constructs are a sufficient condition for selecting our heuristics. Finally, we recommend which heuristics for test data generation should be selected for different types of algorithms.

Keywords: longest path, saturation point, lmax, kL, kS

Procedia PDF Downloads 398
27417 Using Hyperspectral Sensor and Machine Learning to Predict Water Potentials of Wild Blueberries during Drought Treatment

Authors: Yongjiang Zhang, Kallol Barai, Umesh R. Hodeghatta, Trang Tran, Vikas Dhiman

Abstract:

Detecting water stress on crops early and accurately is crucial to minimize its impact. This study aims to measure water stress in wild blueberry crops non-destructively by analyzing proximal hyperspectral data. The data collection took place in the summer growing season of 2022. A drought experiment was conducted on wild blueberries in a randomized block design in the greenhouse, incorporating various genotypes and irrigation treatments. Hyperspectral data (spectral range: 400-1000 nm) were collected from wild blueberry plants using a handheld spectroradiometer, and leaf water potential data were collected using a pressure chamber. Machine learning techniques, including multiple regression analysis and random forest models, were employed to predict leaf water potential (MPa). We explored the optimal wavelength bands for simple differences (R_Y1 - R_Y2), simple ratios (R_Y1/R_Y2), and normalized differences ((R_Y1 - R_Y2)/(R_Y1 + R_Y2)). NDWI ((R857 - R1241)/(R857 + R1241)), SD (R2188 - R2245), and SR (R1752/R1756) emerged as the top predictors of leaf water potential, contributing significantly to the highest model performance. The base learner models achieved an R-squared value of approximately 0.81, indicating their capacity to explain 81% of the variance. Research is underway to develop a neural vegetation index (NVI) that automates the process of index development by searching for specific wavelengths in the space of ratios of linear functions of reflectance. The NVI framework could work across species and predict different physiological parameters.
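
A minimal sketch of the index-plus-random-forest pipeline might look as follows; the band choices, reflectance values, and water potentials are placeholders rather than the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical reflectance cube: rows = plants, columns = wavelengths.
rng = np.random.default_rng(0)
wavelengths = np.arange(400, 1001)               # nm, matching the stated range
reflectance = rng.uniform(0.05, 0.6, (120, wavelengths.size))
water_potential = rng.uniform(-2.0, -0.2, 120)   # MPa, placeholder targets

def band(wl):
    """Column index of the band closest to wavelength wl (nm)."""
    return np.argmin(np.abs(wavelengths - wl))

r1, r2 = reflectance[:, band(857)], reflectance[:, band(970)]
features = np.column_stack([
    r1 - r2,                # simple difference
    r1 / r2,                # simple ratio
    (r1 - r2) / (r1 + r2),  # normalized difference
])

model = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(model, features, water_potential, scoring="r2", cv=5)
print("cross-validated R^2:", scores.mean())
```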

Keywords: hyperspectral reflectance, water potential, spectral indices, machine learning, wild blueberries, optimal bands

Procedia PDF Downloads 61
27416 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad-hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory networks, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with an inverse document frequency of the lexemes related to these cases, we can determine what features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
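
A minimal sketch of one of the compared architectures, a CNN feeding a bidirectional LSTM over tokenised notes, is shown below under assumed vocabulary and sequence sizes; it is illustrative, not the authors' exact network:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical sizes: tokenised OOHC triage notes padded to MAX_LEN ids.
VOCAB_SIZE, MAX_LEN = 20000, 200

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),
    layers.Conv1D(64, 5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),     # long-range context
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),     # frequent-attender probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
# model.fit(padded_token_ids, labels, validation_split=0.2, epochs=5)
# Swapping layers.LSTM(64) for layers.GRU(64) gives the GRU variant compared
# in the paper; removing Conv1D/MaxPooling1D isolates the recurrent model.
```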

Keywords: artificial neural networks, data mining, machine learning, medical informatics

Procedia PDF Downloads 122
27415 Integration of Artificial Neural Network with Geoinformatics Technology to Predict Land Surface Temperature within Sun City Jodhpur, Rajasthan, India

Authors: Avinash Kumar Ranjan, Akash Anand

Abstract:

The Land Surface Temperature (LST) is an essential factor contributing to rising urban heat and climate warming within a city at the micro level. It also plays a crucial role in global change studies and in measuring radiation budgets in heat balance studies. Information on LST is very important for understanding urban climatology, ecological changes, anthropogenic and environmental interactions, etc. The chief motivation of the present study is a time-series ANN model that takes a sequence of LST values for 2000, 2008, and 2016, learns the pattern of variation within the data set, and predicts the LST values for 2024 and 2032. The novelty of this study centers on the evaluation of LST using a series of multi-temporal MODIS (MOD 11A2) satellite data with Maximum Value Composite (MVC) techniques. The results derived from this study endorse the proficiency of geoinformatics technology integrated with an ANN for gaining knowledge, understanding, and building precise forecasts from the complex physical world database. This study also examines the influence of Land Use/Land Cover (LU/LC) variation on Land Surface Temperature.
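
One way to realise the described time-series ANN, learning the epoch-to-epoch LST transition per pixel and rolling it forward to 2024 and 2032, is sketched below with synthetic placeholder values:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical per-pixel LST values (Kelvin) for three MODIS epochs.
rng = np.random.default_rng(1)
lst_2000 = rng.uniform(300, 320, 1000)
lst_2008 = lst_2000 + rng.normal(1.0, 0.5, 1000)
lst_2016 = lst_2008 + rng.normal(1.0, 0.5, 1000)

# Learn the epoch-to-epoch transition: (LST_t-2, LST_t-1) -> LST_t.
X = np.column_stack([lst_2000, lst_2008])
y = lst_2016
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000,
                   random_state=0).fit(X, y)

# Roll the model forward: 2024 from (2008, 2016), then 2032.
lst_2024 = ann.predict(np.column_stack([lst_2008, lst_2016]))
lst_2032 = ann.predict(np.column_stack([lst_2016, lst_2024]))
```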

Keywords: LST, geoinformatics technology, ANN, MODIS satellite imagery, MVC

Procedia PDF Downloads 232
27414 Automatic and Highly Precise Modeling for System Optimization

Authors: Stephanie Chen, Mitja Echim, Christof Büskens

Abstract:

To describe and predict the behavior of a system, mathematical models are formulated. Parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, such models might not be applicable to the numerical optimization of real systems, since these techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, so the model must be adapted manually. Therefore, an approach is described that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general, since it generates models for any system detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables and not only of single variables. This enables a far more precise representation of causal correlations. The basis and the explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has successfully been tested in several complex applications and with strong industrial requirements. The generated models were able to simulate the given systems with an error in precision of less than one percent. Moreover, the automatic identification of correlations was able to discover so far unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods will be illustrated with different examples.
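
The key idea, a regression basis that includes products of variables in the spirit of a series expansion, can be illustrated with a small sketch (synthetic data, not the industrial applications mentioned):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical sensor data: two measured inputs, one output.
rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
y = 3.0 * x1 - 1.5 * x2 + 2.0 * x1 * x2 + rng.normal(0, 0.01, 500)

# Series-expansion-style basis: the variables plus their products.
X = np.column_stack([x1, x2])
basis = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(basis.fit_transform(X), y)

# The x1*x2 coefficient is recovered, exposing the cross-correlation.
print(dict(zip(basis.get_feature_names_out(["x1", "x2"]),
               model.coef_.round(3))))
```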

Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization

Procedia PDF Downloads 400
27413 Privacy Concerns and Law Enforcement Data Collection to Tackle Domestic and Sexual Violence

Authors: Francesca Radice

Abstract:

In Australia, domestic and sexual violence causes, on average, one female death per week due to intimate partner violence. 83% of couples meet online, and intercepting domestic and sexual violence at this level would be beneficial. It has been observed that violent or coercive behaviour can be apparent from initial conversations on dating apps like Tinder. Criminal offences originating from dating apps include child pornography, stalking, and coercive control, and women have been murdered after finding partners through Tinder. Police databases and predictive policing are novel approaches taken to prevent crime before harm is done. This research will investigate how police databases can be used in a privacy-preserving way to characterise users in terms of their potential for violent crime. Using the COPS database of the NSW Police, we will explore how a past criminal record can be interpreted to yield a category of potential danger for each dating app user. It is then up to each subscriber to judge what degree of potential danger they are prepared to accept. Sentiment analysis is an area where research into natural language processing has made great progress over the last decade. This research will investigate how sentiment analysis can be used to interpret exchanges between dating app users to detect manipulative or coercive sentiments, which can be used to alert law enforcement if they continue for a defined number of communications. One potential problem of this approach is the prejudice a categorisation can cause. Another drawback is the possibility of misinterpreting communications and involving law enforcement without reason. The approach will therefore be thoroughly tested with cross-checks by human readers, who verify both the level of danger predicted by the interpretation of the criminal record and the sentiment detected from personal messages. Even if only a few violent crimes can be prevented, the approach will have a tangible value for real people.
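
A minimal sketch of the sentiment-interpretation step, here as a supervised text classifier over message exchanges, could look as follows; the example messages and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled messages: 1 = coercive/manipulative, 0 = benign.
messages = [
    "tell me where you are right now or else",
    "you are not allowed to see your friends tonight",
    "had a great time yesterday, coffee again soon?",
    "no worries, another day works for me too",
]
labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression())
classifier.fit(messages, labels)
# Probability that a new message carries a coercive sentiment.
print(classifier.predict_proba(["answer me immediately"])[:, 1])
```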

Keywords: sentiment analysis, data mining, predictive policing, virtual manipulation

Procedia PDF Downloads 75
27412 The Perspective on Data Collection Instruments for Younger Learners

Authors: Hatice Kübra Koç

Abstract:

For academia, collecting reliable and valid data is one of the most significant issues for researchers. However, the procedure is not the same for all target groups; while collecting data from teenagers, young adults, or adults, researchers can use common data collection tools such as questionnaires, interviews, and semi-structured interviews, yet for young learners, and especially very young ones, such reliable and valid data collection tools cannot easily be designed or applied by researchers. In this study, firstly, common data collection tools are examined for the 'very young' and 'young learner' participant groups, since the quality and efficiency of an academic study is mainly based on valid and correct data collection and analysis procedures. Secondly, two different data collection instruments for very young and young learners are presented, and their efficacy is discussed. Finally, a suggested data collection tool, a performance-based questionnaire developed specifically for the 'very young' and 'young learner' participant groups in the field of teaching English to young learners as a foreign language, is presented in the current study. The design procedure and suggested items/factors for the proposed data collection tool are revealed at the end of the study to help researchers who study young and very young learners.

Keywords: data collection instruments, performance-based questionnaire, young learners, very young learners

Procedia PDF Downloads 84
27411 Impact of Minimalism in Dance Education on the Development of Aesthetic Sensibilities

Authors: Meghamala Nugehally

Abstract:

This paper hypothesises and draws inferences about the impact of minimalism in dance education on the development of artistic and aesthetic sensibilities in individuals aged 5-18 years. The research and conclusions are set within the context of Indian Classical Dance, which is based on Indian theories of aesthetics drawn from the Natyashastra, an ancient treatise on Indian dance and drama. The research employs training methods handed down through a strict one-on-one teacher-student tradition known as the Guru-Shishya Parampara. The aesthetic principles used are defined, and basic theories from the Natyashastra are explained to provide background for the research design. The paper also discusses dance curriculum design and training methodology design within the context of these aesthetic theories. The scope of the research is limited to two genres of Indian classical forms: Bharatanatyam and Odissi. A brief description of these dance forms is given as background, and dance aesthetics specific to these forms are described. The research design includes individual case studies of the subjects studied, independent predetermined attributes for observation, and a qualitative scoring methodology devised for the purpose of the study. The study describes the training techniques used and contrasts minimal solo training techniques with more elaborate group training techniques. Study groups were divided, and the basis for the division is discussed. Study observations are recorded and presented as evidence. The results inform the conclusion and set the stage for further research in this area.

Keywords: dance aesthetics, dance education, Indian classical dance, minimalism

Procedia PDF Downloads 222
27410 Assessing the Theoretical Suitability of Sentinel-2 and WorldView-3 Data for Hydrocarbon Mapping of Spill Events, Using Hydrocarbon Spectral Slope Model

Authors: K. Tunde Olagunju, C. Scott Allen, Freek Van Der Meer

Abstract:

Identification of hydrocarbon oil in remote sensing images is often the first step in monitoring oil during spill events. Most remote sensing methods adopt techniques for hydrocarbon identification to achieve detection, in order to model an appropriate cleanup program. Identification with optical sensors allows not only detection but also characterization and quantification. Until recently, in optical remote sensing, quantification and characterization have only been potentially possible using high-resolution laboratory and airborne imaging spectrometers (hyperspectral data). Unlike multispectral data, hyperspectral data are not freely available, as this data category is mainly obtained via airborne surveys at present. In this research, two operational high-resolution multispectral satellites (WorldView-3 and Sentinel-2) are theoretically assessed for their suitability for hydrocarbon characterization, using the hydrocarbon spectral slope model (HYSS). This method utilizes the two most persistent hydrocarbon diagnostic/absorption features, at 1.73 µm and 2.30 µm, for hydrocarbon mapping on multispectral data. Spectral measurements of seven different hydrocarbon oils (crude and refined) taken on ten different substrates with a laboratory ASD FieldSpec were convolved to Sentinel-2 and WorldView-3 resolution using their full width at half maximum (FWHM) parameters. The resulting hydrocarbon slope values obtained from the studied samples enable clear qualitative discrimination of most hydrocarbons, despite the presence of different background substrates, particularly on WorldView-3. Due to the close conformity of central wavelengths and narrow bandwidths to the key hydrocarbon bands used in HYSS, the statistical significance for qualitative analysis on WorldView-3 sensors for all studied hydrocarbon oils returned with a 95% confidence level (P-value < 0.01), except for diesel. Using multifactor analysis of variance (MANOVA), the discriminating power of HYSS is statistically significant for most hydrocarbon-substrate combinations on Sentinel-2 and WorldView-3 FWHM, revealing the potential of these two operational multispectral sensors as rapid response tools for hydrocarbon mapping. One notable exception is highly transmissive hydrocarbons on Sentinel-2 data, due to the non-conformity of spectral bands with key hydrocarbon absorptions and the relatively coarse bandwidth (> 100 nm).
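
The band convolution step, resampling a lab spectrum to a multispectral band via its FWHM, can be sketched as below; the band centers and widths shown are illustrative, not the sensors' actual band tables:

```python
import numpy as np

def convolve_to_band(wavelengths, spectrum, center, fwhm):
    """Resample a lab spectrum to one multispectral band, assuming a
    Gaussian spectral response defined by the band's center and FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    response = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return np.trapz(response * spectrum, wavelengths) / np.trapz(response, wavelengths)

# Hypothetical lab spectrum (ASD-style, 350-2500 nm) and illustrative
# SWIR band definitions near the two hydrocarbon features.
wl = np.arange(350, 2501, 1.0)
spectrum = 0.3 + 0.05 * np.sin(wl / 200.0)

r_1730 = convolve_to_band(wl, spectrum, center=1730.0, fwhm=40.0)
r_2300 = convolve_to_band(wl, spectrum, center=2300.0, fwhm=40.0)
print(r_1730, r_2300)  # band reflectances used by the slope model
```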

Keywords: hydrocarbon, oil spill, remote sensing, hyperspectral, multispectral, hydrocarbon-substrate combination, Sentinel-2, WorldView-3

Procedia PDF Downloads 212
27409 Hybrid Beam-Forming Techniques for 6G Terahertz Communication: Challenges

Authors: Mridula Korde

Abstract:

The terahertz band is a main pillar of the 6G wireless communication system, since it is difficult to meet a data rate of 1 Tbps with systems operating at millimeter-wave frequencies. However, the terahertz band suffers huge propagation loss, limiting the wireless distance. This calls for ultra-massive multiple-input multiple-output (UM-MIMO) antenna systems, which produce high array gain with narrow beamforming. The conventional methods for MIMO beamforming are analog and digital beamforming. Fully digital beamforming methods utilize a dedicated DAC/ADC and RF chain structure per antenna; these structures increase hardware complexity and are power hungry. Analog beamforming structures utilize ADCs/DACs with phase shifters and have lower hardware complexity, but support lower data rates. As a result, a hybrid beamforming method can be adopted for UM-MIMO systems. This paper investigates challenges in hybrid beamforming architectures, addressing the low spatial degrees of freedom (SDoF) limitation in terahertz (THz) communication. Flexible hardware connections are proposed in order to switch the system in an adaptive manner and thereby minimize the power requirements.
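
A common way to realise hybrid beamforming, an analog phase-shifter stage followed by a small digital baseband precoder, is sketched below for illustration; the array sizes and channel are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
N_TX, N_RF, N_STREAMS = 64, 4, 4   # antennas vs. RF chains (assumed sizes)

# Hypothetical narrowband channel; its right singular vectors serve as the
# fully digital precoder that the hybrid stages try to approximate.
H = (rng.normal(size=(16, N_TX)) + 1j * rng.normal(size=(16, N_TX))) / np.sqrt(2)
_, _, Vh = np.linalg.svd(H)
F_opt = Vh.conj().T[:, :N_STREAMS]           # N_TX x N_STREAMS target

# Analog stage: phase-only weights copied from the target's phases.
F_RF = np.exp(1j * np.angle(F_opt[:, :N_RF])) / np.sqrt(N_TX)

# Digital stage: least-squares baseband precoder for the chosen analog beams.
F_BB, *_ = np.linalg.lstsq(F_RF, F_opt, rcond=None)

error = np.linalg.norm(F_opt - F_RF @ F_BB) / np.linalg.norm(F_opt)
print("relative approximation error:", error)
```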

Keywords: 6G, terahertz communication, beamforming, challenges

Procedia PDF Downloads 26
27408 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of alignment is finding the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding of them and a discovery of other relationships between them. Besides, identifying non-coding RNAs, i.e., RNA that is not translated into a protein, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed, but most of them are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignments. Less attention has been given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and where N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted illustrating the efficiency of the CompPSA algorithm when compared to other approaches on different real and simulated datasets. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalable and efficient in time and memory performance.
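
The paper's exact scoring is not reproduced here, but a sketch of an O(N²) comparison of weighted component features (position, full length, stem length) conveys the idea; the component values and weights are invented:

```python
import numpy as np

# Hypothetical component features: (position, full length, stem length),
# one row per component of an RNA secondary structure.
struct_a = np.array([[10, 24, 6], [48, 30, 8], [95, 12, 4]], float)
struct_b = np.array([[12, 22, 6], [50, 28, 7], [90, 14, 5], [130, 9, 3]], float)
weights = np.array([0.4, 0.3, 0.3])  # user-chosen feature weights

# O(N^2) weighted similarity matrix between all component pairs.
diff = np.abs(struct_a[:, None, :] - struct_b[None, :, :])
scale = np.maximum(struct_a[:, None, :], struct_b[None, :, :])
similarity = 1.0 - (weights * diff / scale).sum(axis=2)

# Greedy pairing of best-matching components.
for i, j in enumerate(similarity.argmax(axis=1)):
    print(f"component {i} of A ~ component {j} of B "
          f"(score {similarity[i, j]:.2f})")
```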

Keywords: alignment, RNA secondary structure, pairwise, component-based, data mining

Procedia PDF Downloads 452
27407 A Case Study of Physical and Psychological Forces in Nigerian Criminal and Military Interrogations

Authors: Onimisi Ekuh Abdullahi, Lasbat Omoshalewa Akinsemoyin

Abstract:

In Nigeria, for over two decades now, there has been a steady increase in the insecurity of human lives and physical property. In South-South Nigeria, there is acute insecurity, with militants destroying oil pipelines and cases of kidnapping; in the Middle Belt zone, insecurity centers on kidnapping, and in a few states crises between herdsmen and farmers rage like wildfire; in the South-West zone, kidnapping is rife; in the North-East zone, the issue of Boko Haram has become a worldwide concern; and in the North-West zone, cattle rustlers and religious crises are of great concern. At the initial stage, the Nigerian Police Force was called upon to quell the crisis. It soon became obvious that the dimension of the crisis was beyond the police force, and the Nigerian Armed Forces were called in to maintain peace and order because the magnitude of the crisis was threatening national unity and cohesion. The main objective of this paper is to examine the military's techniques for interrogating criminals in Nigeria: specifically, to examine physical and psychological force and abusive techniques and tactics, and to suggest modern psychological techniques of interrogating criminals that are acceptable to human rights activists and the rule of law. The aim is to create room for behaviour and practices that carefully monitor the trustworthiness and reliability of admissions produced by psychologically manipulative processes in Nigeria.

Keywords: military, Nigerian criminal, physical, psychological force

Procedia PDF Downloads 152
27406 Application of Latent Class Analysis and Self-Organizing Maps for the Prediction of Treatment Outcomes for Chronic Fatigue Syndrome

Authors: Ben Clapperton, Daniel Stahl, Kimberley Goldsmith, Trudie Chalder

Abstract:

Chronic fatigue syndrome (CFS) is a condition characterised by chronic disabling fatigue and other symptoms that cannot currently be explained by any underlying medical condition. Although clinical trials support the effectiveness of cognitive behaviour therapy (CBT), the success rate for individual patients is modest. Patients vary in their response, and little is known about which factors predict or moderate treatment outcomes. The aim of the project is to develop a prediction model from baseline characteristics of patients, such as demographic, clinical, and psychological variables, which may predict the likely treatment outcome, provide guidance for clinical decision making, and help clinicians recommend the best treatment. The project aims to identify subgroups of patients with similar baseline characteristics that are predictive of treatment effects, using modern cluster analyses and data mining machine learning algorithms. The characteristics of these groups will then be used to inform the types of individuals who benefit from a specific treatment. In addition, the results will provide a better understanding of for whom the treatment works. The suitability of different clustering methods for identifying subgroups of CFS patients and their responses to different treatments is compared.
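
As a stand-in for latent class analysis on continuous baseline measures, a Gaussian mixture with BIC-based class selection gives a flavour of the subgrouping step; the patient data below are simulated placeholders:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical baseline characteristics: age, fatigue score, depression
# score, illness duration (one row per patient).
rng = np.random.default_rng(4)
baseline = np.vstack([
    rng.normal([35, 28, 10, 3], [8, 3, 3, 2], (100, 4)),  # milder profile
    rng.normal([48, 32, 16, 8], [8, 3, 3, 2], (100, 4)),  # more severe profile
])

X = StandardScaler().fit_transform(baseline)

# Model-based clustering; BIC selects the number of latent classes.
best = min((GaussianMixture(k, random_state=0).fit(X) for k in range(1, 6)),
           key=lambda m: m.bic(X))
classes = best.predict(X)
print("chosen number of classes:", best.n_components)
```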

Keywords: chronic fatigue syndrome, latent class analysis, prediction modelling, self-organizing maps

Procedia PDF Downloads 221
27405 Perceptions on Development of the Deaf in Higher Education Level: The Case of Special Education Students in Tiaong, Quezon, Philippines

Authors: Ashley Venerable, Rosario Tatlonghari

Abstract:

This study identified how deaf college students of the Bartimaeus Center for Alternative Learning in Tiaong, Quezon, Philippines view development, using visual communication techniques and generating themes from their responses. Complete enumeration was employed. Guided by the Constructivist Theory of Perception, past experiences and stored information influenced perception. These themes of development emerged: social development; pleasant environment; interpersonal relationships; availability of resources; employment; infrastructure development; values; and peace and security. Using the National Economic and Development Authority development indicators, findings showed that the deaf students’ views on development were similar to mainstream views. Responses also became more meaningful through visual communication techniques.

Keywords: deaf, development, perception, development indicators, visual communication

Procedia PDF Downloads 422
27404 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming the maintenance stops, and keeping an optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years gives the option of applying it to the key and most critical parameters of the machinery, in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using specific energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed during the excavation, in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for analyzing the wear of cutting head elements, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
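
A sketch of the data-science branch, a classifier trained on torque, penetration, and contact force to flag worn cutters, might look as follows; all values and the wear rule are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-ring TBM logs: torque (MNm), penetration (mm/rev),
# contact force (kN), plus a worn/not-worn label from inspection stops.
rng = np.random.default_rng(5)
n = 800
torque = rng.uniform(2, 12, n)
penetration = rng.uniform(2, 25, n)
contact_force = rng.uniform(100, 300, n)
worn = (torque / penetration + rng.normal(0, 0.3, n) > 1.0).astype(int)

X = np.column_stack([torque, penetration, contact_force])
X_tr, X_te, y_tr, y_te = train_test_split(X, worn, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```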

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 41
27403 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests, where the protein value has been determined by the FOSS Infratec NOVA, the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while variety classification is the problem in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested, including centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
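
Two of the chemometric preprocessing steps named above, SNV and Savitzky-Golay filtering, are easy to sketch; the spectra below are random placeholders for NIR-HSI pixels:

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: per-spectrum centering and scaling."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Hypothetical NIR-HSI pixel spectra, one row per pixel (900-1700 nm range).
rng = np.random.default_rng(6)
spectra = rng.uniform(0.2, 0.8, (1000, 224))

preprocessed = snv(spectra)
# Savitzky-Golay smoothing with a first derivative to remove baseline drift.
preprocessed = savgol_filter(preprocessed, window_length=11, polyorder=2,
                             deriv=1, axis=1)
```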

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 91
27402 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability, and holistic coverage. The obtained data are aligned and fused into a common coordinate system within a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite a significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC Transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce the registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining point-to-point (P-P) minimization and point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
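
A baseline point-to-point ICP iteration, the foundation on which the curvature-weighted variant builds, can be sketched as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(source, target, iterations=50):
    """Minimal point-to-point ICP: at each step, match nearest neighbours
    and solve the best rigid transform in closed form via SVD."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)          # correspondences by nearest point
        matched = target[idx]
        mu_s, mu_t = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src

# Example: register a rotated, shifted copy of a cloud back onto itself.
rng = np.random.default_rng(7)
cloud = rng.uniform(-1, 1, (500, 3))
angle = np.pi / 12
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
aligned = icp_point_to_point(cloud @ Rz.T + 0.1, cloud)
```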

Keywords: discrete curvature, RANSAC transformation, hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computer tomography

Procedia PDF Downloads 418
27401 Advancing Environmental Remediation Through the Production of Functional Porous Materials from Phosphorite Residue Tailings

Authors: Ali Mohammed Yimer, Ayalew Assen, Youssef Belmabkhout

Abstract:

Environmental remediation is a pressing global concern, necessitating innovative strategies to address the challenges posed by industrial waste and pollution. This study aims to advance environmental remediation by developing cutting-edge functional porous materials from phosphorite residue tailings. Phosphorite mining activities generate vast amounts of waste, which poses significant environmental risks due to its contaminants. The proposed approach involved transforming these phosphorite residue tailings into valuable porous materials through a series of physico-chemical processes, including milling, acid-base leaching, designing or templating, and formation processes. The key components of the tailings were extracted and processed to produce porous arrays with high surface area and porosity. These materials were engineered to possess specific properties suitable for environmental remediation applications, such as enhanced adsorption capacity and selectivity for target contaminants. The synthesized porous materials were thoroughly characterized using advanced analytical techniques (XRD, SEM-EDX, N2 sorption, TGA, FTIR) to assess their structural, morphological, and chemical properties. The performance of the materials in removing various pollutants, including heavy metals and organic compounds, was evaluated through batch adsorption experiments. Additionally, the potential for material regeneration and reusability was investigated to enhance the sustainability of the proposed remediation approach. The outcome of this research holds significant promise for addressing the environmental challenges associated with phosphorite residue tailings. By valorizing these waste materials into porous materials with exceptional remediation capabilities, this study contributes to the development of sustainable and cost-effective solutions for environmental cleanup. Furthermore, the utilization of phosphorite residue tailings in this manner offers a potential avenue for the remediation of other contaminated sites, thereby fostering a circular economy approach to waste management.
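
Batch adsorption results of the kind described are often summarised by fitting an isotherm; a sketch of a Langmuir fit with hypothetical uptake data is shown below (the model choice is illustrative, not stated in the abstract):

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: uptake vs. equilibrium concentration."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical batch adsorption data: equilibrium concentration (mg/L)
# and uptake (mg/g) for a heavy-metal ion on the porous material.
c_eq = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
q = np.array([8.1, 16.5, 26.0, 39.8, 47.2, 52.5])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q, p0=[60.0, 0.05])
print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")
```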

Keywords: functional porous materials, phosphorite residue tailings, adsorption, environmental remediation, sustainable solutions

Procedia PDF Downloads 51
27400 Characterization of the Viscoelastic Behavior of Polymeric Composites

Authors: Abir Abdessalem, Sahbi Tamboura, J. Fitoussi, Hachmi Ben Daly, Abbas Tcharkhtchi

Abstract:

Dynamic mechanical analysis (DMA) is one of the most used experimental techniques for investigating the temperature and frequency dependence of the mechanical behavior of viscoelastic materials. The measured data are generally shifted by applying the time-temperature superposition (TTS) principle to obtain the viscoelastic system’s master curve. The aim of this work is to present the methodology for defining the horizontal shift factor to be applied to the measured storage modulus, in order to demonstrate the validity of the TTS principle for this material system. This principle was successfully used to determine the long-term properties of Sheet Moulding Compound (SMC) composites.
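
Horizontal shift factors obtained from TTS are commonly summarised with the WLF equation; the sketch below fits WLF constants to hypothetical shift-factor data (the WLF form is a standard choice, not necessarily the one used here):

```python
import numpy as np
from scipy.optimize import curve_fit

def wlf(temperature, c1, c2, t_ref=80.0):
    """WLF equation: log10 of the horizontal shift factor a_T."""
    return -c1 * (temperature - t_ref) / (c2 + (temperature - t_ref))

# Hypothetical shift factors obtained by sliding isothermal storage-modulus
# curves onto the reference isotherm (t_ref) along the log-frequency axis.
temps = np.array([60.0, 70.0, 80.0, 90.0, 100.0, 110.0])
log_aT = np.array([3.1, 1.4, 0.0, -1.2, -2.2, -3.0])

(c1, c2), _ = curve_fit(wlf, temps, log_aT, p0=[17.0, 52.0])
print(f"C1 = {c1:.1f}, C2 = {c2:.1f}")
# Master curve: each isotherm's frequencies are multiplied by 10**log_aT.
```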

Keywords: composite material, dynamic mechanical analysis, SMC composites, viscoelastic behavior, modeling

Procedia PDF Downloads 226
27399 Emerging Technology for Business Intelligence Applications

Authors: Hsien-Tsen Wang

Abstract:

Business Intelligence (BI) has long helped organizations make informed decisions based on data-driven insights and gain competitive advantages in the marketplace. In the past two decades, businesses have witnessed not only a dramatic increase in the volume and heterogeneity of business data but also the emergence of new technologies, such as Artificial Intelligence (AI), the Semantic Web (SW), Cloud Computing, and Big Data. It is plausible that the convergence of these technologies would bring more value out of business data by establishing linked data frameworks and connecting them in ways that enable advanced analytics and improved data utilization. In this paper, we first review and summarize current BI applications and methodology. Emerging technologies that can be integrated into BI applications are then discussed. Finally, we conclude with a proposed synergy framework that aims at achieving a more flexible, scalable, and intelligent BI solution.

Keywords: business intelligence, artificial intelligence, semantic web, big data, cloud computing

Procedia PDF Downloads 90
27398 Characterization of Hyaluronic Acid-Based Injections Used on Rejuvenation Skin Treatments

Authors: Lucas Kurth de Azambuja, Loise Silveira da Silva, Gean Vitor Salmoria, Darlan Dallacosta, Carlos Rodrigo de Mello Roesler

Abstract:

This work provides a physicochemical and thermal characterization of three different hyaluronic acid (HA)-based injections used for skin rejuvenation treatments. The three products analyzed are made by the same manufacturer and commercialized for application at different skin levels. According to the manufacturer, all three HA-based injections are crosslinked and have a concentration of 23 mg/mL of HA and 0.3% lidocaine. Samples were characterized by Fourier-transform infrared spectroscopy (FTIR), differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), and scanning electron microscopy (SEM). FTIR analysis produced similar spectra across the different products. DSC analysis demonstrated that the fusion points differ between products, with a higher fusion temperature observed in Specimen A, used for subcutaneous applications, than in B and C, used for the middle dermis and deep dermis, respectively. TGA data demonstrated a considerable mass loss at 100°C, which means the products contain more than 50% water. TGA analysis also showed that Specimen A had a lower mass loss at 100°C than Specimen C. A mass loss at around 220°C was observed in all samples, characterizing the presence of hyaluronic acid. SEM images displayed a similar structure in all samples analyzed, with a thicker layer for Specimen A compared with B and C. This series of analyses demonstrated that, as expected, the physicochemical and thermal properties of the products differ according to their application. Furthermore, to better characterize the degree of crosslinking of each product and its mechanical properties, a set of different techniques should be applied in parallel to correlate the results and thereby relate injection application to material properties.

Keywords: hyaluronic acid, characterization, soft-tissue fillers, injectable gels

Procedia PDF Downloads 84
27397 Understanding and Improving Neural Network Weight Initialization

Authors: Diego Aguirre, Olac Fuentes

Abstract:

In this paper, we present a taxonomy of weight initialization schemes used in deep learning. We survey the most representative techniques in each class and compare them in terms of overhead cost, convergence rate, and applicability. We also introduce a new weight initialization scheme. In this technique, we perform an initial feedforward pass through the network using an initialization mini-batch. Using statistics obtained from this pass, we initialize the weights of the network so that the following properties are met: 1) weight matrices are orthogonal; 2) ReLU layers produce a predetermined number of non-zero activations; 3) the output produced by each internal layer has unit variance; 4) weights in the last layer are chosen to minimize the error on the initial mini-batch. We evaluate our method on three popular architectures, and faster convergence rates are achieved on the MNIST, CIFAR-10/100, and ImageNet datasets when compared to state-of-the-art initialization techniques.
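
A sketch in the spirit of the described scheme, orthogonal matrices plus a data-dependent rescaling from one initialization mini-batch, is given below; it covers properties 1 and 3 only and is not the authors' exact procedure:

```python
import torch
import torch.nn as nn

@torch.no_grad()
def data_dependent_init(model, init_batch):
    """Orthogonal weights, then per-layer rescaling so each layer's output
    has unit variance, estimated from one pass over an init mini-batch."""
    x = init_batch
    for layer in model:
        if isinstance(layer, nn.Linear):
            nn.init.orthogonal_(layer.weight)        # property 1: orthogonality
            nn.init.zeros_(layer.bias)
            out = layer(x)
            layer.weight.div_(out.std() + 1e-8)      # property 3: unit variance
        x = layer(x)                                 # forward to the next layer
    return model

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 10))
init_batch = torch.randn(128, 784)                   # stand-in for real data
data_dependent_init(model, init_batch)
```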

Keywords: deep learning, image classification, supervised learning, weight initialization

Procedia PDF Downloads 126
27396 Spatially Distributed Rainfall Prediction Based on Automated Kriging for Landslide Early Warning Systems

Authors: Ekrem Canli, Thomas Glade

Abstract:

The precise prediction of rainfall in space and time is a key element of most landslide early warning systems. Unfortunately, the spatial variability of rainfall is often disregarded in early warning applications. A common simplification is to use uniformly distributed rainfall to characterize areal rainfall intensity. With spatially differentiated rainfall information, real-time comparison with rainfall thresholds or implementation in process-based approaches might form the basis for improved landslide warnings. This study suggests an automated workflow from the hourly, web-based collection of rain gauge data to the generation of spatially differentiated rainfall predictions based on kriging. Because the application of kriging is usually a labor-intensive task, a simplified and consequently automated variogram modeling procedure was applied to up-to-date rainfall data. The entire workflow was carried out purely with open source technology. Validation results, albeit promising, pointed out the challenges involved in purely distance-based, automated geostatistical interpolation techniques for ever-changing environmental phenomena over short temporal and spatial extents.
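
Using the open source pykrige package, the automated ordinary-kriging step could be sketched as follows; the gauge coordinates and readings are invented:

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical hourly gauge readings: easting, northing (km) and rain (mm).
x = np.array([2.1, 7.4, 11.9, 5.3, 9.0, 14.2])
y = np.array([3.3, 1.8, 6.5, 9.1, 12.4, 4.7])
rain = np.array([0.0, 1.2, 4.5, 2.8, 6.1, 0.4])

# Automated variogram fitting: pykrige estimates the spherical model's
# parameters from the data, mirroring the paper's automated workflow.
ok = OrdinaryKriging(x, y, rain, variogram_model="spherical")

grid_x = np.linspace(0, 15, 50)
grid_y = np.linspace(0, 15, 50)
z_pred, variance = ok.execute("grid", grid_x, grid_y)  # rainfall surface + kriging variance
```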

Keywords: kriging, landslide early warning system, spatial rainfall prediction, variogram modelling, web scraping

Procedia PDF Downloads 277
27395 FPGA Implementation of a Marginalized Particle Filter for Delineation of P and T Waves of ECG Signal

Authors: Jugal Bhandari, K. Hari Priya

Abstract:

The ECG signal provides important clinical information which can be used to predict diseases related to the heart. Accordingly, delineation of the ECG signal is an important task, and delineation of the P and T waves is a particularly complex one. This paper deals with the study of the ECG signal and its analysis by means of a Verilog design of efficient filters and the MATLAB tool. It includes the generation and simulation of the ECG signal using real ECG data, and ECG signal filtering and processing through the analysis of different algorithms and techniques. In this paper, we design a basic particle filter which generates a dynamic model depending on the present and past input samples and then produces the desired output. Afterwards, the output is processed by MATLAB to get the actual shape and accurate values of the ranges of the P-wave and T-wave of the ECG signal. QuestaSim, a Mentor Graphics tool, is used for simulation and functional verification. The same design is then verified using Xilinx ISE, which is also used for synthesis, mapping, and bit file generation. A Xilinx FPGA board is used for the implementation of the system. The final FPGA results are verified with ChipScope Pro, where the output data can be observed.
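
A minimal bootstrap particle filter, the kind of estimator described, can be sketched in Python for illustration (the state model and noise levels are assumptions, and the actual design is in Verilog/MATLAB):

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=0.05, obs_std=0.1):
    """Minimal bootstrap particle filter for a 1-D random-walk state,
    sketching the estimator described for tracking the ECG waveform."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Predict: propagate particles through the random-walk model.
        particles += rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the Gaussian observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample: draw a new particle set proportional to the weights.
        particles = rng.choice(particles, n_particles, p=weights)
    return np.array(estimates)

# Example with a noisy synthetic waveform standing in for ECG samples.
t = np.linspace(0, 1, 400)
signal = 0.3 * np.sin(2 * np.pi * 3 * t)
filtered = bootstrap_particle_filter(signal + np.random.normal(0, 0.1, t.size))
```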

Keywords: ECG, MATLAB, Bayesian filtering, particle filter, Verilog hardware descriptive language

Procedia PDF Downloads 362