Search results for: outputs
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 445

175 Reconstruction of Visual Stimuli Using Stable Diffusion with Text Conditioning

Authors: ShyamKrishna Kirithivasan, Shreyas Battula, Aditi Soori, Richa Ramesh, Ramamoorthy Srinath

Abstract:

The human brain, among the most complex and least understood organs of the body, holds vast potential for exploration. Unraveling its enigmas, especially within neural perception and cognition, is the realm of neural decoding. Harnessing advances in generative AI, particularly in visual computing, this work seeks to elucidate how the brain comprehends visual stimuli observed by humans. The paper endeavors to reconstruct human-perceived visual stimuli from functional magnetic resonance imaging (fMRI) data, which is processed through pre-trained deep-learning models to recreate the stimuli. A new architecture named LatentNeuroNet is introduced, with the aim of achieving the utmost semantic fidelity in stimulus reconstruction. The approach employs a Latent Diffusion Model (LDM), Stable Diffusion v1.5, emphasizing semantic accuracy and generating high-quality outputs. This addresses the limitations of prior methods such as GANs, which are known for poor semantic performance and inherent instability. Text conditioning within the LDM's denoising process is handled by extracting textual descriptions from fMRI signals of the brain's ventral visual cortex. This extracted text is processed through a Bootstrapping Language-Image Pre-training (BLIP) encoder before being injected into the denoising process. In conclusion, an architecture is developed that successfully reconstructs the perceived visual stimuli, and the research provides evidence for identifying the brain regions most influential in cognition and perception.
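The pipeline described above maps brain activity into a text/embedding space before conditioning the diffusion model. As a minimal, illustrative sketch (this is not the authors' LatentNeuroNet code, and all data here is synthetic), the first stage of such a decoder can be approximated by a ridge regression from fMRI voxel patterns to embedding vectors:

```python
import numpy as np

def fit_ridge_decoder(X, Y, alpha=1.0):
    """Fit a ridge-regression map from fMRI voxel patterns X (n_trials x n_voxels)
    to embedding targets Y (n_trials x emb_dim). Returns the weight matrix W."""
    n_voxels = X.shape[1]
    # Closed-form ridge solution: W = (X^T X + alpha I)^-1 X^T Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

def decode_embedding(W, x_new):
    """Predict an embedding vector for a new voxel pattern."""
    return x_new @ W

# Toy demonstration with synthetic data standing in for real fMRI recordings
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))           # 200 trials, 50 voxels
W_true = rng.normal(size=(50, 8))        # hidden linear relation to an 8-d embedding
Y = X @ W_true + 0.01 * rng.normal(size=(200, 8))

W = fit_ridge_decoder(X, Y, alpha=0.1)
pred = decode_embedding(W, X[:5])        # decoded embeddings for the first 5 trials
```

In a full system the decoded embedding would then condition the LDM's denoising loop; that stage requires the pre-trained diffusion and BLIP models and is omitted here.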

Keywords: BLIP, fMRI, latent diffusion model, neural perception.

Procedia PDF Downloads 43
174 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning

Authors: Joseph George, Anne Kotteswara Roa

Abstract:

Skin disease is one of the most common health issues faced by people today. Skin cancer (SC) is one of them, and its detection relies on skin biopsy results and the expertise of doctors, which is time-consuming and can yield inaccurate results. Detecting skin cancer at an early stage is a challenging task; left undetected, it easily spreads through the body and increases the mortality rate, whereas it is curable when detected early. Correct and accurate classification depends on identifying disease features such as shape, size, color, and symmetry. Because many skin diseases share similar characteristics, selecting the important features from skin cancer image datasets is a challenging issue. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to mitigate the scarcity of human experts. Recently, deep learning techniques such as convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM) networks have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. These DL techniques increase classification accuracy while mitigating computational complexity and time consumption.
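The evaluation metrics named in this survey all derive from binary confusion-matrix counts. A minimal sketch (the counts below are made-up illustrative numbers, not results from any reviewed model):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the survey's evaluation metrics from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # recall is the same quantity as sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "sensitivity": recall,
            "specificity": specificity, "accuracy": accuracy, "f_measure": f_measure}

# Hypothetical classifier results on 200 lesion images
m = classification_metrics(tp=80, fp=10, tn=95, fn=15)
print(round(m["precision"], 3), round(m["recall"], 3), round(m["accuracy"], 3))
```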

Keywords: skin cancer, deep learning, performance measures, accuracy, datasets

Procedia PDF Downloads 97
173 Micro-Meso 3D FE Damage Modelling of Woven Carbon Fibre Reinforced Plastic Composite under Quasi-Static Bending

Authors: Aamir Mubashar, Ibrahim Fiaz

Abstract:

This research presents a three-dimensional finite element modelling strategy to simulate damage in a quasi-static three-point bending analysis of a woven twill 2/2 type carbon fibre reinforced plastic (CFRP) composite on a micro-meso level using the cohesive zone modelling technique. A meso-scale finite element model comprising a number of plies was developed in the commercial finite element code Abaqus/Explicit. The interfaces between the plies were explicitly modelled using cohesive zone elements to allow for debonding by crack initiation and propagation. The load-deflection response of the CFRP within the quasi-static range was obtained and compared with data existing in the literature, providing validation of the model at the global scale. The outputs of the global model were then used to develop a simulation model capturing micro-meso scale material features. The sub-model consisted of a refined-mesh representative volume element (RVE) modelled in the TexGen software, which was later embedded with cohesive elements in the finite element software environment. The developed strategy successfully predicted the overall load-deflection response and the damage in the global and sub-models at the flexure limit of the specimen. A detailed analysis of the effects of the micro-scale features was carried out.

Keywords: woven composites, multi-scale modelling, cohesive zone, finite element model

Procedia PDF Downloads 113
172 The Scanning Vibrating Electrode Technique (SVET) as a Tool for Optimising a Printed Ni(OH)2 Electrode under Charge Conditions

Authors: C. F. Glover, J. Marinaccio, A. Barnes, I. Mabbett, G. Williams

Abstract:

The aim of the current study is to optimise formulations, in terms of charging efficiency, of a printed Ni(OH)2 precursor coating of a battery anode. Through the assessment of current densities during charging, the efficiencies of a range of formulations are compared. The scanning vibrating electrode technique (SVET) is used extensively in the field of corrosion to measure area-averaged current densities of freely corroding metal surfaces fully immersed in electrolyte. Here, a Ni(OH)2 electrode is immersed in potassium hydroxide electrolyte (30% w/v solution) and charged using a range of applied currents. Samples are prepared whereby multiple coatings are applied to one substrate, separated by a non-conducting barrier, and charged using a constant current. With a known applied external current, electrode efficiencies can be calculated from the current density outputs measured using SVET. When fully charged, green Ni(OH)2 is oxidised to a black NiOOH surface. Distinct regions displaying high current density, and hence a faster oxidation rate, are located using the SVET; this is confirmed by a darkening of the region upon transition to NiOOH. SVET is a highly effective tool for assessing the homogeneity of electrodes during charge/discharge, which could prove particularly useful for electrodes that show no visible changes in surface appearance. Furthermore, a scanning Kelvin probe technique, traditionally used to assess underfilm delamination of organic coatings for the protection of metallic surfaces, is employed to study the change in oxide phase pre- and post-charging.
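With a known external charging current, the efficiency comparison the abstract describes reduces to a ratio of the charge accounted for by the SVET-measured current density to the charge actually applied. A minimal sketch with hypothetical numbers (the exact formulation used by the authors is not given in the abstract):

```python
def charging_efficiency(svet_current_density, electrode_area, applied_current):
    """Estimate charging efficiency as the fraction of the externally applied
    current accounted for by the area-averaged SVET current density.

    svet_current_density : area-averaged anodic current density (A/cm^2)
    electrode_area       : immersed electrode area (cm^2)
    applied_current      : external galvanostatic charging current (A)
    """
    return (svet_current_density * electrode_area) / applied_current

# Example: 0.9 mA/cm^2 measured over 10 cm^2 with 10 mA applied
eff = charging_efficiency(0.9e-3, 10.0, 10e-3)
print(f"{eff:.0%}")  # → 90%
```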

Keywords: battery, electrode, nickel hydroxide, SVET, printed

Procedia PDF Downloads 205
171 Production and Distribution Network Planning Optimization: A Case Study of Large Cement Company

Authors: Lokendra Kumar Devangan, Ajay Mishra

Abstract:

This paper describes the implementation of a large-scale SAS/OR model with significant pre-processing, scenario analysis, and post-processing work done using SAS. A large cement manufacturer, with ten geographically distributed manufacturing plants for two variants of cement, around 400 warehouses serving as transshipment points, and several thousand distributor locations generating demand, needed to optimize this multi-echelon, multi-modal transport supply chain separately for planning and allocation purposes. For monthly planning as well as daily allocation, the demand is deterministic. Rail and road networks connect any two points in this supply chain, creating tens of thousands of such connections. Constraints include the plants' production capacity, transportation capacity, and rail wagon batch sizes. Each demand point has a minimum and maximum for shipments received, and prices vary at demand locations due to local factors. A large mixed integer programming model built using PROC OPTMODEL decides production at plants, demand fulfilled at each location, and the shipment routes to demand locations so as to maximize the profit contribution. Using Base SAS, we did significant pre-processing of data and created inputs for the optimization. Using outputs generated by PROC OPTMODEL and further processing in Base SAS, we generated several reports that went into the client's enterprise system and created tables for easy consumption of the optimization results by operations.
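A drastically scaled-down sketch of such a production-and-shipment model, here with two plants and three demand points, illustrative numbers, and the integer rail-batch constraints relaxed to a pure LP, using SciPy rather than the SAS/OR PROC OPTMODEL the paper actually employs:

```python
import numpy as np
from scipy.optimize import linprog

# Unit contribution (local price minus production + freight cost), 2 plants x 3 markets
contrib = np.array([[4.0, 3.0, 5.0],
                    [3.5, 4.5, 2.5]])
capacity = [100, 120]        # plant production capacities
demand_max = [80, 90, 70]    # maximum shipments each demand point can receive

# Decision variables x[p, m] flattened row-wise; linprog minimizes, so negate
c = -contrib.flatten()

A_ub, b_ub = [], []
for p in range(2):           # plant capacity: total shipped from plant p <= capacity
    A_ub.append([1.0 if i // 3 == p else 0.0 for i in range(6)])
    b_ub.append(capacity[p])
for m in range(3):           # demand ceiling: total received at market m <= demand_max
    A_ub.append([1.0 if i % 3 == m else 0.0 for i in range(6)])
    b_ub.append(demand_max[m])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 6)
profit = -res.fun            # maximized profit contribution
shipments = res.x.reshape(2, 3)
```

The real model adds shipment minimums, rail wagon batch-size integrality, and thousands of locations, but the structure (capacity rows, demand rows, negated contribution objective) is the same.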

Keywords: production planning, mixed integer optimization, network model, network optimization

Procedia PDF Downloads 34
170 Managerial Encouragement, Organizational Encouragement, and Resource Sufficiency and Its Effect on Creativity as Perceived by Architects in Metro Manila

Authors: Ferdinand de la Paz

Abstract:

In highly creative environments such as the business of architecture, business models focus largely on the traditional practice of mainstream design consultancy services as mandated and constrained by existing legislation. Architectural design firms, as business units belonging to the creative industries, have long been provoked to innovate not only in terms of their creative outputs but, more significantly, in the way they create and capture value from what they do. In the Philippines, there is still a dearth of studies exploring organizational creativity within the context of architectural firm practice, let alone across other creative industries. The study sought to determine the effects, measure the extent, and assess the relationships of managerial encouragement, organizational encouragement, and resource sufficiency on creativity as perceived by architects. A survey questionnaire was used to gather data from 100 respondents, and the analysis was done using descriptive statistics and correlational and causal-explanatory methods. The findings reveal a weak positive relationship between Managerial Encouragement (ME), Organizational Encouragement (OE), and Sufficient Resources (SR) on the one hand and Creativity (C) on the other. The study also revealed that while Organizational Encouragement and Sufficient Resources have significant effects on Creativity, Managerial Encouragement does not. It is recommended that future studies with a larger sample size be pursued among architects holding top management positions in architectural design firms to further validate these findings. It is also highly recommended that the other stimulant scales in the KEYS framework be considered in future studies covering other locales to generate a better understanding of the architecture business landscape in the Philippines.

Keywords: managerial encouragement, organizational encouragement, resource sufficiency, organizational creativity, architecture firm practice, creative industries

Procedia PDF Downloads 67
169 Design and Integration of a Renewable Energy Based Polygeneration System with Desalination for an Industrial Plant

Authors: Lucero Luciano, Cesar Celis, Jose Ramos

Abstract:

Polygeneration improves energy efficiency and reduces both energy consumption and pollutant emissions compared to conventional generation technologies. A polygeneration system is a variation of a cogeneration one in which more than two outputs, i.e., heat, power, cooling, water, energy or fuels, are accounted for. In particular, polygeneration systems integrating solar energy and water desalination represent promising technologies for energy production and water supply. They are therefore interesting options for coastal regions with high solar potential, such as those located in southern Peru and northern Chile; notice that most of the Peruvian and Chilean mining operations, which are intensive in electricity and water consumption, are located in these regions. Accordingly, this work focuses on the design and integration of a polygeneration system producing industrial heating, cooling, electrical power, and water for an industrial plant. The design procedure involves mixed integer linear programming (MILP) modeling, operational planning, and dynamic operating conditions. The technical and economic feasibility of integrating renewable energy technologies (photovoltaic and solar thermal, PV+CPS), thermal energy storage, power and thermal exchange, absorption chillers, cogeneration heat engines, and desalination technologies is particularly assessed. The system integration seeks to minimize the total annual cost subject to CO2 emission restrictions. The economic aspects accounted for include investment, maintenance, and operating costs.

Keywords: desalination, design and integration, polygeneration systems, renewable energy

Procedia PDF Downloads 98
168 Geographic Information System (GIS) for Structural Typology of Buildings

Authors: Néstor Iván Rojas, Wilson Medina Sierra

Abstract:

The management of spatial information for several neighborhoods in the city of Tunja is described through a geographic information system (GIS), in relation to the structural typology of the buildings. GIS provides tools that facilitate the capture, processing, analysis, and dissemination of cartographic information and the quality evaluation of the building classification, and it allows the development of a method that unifies and standardizes information processes. The project aims to generate a geographic database that is useful to the entities responsible for planning, disaster prevention, and the care of vulnerable populations; it also seeks to serve as a basis for seismic vulnerability studies that can contribute to an urban seismic microzonation study. The methodology consists of capturing the plat, including road names, neighborhoods, blocks, and buildings, to which attributes were added from the evaluation of each dwelling: the number of inhabitants and classification, year of construction, predominant structural system, type of mezzanine board and its state of favorability, presence of geotechnical problems, type of roof, use of each building, and damage to structural and non-structural elements. These data are tabulated in a spreadsheet that includes the cadastral number, through which they are systematically linked to the respective building, which also carries that attribute. A geo-referenced database is obtained, from which graphical outputs are generated, producing thematic maps for each evaluated attribute that clearly show the spatial distribution of the information obtained. Using GIS offers important advantages for spatial information management and facilitates consultation and updating. The project is useful as a basis for studies on planning and prevention.
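The key linking step, joining the tabulated survey attributes to the digitised building footprints through the shared cadastral number, can be sketched as a simple keyed join (all identifiers and values below are hypothetical, not from the Tunja database):

```python
# Spreadsheet rows keyed by cadastral number (attributes evaluated per building)
attributes = {
    "01-0023": {"inhabitants": 4, "year_built": 1978, "structural_system": "confined masonry"},
    "01-0024": {"inhabitants": 6, "year_built": 1995, "structural_system": "RC frame"},
}

# Building footprints as they would come from the digitised plat
buildings = [
    {"cadastral_number": "01-0023", "block": "B12"},
    {"cadastral_number": "01-0024", "block": "B12"},
]

# Join: attach the evaluated attributes to each footprint via the shared key
for b in buildings:
    b.update(attributes.get(b["cadastral_number"], {}))

print(buildings[0]["structural_system"])  # → confined masonry
```

In a GIS this same join is performed between the attribute table and the geo-referenced layer, after which each attribute can be symbolised as a thematic map.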

Keywords: microzonation, buildings, geo-processing, cadastral number

Procedia PDF Downloads 310
167 Social Ties and Integration of the Offenders

Authors: C. Chaillou

Abstract:

The dominant theoretical approaches in criminology are interested in the phenomenon of delinquency from the perspective of managing the risks incurred by the population. Thus, this line of research advocates prevention of the phenomenon through early detection of disorders in children. The treatments offered rely on medical research (genetics and biology are cited as references) and assume a strong naturalization of delinquent behaviour. The programs offered also reduce treatment to the correction of deviant behaviour and rely readily on behavioural guidelines with an educational component. Public policies then rely on these programs to prevent unwanted behaviour within a given population and to reduce the risk to society. This is the case in France, where national institutes frame (juvenile) violence as a public health problem. We consider that other approaches, drawn from sociology, are more relevant to the treatment of offenders. These approaches start not from prevention but from the entries into and exits from delinquency. Several modalities of entry and exit can be found and analyzed in terms of processes. We assume that there is a dynamic inherent in the individual and that it is important to take the offender's environment into account. These different types of processes can be illuminated by work derived from psychoanalytic psychopathology and lead to more effective treatment of delinquent acts. Psychoanalytic concepts have enabled us to offer a new way of treating delinquency: by distinguishing several types of relationship to the other, related to the clinical structure and the uniqueness of each case, we have been able to grasp the subjective and unconscious logics at work in delinquent acts. This research has facilitated the reduction of these types of subjective responses and proposed others, opening onto a reintegration of offenders into a more favourable and longer-lasting social link.

Keywords: delinquency, insertion, social link, unconscious

Procedia PDF Downloads 352
166 Vulnerability of People to Climate Change: Influence of Methods and Computation Approaches on Assessment Outcomes

Authors: Adandé Belarmain Fandohan

Abstract:

Climate change has become a major concern globally, particularly in rural communities that have to find rapid coping solutions. Several vulnerability assessment approaches have been developed in recent decades, which raises the risk that different methods lead to different conclusions, making comparisons difficult and decision-making inconsistent across areas. The effect of methods and computational approaches on estimates of people's vulnerability was assessed using data collected from the Gambia. Twenty-four indicators reflecting the vulnerability components (exposure, sensitivity, and adaptive capacity) were selected for this purpose. Data were collected through household surveys and key informant interviews; one hundred and fifteen respondents were surveyed across six communities and two administrative districts. Results were compared over three computational approaches: maximum value transformation normalization, z-score transformation normalization, and simple averaging. Regardless of the approach used, communities with high exposure to climate change and extreme events were the most vulnerable, and vulnerability was strongly related to the socio-economic characteristics of farmers. The survey evidenced variability in vulnerability among communities and administrative districts. Comparing outputs across approaches, people in the study area were overall found to be highly vulnerable under the simple averaging and maximum value transformation approaches, but only moderately vulnerable under the z-score transformation approach. It is suggested that assessment-approach-induced discrepancies be accounted for in international debates and that assessment approaches be harmonized/standardized so that outputs become comparable across regions. This would also increase the relevance of decision-making for adaptation policies.
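The three computational approaches compared in the study differ only in how indicator scores are normalized before aggregation. A minimal sketch with made-up scores for three communities (the exact indicator weights and aggregation used by the author are not stated in the abstract):

```python
import numpy as np

def normalize(col, method):
    if method == "max":      # maximum value transformation: scale by the column maximum
        return col / col.max()
    if method == "zscore":   # z-score transformation: centre and scale by std. deviation
        return (col - col.mean()) / col.std()
    return col               # "average": simple averaging of the raw scores

def community_indices(data, method):
    """data: communities x indicators. Normalize each indicator column across
    communities, then average per community into a composite vulnerability index."""
    cols = np.column_stack([normalize(data[:, j], method) for j in range(data.shape[1])])
    return cols.mean(axis=1)

# Hypothetical scores for 3 communities on 4 indicators
data = np.array([[0.8, 0.6, 0.9, 0.7],
                 [0.4, 0.5, 0.3, 0.6],
                 [0.9, 0.8, 0.7, 0.9]])

for m in ("max", "zscore", "average"):
    idx = community_indices(data, m)
    print(m, idx.round(2), "most vulnerable:", int(idx.argmax()))
```

Note how the index magnitudes differ by method even when the ranking of communities agrees, which is exactly the discrepancy the paper highlights: a fixed "high/moderate" threshold classifies the same data differently under different normalizations.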

Keywords: maximum value transformation, simple averaging, vulnerability assessment, West Africa, z-score transformation

Procedia PDF Downloads 79
165 Kýklos Dimensional Geometry: Entity Specific Core Measurement System

Authors: Steven D. P Moore

Abstract:

A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific derived geometric measurement system shares methods with fractals, in which a 'fractal transformation operator' is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semi-minor/semi-major axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union equals the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method lies in allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from the celestial to the subatomic. This opens remarkable possibilities, for example, geometric architecture representing scaled celestial models that incorporate planets (spheroids) and celestial motion (elliptical orbits).
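The division-and-subdivision of a 1D core measure described above can be illustrated as a simple iterated partition whose union always recovers the input interval (a generic sketch of the idea, not the author's formal operator):

```python
def subdivide(length, n_copies, iterations):
    """Iteratively split a 1D core measure (e.g. a radius interval) into
    n_copies equal subcomponents per iteration. Setting an iteration limit
    yields finitely many subcomponents whose union equals the input length."""
    parts = [length]
    for _ in range(iterations):
        parts = [p / n_copies for p in parts for _ in range(n_copies)]
    return parts

parts = subdivide(length=1.0, n_copies=3, iterations=2)
# 3^2 = 9 subcomponents; their sum recovers the original length (up to float rounding)
print(len(parts), sum(parts))
```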

Keywords: Kyklos, geometry, measurement, celestial, dimension

Procedia PDF Downloads 145
164 [Keynote Talk]: Water Resources Vulnerability Assessment to Climate Change in a Semi-Arid Basin of South India

Authors: K. Shimola, M. Krishnaveni

Abstract:

This paper examines the vulnerability assessment of water resources in a semi-arid basin using a 4-step approach. The vulnerability assessment framework is developed to study water resources vulnerability and includes the creation of GIS-based vulnerability maps, which represent the spatial variability of the vulnerability index. The paper introduces the 4-step approach to assess vulnerability with a new set of indicators. The approach is demonstrated using a framework composed of precipitation data for the period 1975-2010, temperature data for the period 1965-2010, hydrological model outputs, and the water resources GIS database. The vulnerability assessment is a function of three components: exposure, sensitivity, and adaptive capacity. The current water resources vulnerability is assessed using GIS-based spatio-temporal information. The rainfall coefficient of variation, monsoon onset and end dates, rainy days, seasonality indices, and temperature are selected for the criterion 'exposure'. Water yield, groundwater recharge, and evapotranspiration (ET) are selected for the criterion 'sensitivity'. Type of irrigation and storage structures are selected for the criterion 'adaptive capacity'. These indicators were mapped and integrated in a GIS environment using overlay analysis. The five sub-basins, namely Arjunanadhi, Kousiganadhi, Sindapalli-Uppodai and Vallampatti Odai, fall under a medium vulnerability profile, which indicates that the basin is under moderate water resources stress. The paper also explores the prioritization of sub-basin-wise adaptation strategies to climate change based on the vulnerability indices.

Keywords: adaptive capacity, exposure, overlay analysis, sensitivity, vulnerability

Procedia PDF Downloads 293
163 Land Suitability Scaling and Modeling for Assessing Crop Suitability in Some New Reclaimed Areas, Egypt

Authors: W. A. M. Abdel Kawy, Kh. M. Darwish

Abstract:

Adequate land use selection is an essential step towards achieving sustainable development. The main objective of this study is to develop a new scale for a land suitability system that is compatible with local conditions. Furthermore, it aims to adapt conventional land suitability systems to match the actual environmental status, in terms of soil types, climate, and other conditions, when evaluating land suitability for newly reclaimed areas. The new system suggests calculating land suitability considering 20 factors affecting crop selection, grouped into five categories: crop-agronomic, land management, development, environmental conditions, and socio-economic status. The factor scores are summed to calculate the total points, and the highest rating for each factor indicates the highest preference for the evaluated crop. The highest-rated crops are those with the highest total points for actual suitability. This study was conducted to assess the application efficiency of the new land suitability scale in recently reclaimed sites in Egypt. Thirty-five representative soil profiles were examined, and soil samples were subjected to physical and chemical analyses. Actual and potential suitabilities were calculated using the new land suitability scale. The obtained results confirmed the applicability of the new land suitability system to recommend the most promising crop rotation for the study areas. The outputs of this research revealed that the integration of different aspects in modeling and adapting the proposed model provides an effective and flexible technique, which contributes to making land suitability assessment for several crops more accurate and reliable.
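The additive scoring scheme described above, summing factor points per category and ranking crops by total, can be sketched as follows (the crops, categories, and point values here are hypothetical stand-ins; the real scale rates 20 individual factors):

```python
# Hypothetical factor ratings (points) for two candidate crops, aggregated
# into the five categories used by the proposed scale
ratings = {
    "wheat":  {"crop_agronomic": 18, "land_management": 14, "development": 12,
               "environmental": 16, "socio_economic": 15},
    "cotton": {"crop_agronomic": 15, "land_management": 16, "development": 10,
               "environmental": 12, "socio_economic": 13},
}

def total_points(crop):
    """Sum the category scores; the highest total indicates the most
    suitable crop for the evaluated site."""
    return sum(ratings[crop].values())

best = max(ratings, key=total_points)
print({c: total_points(c) for c in ratings}, "->", best)
```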

Keywords: analytic hierarchy process, land suitability, multi-criteria analysis, new reclaimed areas, soil parameters

Procedia PDF Downloads 112
162 Impact of Lifelong-Learning Mindset on Career Success of the Accounting and Finance Professionals

Authors: R. W. A. V. A. Wijenayake, P. M. R. N. Fernando, S. Nilesh, M. D. G. M. S. Diddeniya, M. Weligodapola, P. Shamila

Abstract:

The study examines the impact of a lifelong-learning mindset on the career success of accounting and finance professionals in the Western Province of Sri Lanka. Its main objective is to identify how such a mindset affects career success. The lifelong-learning mindset is the desire to learn new things; curiosity, resilience, and strategic thinking are the constructs selected to measure it. Career success refers to objective and emotional measures of improvement in one's work life, measured here through the number of promotions granted over a person's career. The research paradigm is positivism, and a deductive approach is adopted, as the study tests an existing theory. Accounting and finance professionals in the Western Province were selected because most reputed international and local companies, and in particular most company headquarters, are located there. Since responses could not be collected from the whole population, a simple random sampling method was used with a sample size of 120. Quantitative data were gathered through an online questionnaire using a 5-point Likert scale. The final outputs of the study offer recommendations to several parties, such as universities, undergraduates, companies, and policymakers, to support students and employees mentally and financially and to motivate them to continue their studies after completing their degrees.

Keywords: career success, curiosity, lifelong learning mindset, resilience, strategic thinking

Procedia PDF Downloads 61
161 Machine Learning Based Anomaly Detection in Hydraulic Units of Governors in Hydroelectric Power Plants

Authors: Mehmet Akif Bütüner, İlhan Koşalay

Abstract:

Hydroelectric power plants (HEPPs) are the renewable energy power plants with the highest installed power in the world. While the control systems operating in these power plants ensure that the system operates at the desired operating point, they are also responsible for stopping the relevant unit safely in case of any malfunction. These control systems are expected not to miss signals that require stopping while, on the other hand, not causing unnecessary stops. In traditional control systems, including modern systems with SCADA infrastructure, the alarm conditions that create warnings or the trip conditions that automatically take the relevant unit out of service are usually generated with predefined limits, regardless of the operating conditions. This approach makes alarm/trip conditions less likely to detect the minimal changes that may develop into serious malfunction scenarios in the near future. With the methods proposed in this research, the routine behavior of the oil circulation of the hydraulic governor of a HEPP is modeled with machine learning methods using historical data obtained from the SCADA system. Using the created model and recently gathered data from the control system, the oil pressure of the hydraulic accumulators is estimated. Comparison of this estimate with the measurements recorded instantly by the SCADA system helps foresee a failure before it worsens and determine the remaining useful life. By using the model outputs, maintenance work can be planned better, so that undesired stops are prevented and, in case of any malfunction, the system is stopped or alarms are triggered before the problem grows.
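The core idea, fit a regression model of routine behavior on historical SCADA data, then flag measurements that deviate from the model's estimate, can be sketched with a plain least-squares model on synthetic data (feature names and the fault magnitude below are illustrative assumptions, not values from the plant):

```python
import numpy as np

def fit_routine_model(X, y):
    """Least-squares linear model of routine oil pressure from historical
    SCADA features X (n_samples x n_features) and measured pressure y."""
    Xb = np.column_stack([X, np.ones(len(X))])      # append an intercept column
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return coef

def detect_anomalies(coef, X, y_measured, threshold):
    """Flag samples whose measured pressure deviates from the model
    estimate by more than the residual threshold."""
    Xb = np.column_stack([X, np.ones(len(X))])
    residuals = np.abs(y_measured - Xb @ coef)
    return residuals > threshold

rng = np.random.default_rng(1)
X_hist = rng.normal(size=(500, 3))                  # e.g. temperature, load, pump duty
y_hist = X_hist @ np.array([2.0, -1.0, 0.5]) + 10 + 0.05 * rng.normal(size=500)
coef = fit_routine_model(X_hist, y_hist)

X_new = rng.normal(size=(5, 3))
y_new = X_new @ np.array([2.0, -1.0, 0.5]) + 10
y_new[2] += 3.0                                     # inject a pressure fault
flags = detect_anomalies(coef, X_new, y_new, threshold=1.0)
print(flags)  # only the faulted sample should be flagged
```

A production version would use richer regressors and a threshold calibrated from the residual distribution, but the residual-comparison logic is the same.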

Keywords: hydroelectric, governor, anomaly detection, machine learning, regression

Procedia PDF Downloads 62
160 Simulation and Synoptic Investigation of a Severe Dust Storm in Urmia Lake in the Middle East

Authors: Nasim Hossein Hamzeh, Karim Shukurov, Abbas Ranjbar Saadat Abadi, Alaa Mhawish, Christian Opp

Abstract:

Deserts are the main dust sources in the world, and recently dried lake beds have caused environmental problems in their surrounding areas. In this study, Urmia Lake was the source of dust from April 24 to April 25, 2017. The local dust storm was combined with another large-scale dust storm that had originated over Saudi Arabia and Iraq 1-2 days earlier. Synoptic investigation revealed that the severe dust storm was driven by a strong Black Sea cyclone and a low-pressure system over the Middle East and central Iraq, in conjunction with a high-pressure system associated with a strong pressure gradient and a quasi-stationary long-wave trough over the east and south of the Mediterranean Sea. Based on HYSPLIT 72-hour backward and forward trajectories, the most probable dust transport routes to and from the Urmia Lake region were estimated. Using the concentration-weighted trajectory (CWT) method based on 24-hour backward and 24-hour forward trajectories, the spatial distributions of the potential sources of the PM10 observed in the Urmia Lake region on April 23-26, 2017 were mapped. The vertical profile of dust particles simulated with the WRF-Chem model using two dust schemes showed dust ascending up to 5 km above the lake. The dust scheme outputs also show that the fluctuations in modeled PM10 occur 12 hours earlier than the surface PM10 measured at five air pollution monitoring stations around Urmia Lake on April 23-26, 2017.

Keywords: dust storm, synoptic investigation, WRF-Chem model, Urmia Lake, Lagrangian trajectory

Procedia PDF Downloads 186
159 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnoses and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
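The fusion step, concatenating the CNN outputs with other input data and grid-searching an MLP, can be sketched with scikit-learn on synthetic stand-in features (the feature semantics, labels, and hyperparameter grid below are illustrative assumptions, not the authors' configuration):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
# Stand-ins for per-sample risk scores from the three CNNs (ResNet50, VGG16, YOLOv8)
cnn_scores = rng.uniform(size=(n, 3))
other_inputs = rng.uniform(size=(n, 2))            # e.g. additional assessment data
X = np.hstack([cnn_scores, other_inputs])          # fused feature vector
y = (cnn_scores.mean(axis=1) > 0.5).astype(int)    # synthetic risk label

# Grid-search the MLP hyperparameters, as the paper does with Grid Search CV
grid = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={"hidden_layer_sizes": [(8,), (16,)], "alpha": [1e-4, 1e-2]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 2))
```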

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 33
158 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process consists of two main operators: selection and omission. Once data are selected, they undergo several geometric transformations such as elimination, simplification, smoothing, exaggeration, displacement, aggregation, and size reduction. As a result of these operations at different levels of data, geometric properties of spatial features such as length, sinuosity, orientation, perimeter, and area are altered. The effect is most severe in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. When GIS users want to analyze a set of spatial data, they often retrieve a data set and perform the analysis without considering very important characteristics such as the scale, the purpose of the map, and the degree of generalization. Further, GIS users use and compare different maps with different degrees of generalization. Sometimes GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that it is not appropriate to create a larger-scale map from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales, 1:10000, 1:50000, and 1:250000, prepared by the Survey Department of Sri Lanka, the National Mapping Agency of Sri Lanka, were used. Features common to all three maps were used, and an overlay analysis was repeated with different combinations of the data. Road, river, and land use data sets were used for the study. A simple model, to find the best place for a wildlife park, was used to identify the effects. The results show remarkable effects at different degrees of generalization: different locations with different geometries were obtained as the outputs of the analysis. The study suggests that there should be reasonable methods to overcome this effect. As a solution, it would be very reasonable to bring all the data sets to a common scale before performing the analysis.
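The geometric effect of generalization on a property such as length can be demonstrated in a few lines. The sketch below uses an invented, sinuous polyline and a crude vertex-elimination operator; real generalization (e.g., Douglas-Peucker simplification) is more sophisticated, but the measured length shrinks for the same reason.

```python
from math import hypot

def length(line):
    """Planar length of a polyline given as a list of (x, y) vertices."""
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(line, line[1:]))

def simplify(line, keep_every=2):
    """Crude vertex-elimination generalization: keep the endpoints and
    every keep_every-th vertex in between."""
    return line[:-1:keep_every] + [line[-1]]

# a hypothetical sinuous river digitised at a large scale
river = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0), (5, 1), (6, 0)]
generalised = simplify(river, keep_every=2)

print(round(length(river), 3))        # 8.485 (six segments of sqrt(2))
print(round(length(generalised), 3))  # 6.0   (three straight segments)
```

An overlay analysis fed the generalised line would thus see a river roughly 30% shorter and far less sinuous, which is exactly how different generalization levels push a site-selection model toward different answers.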

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 307
157 The Politics of Disruption: Disrupting Polity to Influence Policy in Nigeria

Authors: Okechukwu B. C. Nwankwo

Abstract:

The surge of social protests sweeping through the globe is a contemporary phenomenon, yet the phenomenon in itself is not new, and various scholars have over the years developed conceptual frameworks for evaluating it. Adopting and adapting some of these frameworks, this paper begins from a purely theoretical perspective, exploring the concept and content of social protest within the specific context of Nigeria. It proceeds to build a typology of the phenomenon in terms of form, actors, origin, character, organisation, goal, dynamics, outcome, and other variables that are context-relevant for evaluating it in an operationally useful manner. The centrality of the context in which protest evolves is demonstrated. Adopting Easton's systems theory, the paper builds on the assumption that protests emerge whenever and wherever political institutions and structures prove unable or unwilling to transform inputs, in the form of basic demands, into outputs, in the form of responsive policies. It argues that protests in Nigeria are simply the crystallisation of opposition in the streets; protests are thus extra-institutional politics. This is usually the case, as elsewhere, where there is no functional institutionalised opposition. Noting that protest, disruptive or otherwise, is an influence strategy, it argues that every single protest is a new opportunity for reform, for the reorganisation of state capacities, and for modifying the rights and obligations of citizens and government to each other. Each reform outcome is, however, only a temporary antecedent; its extent gives the signal for the next similar protest event. By providing evidence on how protests in Nigeria create opportunities for reform and for more accountable, more effective governance, the paper shows the positive impact of protests and their importance even in the effort to consolidate the nation's nascent democracy. Data on protest events will be based on media reports, especially print media.

Keywords: democracy, dialectics, social protest, reform

Procedia PDF Downloads 107
156 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which limits the coverage of risk detection and makes it time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values for the model to be identified. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 56
155 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment

Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee

Abstract:

Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches to Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are well suited to sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine these strengths of RNNs and CNNs to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
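The final step, using a relation vector as the attention query over the Bi-LSTM outputs, can be sketched in plain Python on toy vectors. The 2-d hidden states and relation vector below are invented for illustration; in the actual model both would be learned, high-dimensional quantities.

```python
from math import exp

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    e = [exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def attend(hidden_states, relation_vec):
    """Score each Bi-LSTM hidden state by its dot product with the
    relation vector, softmax the scores into attention weights, and
    return the weighted sum as the sentence representation."""
    scores = [dot(h, relation_vec) for h in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# toy 2-d hidden states for a 3-token sentence, and a relation vector
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
r = [1.0, 1.0]
sentence_vec = attend(H, r)  # the third state, most aligned with r, dominates
```

The weighting lets the relation information decide which token positions contribute most to the final representation, instead of pooling all positions equally.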

Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation

Procedia PDF Downloads 324
154 Power Energy Management for a Grid-Connected PV System Using Rule-Based Fuzzy Logic

Authors: Nousheen Hashmi, Shoab Ahmad Khan

Abstract:

Active collaboration among green energy sources and the load demand leads to serious issues related to power quality and stability. The growing number of green energy resources and distributed generators requires new operating strategies to maintain power stability among the green energy resources and the micro-grid/utility grid. This paper presents a novel technique for power energy management in a grid-connected photovoltaic system with energy storage under a set of constraints, including weather conditions, load-shedding hours, and peak-pricing hours, by using a rule-based fuzzy smart-grid controller to schedule power coming from multiple sources (photovoltaic, grid, battery) under the above constraints. The technique fuzzifies all the inputs and establishes a fuzzy rule set mapping them to fuzzy outputs before defuzzification. Simulations are run for a 24-hour period, and a rule-based power scheduler is developed. The proposed fuzzy control strategy is able to sense continuous fluctuations in photovoltaic power generation, load demand, grid availability (load-shedding patterns), and battery state of charge in order to make correct and quick decisions. The suggested fuzzy rule-based scheduler operates well with vague inputs, and thus does not require an exact numerical model and can handle nonlinearity. This technique provides a framework that can be extended to handle multiple special cases for optimized operation of the system.
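The fuzzify-infer-defuzzify loop at the heart of such a controller can be sketched compactly. The membership breakpoints, the rule consequents, and the grid-draw setpoints below are all invented for illustration (a Sugeno-style weighted average stands in for whatever defuzzification the paper's controller uses); only the input variables, battery state of charge and PV power, mirror the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def schedule(soc, pv_kw):
    """Sugeno-style rule base: each rule's firing strength weights a
    crisp grid-draw setpoint (kW); the output is their weighted mean."""
    low_soc = tri(soc, -1, 0, 50)       # SOC in percent
    high_soc = tri(soc, 50, 100, 101)
    low_pv = tri(pv_kw, -1, 0, 3)       # PV generation in kW
    high_pv = tri(pv_kw, 2, 5, 6)
    rules = [
        (min(low_soc, low_pv), 4.0),    # SOC low, PV low  -> draw from grid
        (min(low_soc, high_pv), 1.0),   # SOC low, PV high -> PV mostly charges
        (min(high_soc, low_pv), 2.0),   # SOC high, PV low -> battery assists
        (min(high_soc, high_pv), 0.0),  # SOC high, PV high -> no grid draw
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(round(schedule(soc=25, pv_kw=0.5), 2))  # 4.0 kW drawn from the grid
```

Because the rules overlap, the controller degrades gracefully between operating regimes instead of switching abruptly, which is the practical appeal of the fuzzy formulation.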

Keywords: photovoltaic, power, fuzzy logic, distributed generators, state of charge, load shedding, membership functions

Procedia PDF Downloads 456
153 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized as a feedstock by a wide range of downstream chemical processes. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the cleaned dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction, with accuracy similar to the RF model: R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
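The DBSCAN outlier-removal step can be illustrated with a minimal, self-contained implementation. This is a pure-Python sketch on invented 2-d points; the study would apply the algorithm (e.g., the scikit-learn implementation) to the multivariate experimental dataset, and the eps/min_pts values here are arbitrary.

```python
from math import dist

def dbscan_outliers(points, eps=1.0, min_pts=3):
    """Minimal DBSCAN used purely for cleaning: points labelled as
    noise (belonging to no density-reachable cluster) are dropped."""
    n = len(points)
    labels = [None] * n            # None = unvisited, -1 = noise
    cluster = 0

    def neighbors(i):
        return [j for j in range(n) if dist(points[i], points[j]) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise; may be re-claimed
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point joins the cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_pts:
                queue.extend(neighbors(j))  # j is a core point: expand
    return [p for p, lab in zip(points, labels) if lab != -1]

data = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4), (10, 10)]  # (10, 10) is isolated
clean = dbscan_outliers(data, eps=1.0, min_pts=3)
print(clean)  # the four clustered points; (10, 10) removed as noise
```

Unlike a fixed z-score cutoff, DBSCAN needs no distributional assumption: anything not density-reachable from a core point is treated as an outlier.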

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 61
152 Exploring SL Writing and SL Sensitivity during Writing Tasks: Poor and Advanced Writing in a Context of Second Language other than English

Authors: Sandra Figueiredo, Margarida Alves Martins, Carlos Silva, Cristina Simões

Abstract:

This study is part of a larger empirical research project that examines second language (SL) learners' profiles and valid procedures for performing complete and diagnostic assessment in schools. 102 learners of Portuguese as a SL, aged 7 to 17 years and speakers of distinct home languages, were assessed on several linguistic tasks. In this article, we focus on writing performance in the specific task of narrative essay composition. The written outputs were scored on six components adapted from an English SL assessment context (Alberta Education): linguistic vocabulary, grammar, syntax, strategy, socio-linguistics, and discourse. The writing processes and strategies in the Portuguese language used by different immigrant students were analysed to determine the features and diversity of deficits in authentic texts produced by SL writers. Differentiated performance was examined against the following variables: grades, previous schooling, home language, instruction in the first language, and exposure to Portuguese as a second language. Speakers of Indo-Aryan languages showed low writing scores compared to their peers, and the predictor was the type of language and its cognitive mapping (as with Mandarin and Arabic), not linguistic distance. Home language instruction should also be prominently considered in further research to understand the specificities of the cognitive academic profile in a Romance-language learning context. Additionally, this study examined teachers' representations, which are addressed here to understand the educational implications of second language teaching for the psychological distress of different minorities in schools of specific host countries.

Keywords: home language, immigrant students, Portuguese language, second language, writing assessment

Procedia PDF Downloads 436
151 Co-Operation in Hungarian Agriculture

Authors: Eszter Hamza

Abstract:

The competitiveness of economic operators is based on their ability to co-operate, which is relatively weak in Hungary. The development of co-operation is a high priority in the Common Agricultural Policy 2014-2020. The aim of the paper is to assess co-operation in Hungarian agriculture and to estimate the economic outputs and benefits of co-operation, based on statistical data processing and the literature. A further objective is to explore the potential of agricultural co-operation with the help of interviews and a questionnaire survey. The research seeks to answer what fundamental factors play a role in the development of co-operation, what the motivations of the actors are, and what the key success factors and pitfalls are. The results were analysed using econometric methods. In Hungarian agriculture, several forms of co-operation can be found: cooperatives, producer groups (PGs) and producer organizations (POs), machinery cooperatives, integrator companies, product boards, and interbranch organisations. Despite these various forms of agricultural co-operation, their economic weight is significantly lower in Hungary than in Western European countries. In terms of agricultural importance, the integrator companies carry the most weight among the forms of co-operation. Hungarian farmers are linked to co-operations or organizations mostly for procurement and sales. Less than 30 percent of the surveyed farmers are members of a producer organization or cooperative. The level of trust among farmers is low. The main obstacles to the development of formalized co-operation are producers' risk aversion and the black economy in agriculture. Producers often prefer informal co-operation to long-term contractual relationships. Hungarian agricultural co-operation is characterized not by dynamic development but by slow qualitative change. For the future, one breakout point could be the association of producer groups and organizations, which, in addition to the benefits of market concentration, could act more effectively in the dissemination of knowledge, the operation of advisory networks, and innovation.

Keywords: agriculture, co-operation, producer organisation, trust level

Procedia PDF Downloads 364
150 Optimisation of Metrological Inspection of a Developmental Aeroengine Disc

Authors: Suneel Kumar, Nanda Kumar J. Sreelal Sreedhar, Suchibrata Sen, V. Muralidharan

Abstract:

Fan technology is critical and crucial for any aero engine. The fan disc forms a critical part of the fan module, and it is an airworthiness requirement to have a metrologically qualified disc. The current study uses tactile probing and scanning on an articulated measuring machine (AMM), a bridge-type coordinate measuring machine (CMM), and metrology software for intermediate and final dimensional and geometrical verification during the prototype development of the disc, which is manufactured through forging and machining. The circumferential dovetails, manufactured through milling, are evaluated with the analysed metrological process. To perform metrological optimization, a change of philosophy is needed, making quality measurements available as fast as possible to improve process knowledge and accelerate the process, but with accurate, precise, and traceable measurements. The offline CMM programming for inspection and the optimisation of the CMM inspection plan are crucial portions of the study and are discussed. A dimensional measurement plan per the ASME B89.7.2 standard is an important requirement for reaching an optimised CMM measurement plan and strategy. The effects of the probing strategy, stylus configuration, and approximation strategy on the measurements of the circumferential dovetails of the developmental prototype disc are discussed. The results are presented as an enhancement of the R&R (repeatability and reproducibility) values, with uncertainty levels within the desired limits. The findings from the measurement strategy adopted for dovetail evaluation and the optimisation of inspection time are discussed with the help of various analyses and graphical outputs obtained from the verification process.
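The R&R improvement reported above can be illustrated with a deliberately simplified repeatability/reproducibility computation. This is not the formal AIAG ANOVA gauge study the paper would rely on: the operators, the readings, and the pooling below are invented, and part-to-part variation is ignored.

```python
from statistics import mean, pstdev

def gauge_rr(measurements):
    """Very simplified gauge R&R sketch: repeatability is the mean
    within-operator spread of repeated readings of one feature, and
    reproducibility is the spread of the operator means.
    `measurements` maps operator -> repeated readings (mm)."""
    repeatability = mean(pstdev(reads) for reads in measurements.values())
    reproducibility = pstdev([mean(reads) for reads in measurements.values()])
    return repeatability, reproducibility

# hypothetical repeated probings of one dovetail width by two operators
readings = {
    "operator_a": [12.001, 12.003, 12.002],
    "operator_b": [12.004, 12.006, 12.005],
}
ev, av = gauge_rr(readings)  # equipment (EV) and appraiser (AV) variation
```

Here the between-operator spread exceeds the within-operator spread, which is the kind of signal that would prompt revisiting the probing strategy or stylus configuration.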

Keywords: coordinate measuring machine, CMM, aero engine, articulated measuring machine, fan disc

Procedia PDF Downloads 85
149 Empirical Study of Correlation between the Cost Performance Index Stability and the Project Cost Forecast Accuracy in Construction Projects

Authors: Amin AminiKhafri, James M. Dawson-Edwards, Ryan M. Simpson, Simaan M. AbouRizk

Abstract:

Earned value management (EVM) was introduced as an integrated method combining schedule, budget, and the work breakdown structure (WBS). EVM provides various indices to demonstrate project performance, including the cost performance index (CPI). CPI is also used to forecast the final project cost at completion based on cost performance during project execution. Knowing the final project cost during execution allows corrective actions to be initiated, which can enhance project outputs. CPI, however, is not constant during the project, and calculating the final project cost using a variable index is an inaccurate and challenging task for practitioners. Because CPI is based on cumulative progress values, and because of the learning curve effect, CPI variation dampens and stabilizes as the project progresses. Although various definitions of CPI stability have been proposed in the literature, many scholars have agreed upon the definition that considers a project stable if the CPI at 20% completion varies by less than 0.1 from the final CPI. While the 20% completion point is recognized as the stability point for military development projects, the stability of construction projects has not been studied. In the current study, an empirical study was first conducted using construction project data to determine the stability point for construction projects. Early findings demonstrated that the majority of construction projects stabilize towards completion (i.e., after the 70% completion point). To investigate the effect of CPI stability on forecast accuracy, the correlation between CPI stability and the accuracy of the cost-at-completion forecast was also investigated. It was determined that as projects progress towards completion, the variation of the CPI decreases and the accuracy of the final cost forecast increases. Most projects were found to have 90% accuracy in the final cost forecast at the 70% completion point, which is in line with the stability findings. It can be concluded that early stabilization of the project CPI results in more accurate cost-at-completion forecasts.
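The forecasting mechanics behind these findings are compact: CPI is the ratio of earned value to actual cost, and the common index-based forecast assumes the cumulative CPI persists for the remaining work. The project figures below are invented for illustration.

```python
def cpi(earned_value, actual_cost):
    """Cost Performance Index: EV / AC (a value below 1 means the
    project is running over budget)."""
    return earned_value / actual_cost

def eac(budget_at_completion, cpi_value):
    """Estimate At Completion, assuming the cumulative CPI persists
    for the remaining work: EAC = BAC / CPI."""
    return budget_at_completion / cpi_value

# hypothetical project observed at the 70% completion point
bac = 10_000_000            # total budget at completion
ev = 0.70 * bac             # earned value of work performed so far
ac = 7_500_000              # actual cost incurred to date
c = cpi(ev, ac)
print(round(c, 3))          # 0.933 -> running over budget
print(round(eac(bac, c)))   # 10714286 forecast final cost
```

The forecast is only as good as the assumption that CPI stays put, which is precisely why the stability point matters: once CPI has stabilized, BAC/CPI becomes a trustworthy estimate.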

Keywords: cost performance index, earned value management, empirical study, final project cost

Procedia PDF Downloads 131
148 Exploration of Industrial Symbiosis Opportunities with an Energy Perspective

Authors: Selman Cagman

Abstract:

A detailed analysis was made within an organized industrial zone (OIZ) that contains 1165 production facilities in sectors such as the manufacture of furniture, fabricated metal products (machinery and equipment), food products, plastic and rubber products, machinery and equipment, non-metallic mineral products, electrical equipment, textile products, and wood and cork products. In this OIZ, a field study was conducted on a set of facilities that represent the sectoral distribution of the whole OIZ. In this manner, 207 facilities were included in the site visit, and a 17-question survey was carried out with each of them to assess their inputs, outputs, and waste amounts during manufacturing. The survey results identify that the wastes from these facilities include MDF/particleboard and chipboard particles, textile, food, foam rubber, sludge (treatment sludge, phosphate-paint sludge, etc.), plastic, paper and packaging, scrap metal (aluminum shavings, steel shavings, iron scrap, profile scrap, etc.), slag (coal slag), ceramic fracture, and ash from fluidized beds. As a result, five industrial symbiosis projects were established in this study. One of the projects is a 2,840 kW integrated biomass-based waste incineration and energy production facility running on 35,000 tons/year of MDF particle and chipboard waste. Another is a biogas plant fed with 225 tons/year of whey, 100 tons/year of sesame husk, 40 tons/year of burnt wafer dough, and 2,000 tons/year of biscuit waste. The investment and operational costs of these two plants are given in detail. The payback time is almost 4 years for the 2,840 kW plant and around 6 years for the biogas plant.
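The reported payback times follow from the standard simple-payback formula. The capital and annual net benefit values below are invented placeholders chosen only to land near the reported 4- and 6-year figures; the paper's actual cost breakdown is what determines the real numbers.

```python
def simple_payback(capex, annual_net_benefit):
    """Simple (undiscounted) payback period in years: the capital
    expenditure divided by the yearly net benefit it generates."""
    return capex / annual_net_benefit

# illustrative figures only (currency units arbitrary)
incinerator_years = simple_payback(capex=4_000_000, annual_net_benefit=1_000_000)
biogas_years = simple_payback(capex=1_200_000, annual_net_benefit=200_000)
print(incinerator_years, biogas_years)  # 4.0 6.0
```

A discounted analysis would lengthen both paybacks somewhat, since benefits arriving in later years are worth less than the capital spent up front.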

Keywords: industrial symbiosis, energy, biogas, waste to incineration

Procedia PDF Downloads 82
147 Feasibility of Solar Distillation as Household Water Supply in Saline Zones of Bangladesh

Authors: Md. Rezaul Karim, Md. Ashikur Rahman, Dewan Mahmud Mim

Abstract:

Scarcity of potable water resulting from rapid climate change and saltwater intrusion into groundwater has been a major problem in coastal regions around the world. In tropical countries like Bangladesh, where sunlight is available for more than 10 hours a day, solar distillation provides a promising, sustainable means of safe drinking water supply for poor coastal households, with modest cost and little difficulty of construction and maintenance. In this paper, two passive solar stills, a conventional single-slope solar still (CSS) and a pyramid solar still (PSS), are used, and the relationship is established between distilled water output and four factors, temperature, solar intensity, relative humidity, and wind speed, for Gazipur, Bangladesh. The outputs of the two stills are compared over a nine-month period (January-September), and efficiency is calculated. A thermal mathematical model is then developed, and the distilled water output for Khulna, Bangladesh is computed. The difference between the outputs for the two cities, Gazipur and Khulna, is demonstrated, and finally an economic analysis is prepared. The distillation output correlates positively with temperature and solar intensity, inversely with relative humidity, and wind speed has a negligible effect. The maximum output obtained for Gazipur is 3.8 L/m²/day for the conventional solar still and 4.3 L/m²/day for the pyramid still, with almost 15% higher efficiency for the pyramid still. Productivity in Khulna is found to be almost 20% higher than in Gazipur. Based on the economic analysis, taking 10 BDT per liter, the net profit, benefit-cost ratio, and payback period all indicate that both stills are feasible, but the pyramid still is more feasible than the conventional still. Finally, for a 3-4 member family, an area of 4 m² is suggested for the conventional still and 3 m² for the pyramid solar still.
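A discounted benefit-cost ratio of the kind used in such an economic analysis can be sketched as follows. Only the 10 BDT/L water price and the 4.3 L/m²/day pyramid-still yield come from the abstract; the capital cost, annual maintenance cost, service life, discount rate, and number of productive days are invented assumptions.

```python
def benefit_cost_ratio(annual_benefit, annual_cost, capex, years, rate=0.10):
    """Discounted benefit-cost ratio over the still's service life:
    present value of benefits over present value of all costs."""
    pv = lambda cash: sum(cash / (1 + rate) ** t for t in range(1, years + 1))
    return pv(annual_benefit) / (capex + pv(annual_cost))

# pyramid still: 4.3 L/m2/day on a 3 m2 still, assuming 300 productive days
litres_per_year = 4.3 * 3 * 300
benefit = litres_per_year * 10   # BDT per year at 10 BDT per litre
bcr = benefit_cost_ratio(benefit, annual_cost=2_000, capex=60_000, years=10)
print(round(bcr, 2))  # comfortably above 1 under these assumptions
```

A ratio above 1 marks the investment as feasible; comparing the two stills on the same assumptions is what ranks the pyramid still ahead of the conventional one.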

Keywords: solar distillation, household water supply, saline zones, Bangladesh

Procedia PDF Downloads 249
146 Impact Assessment of Climate Change on Water Resources in the Kabul River Basin

Authors: Tayib Bromand, Keisuke Sato

Abstract:

This paper presents an introduction to the current water balance and a climate change assessment in the Kabul River basin, covering the historical and future impacts of climate change on different components of water resources and hydrology. The Kabul River basin, in the eastern part of Afghanistan, was chosen due to rapid population growth and land degradation, in order to quantify the potential influence of global climate change on its hydrodynamic characteristics. The lack of observed meteorological data was the main limitation of the present research; the few existing precipitation stations in the plain area of the Kabul basin were selected for comparison with TRMM precipitation records, and the result was evaluated as satisfactory based on regression and normal-ratio methods. The TRMM daily precipitation and NCEP temperature data sets were therefore applied in the SWAT model to evaluate the water balance for 2008 to 2012. The middle of the twenty-first century (2064) was selected as the target period for assessing the impacts of climate change on the hydrology of the Kabul River basin. For this purpose, three emission scenarios, A2, A1B, and B1, and four GCMs, MIROC 3.2 (Med), CGCM 3.1 (T47), GFDL-CM2.0, and CNRM-CM3, were selected to estimate the future initial conditions of the proposed model. The outputs of the model were compared and calibrated, with satisfactory R² values, and the hydrodynamic characteristics and precipitation pattern were assessed. The results show that there will be significant impacts on the precipitation pattern, such as a decrease of snowfall in the mountainous area of the basin in the winter season due to an increase of 2.9°C in mean annual temperature, and land degradation due to deforestation.
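The normal-ratio method mentioned for the gauge/TRMM comparison has a standard closed form: the estimate at an ungauged point is the mean of the surrounding gauges' observations, each scaled by the ratio of annual normals. The station normals and observations below are invented to keep the arithmetic transparent.

```python
def normal_ratio(target_normal, gauges):
    """Normal-ratio estimate of precipitation at an ungauged point:
    average each surrounding gauge's observation scaled by the ratio
    of the target's long-term annual normal to that gauge's normal.
    `gauges` is a list of (observed, annual_normal) pairs."""
    return sum(target_normal / normal * observed
               for observed, normal in gauges) / len(gauges)

# hypothetical check of a TRMM grid value against three nearby gauges:
# (observed mm on the storm day, long-term annual normal mm)
gauges = [(12.0, 300.0), (10.0, 250.0), (9.0, 225.0)]
estimate = normal_ratio(target_normal=275.0, gauges=gauges)
print(round(estimate, 2))  # 11.0
```

The method is preferred over a plain arithmetic mean when the annual normals of the surrounding stations differ markedly from the target's, as is typical in basins with strong orographic gradients.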

Keywords: climate change, emission scenarios, hydrological components, Kabul river basin, SWAT model

Procedia PDF Downloads 428