Search results for: size driven magnetic ordering
5164 Experimental Study on Granulated Steel Slag as an Alternative to River Sand
Authors: K. Raghu, M. N. Vathhsala, Naveen Aradya, Sharth
Abstract:
River sand is the most preferred fine aggregate for mortar and concrete. It is a product of the natural weathering of rocks over millions of years and is mined from river beds. Sand mining has disastrous environmental consequences, and the excessive mining of river beds is creating an ecological imbalance. This has led to restrictions on sand mining imposed by the Ministry of Environment. Driven by the acute need for sand, stone dust or manufactured sand prepared by crushing and screening coarse aggregate has been used as sand in the recent past. However, manufactured sand is also a natural material and has quarrying and quality issues. To reduce the burden on the environment, alternative materials to be used as fine aggregates are being extensively investigated all over the world. Considering the quantum of requirements, quality, and properties, a global consensus has formed around one material: granulated slag. Granulated slag has been proven to be a suitable replacement for natural sand and crushed fine aggregates. In developed countries, the use of granulated slag as a fine aggregate to replace natural sand is well established and in regular practice. In the present paper, granulated slag has been tested for use in mortar. Slags are the main by-products generated during iron and steel production. Over the past decades, steel production has increased and, consequently, the higher volumes of by-products and residues generated have driven the reuse of these materials in an increasingly efficient way. In recent years, new technologies have been developed to improve the recovery rates of slags. Increasing slag recovery and use in different fields of application, such as cement making, construction, and fertilizers, helps preserve natural resources.
In addition to protecting the environment, these practices produce economic benefits by providing sustainable solutions that can allow the steel industry to achieve its ambitious target of "zero waste" in the coming years. Slags are generated at two different stages of steel production, iron making and steel making, known as BF (Blast Furnace) slag and steel slag, respectively. Slagging agents or fluxes, such as limestone, dolomite, and quartzite, are added into BF or steel making furnaces to remove impurities from ore, scrap, and other ferrous charges during smelting. Slag formation is the result of a complex series of physical and chemical reactions between the non-metallic charge (limestone, dolomite, fluxes), the energy sources (coal, coke, oxygen, etc.), and refractory materials. Because of the high temperatures (about 1500 °C) during their generation, slags do not contain any organic substances. Since slags are lighter than the liquid metal, they float and are easily removed. The slags protect the metal bath from the atmosphere and help maintain its temperature. These slags are tapped in the liquid state and either solidified in air after dumping in a pit or granulated by impinging water systems. Generally, BF slags are granulated and used in cement making due to their high cementitious properties, while steel slags are mostly dumped due to unfavourable physico-chemical conditions. The increasing dumping of steel slag not only occupies a large amount of land but also wastes resources and can potentially impact the environment through water pollution. Since BF slag contains little Fe, it can be used directly; it has found wide application in cement production, road construction, civil engineering work, fertilizer production, landfill daily cover, and soil reclamation outside the iron and steel making process.
Keywords: steel slag, river sand, granulated slag, environmental
Procedia PDF Downloads 244
5163 Group Sequential Covariate-Adjusted Response Adaptive Designs for Survival Outcomes
Authors: Yaxian Chen, Yeonhee Park
Abstract:
Driven by evolving FDA recommendations, modern clinical trials demand innovative designs that strike a balance between statistical rigor and ethical considerations. Covariate-adjusted response-adaptive (CARA) designs bridge this gap by utilizing patient attributes and responses to skew treatment allocation in favor of the treatment that is best for an individual patient's profile. However, existing CARA designs for survival outcomes often hinge on specific parametric models, constraining their applicability in clinical practice. In this article, we address this limitation by introducing a CARA design for survival outcomes (CARAS) based on the Cox model and a variance estimator. This method addresses issues of model misspecification and enhances the flexibility of the design. We also propose a group sequential overlap-weighted log-rank test to preserve the type I error rate in the context of group sequential trials, and we use extensive simulation studies to demonstrate the clinical benefit, statistical efficiency, and robustness to model misspecification of the proposed method compared to traditional randomized controlled trial designs and response-adaptive randomization designs.
Keywords: Cox model, log-rank test, optimal allocation ratio, overlap weight, survival outcome
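The overlap-weighted group sequential test proposed here builds on the standard two-sample log-rank statistic. As an illustration only (the paper's overlap weighting and sequential boundaries are not reproduced here), a minimal unweighted log-rank computation might look like:

```python
def logrank_statistic(times1, events1, times2, events2):
    """Unweighted two-sample log-rank statistic: sum of (O - E) over
    distinct event times, plus its hypergeometric variance."""
    data = [(t, e, 0) for t, e in zip(times1, events1)] + \
           [(t, e, 1) for t, e in zip(times2, events2)]
    event_times = sorted({t for t, e, _ in data if e == 1})
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        at_risk = [(tt, ee, g) for tt, ee, g in data if tt >= t]
        n = len(at_risk)                                   # total at risk
        n1 = sum(1 for _, _, g in at_risk if g == 0)       # group-1 at risk
        d = sum(1 for tt, ee, _ in at_risk if tt == t and ee == 1)
        d1 = sum(1 for tt, ee, g in at_risk if tt == t and ee == 1 and g == 0)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e, var
```

Under the null hypothesis of identical survival, the summed O - E terms are zero, and the statistic divided by the square root of the variance is approximately standard normal.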
Procedia PDF Downloads 64
5162 MHD Chemically Reacting Viscous Fluid Flow towards a Vertical Surface with Slip and Convective Boundary Conditions
Authors: Ibrahim Yakubu Seini, Oluwole Daniel Makinde
Abstract:
A study of MHD chemically reacting viscous fluid flow towards a vertical surface with slip and convective boundary conditions has been conducted. The temperature, the chemical species concentration of the surface, and the velocity of the external flow are assumed to vary linearly with the distance from the vertical surface. The governing differential equations are modeled and transformed into systems of ordinary differential equations, which are then solved numerically by a shooting method. The effects of various parameters on the heat and mass transfer characteristics are discussed. Graphical results are presented for the velocity, temperature, and concentration profiles, whilst the skin-friction coefficient and the rates of heat and mass transfer near the surface are presented in tables and discussed. The results revealed that increasing the strength of the magnetic field increases the skin-friction coefficient and the rates of heat and mass transfer toward the surface. The velocity profiles are increased towards the surface due to the presence of the Lorentz force, which attracts the fluid particles near the surface. The rate of chemical reaction is seen to decrease the concentration boundary layer near the surface due to the destructive chemical reaction occurring there.
Keywords: boundary layer, surface slip, MHD flow, chemical reaction, heat transfer, mass transfer
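The shooting method used above converts a boundary value problem into an initial value problem with an unknown initial slope, then searches for the slope that hits the far boundary condition. A minimal sketch on a toy problem (y'' = y with y(0) = 0, y(1) = 1, not the paper's MHD equations) illustrates the idea:

```python
def shoot(slope, n=200):
    """Integrate y'' = y from x=0 to 1 with y(0)=0, y'(0)=slope (RK4)."""
    h = 1.0 / n
    y, v = 0.0, slope          # v = y'
    f = lambda y, v: (v, y)    # system form: (y)' = v, (v)' = y
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

def shooting_bvp(target=1.0, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on the unknown initial slope until y(1) hits the target."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this toy problem the exact solution is y(x) = sinh(x)/sinh(1), so the recovered slope should match 1/sinh(1). The similarity-transformed momentum, energy, and concentration equations of the abstract are handled the same way, only with a larger coupled system.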
Procedia PDF Downloads 539
5161 A Comparative Study of the Techno-Economic Performance of the Linear Fresnel Reflector Using Direct and Indirect Steam Generation: A Case Study under High Direct Normal Irradiance
Authors: Ahmed Aljudaya, Derek Ingham, Lin Ma, Kevin Hughes, Mohammed Pourkashanian
Abstract:
Researchers, power companies, and state politicians have given concentrated solar power (CSP) much attention due to its capacity to generate large amounts of electricity while overcoming the intermittent nature of solar resources. The Linear Fresnel Reflector (LFR) is a CSP technology known for being inexpensive and having a low land use factor, but also for suffering from low optical efficiency. The LFR has been considered a cost-effective alternative to the Parabolic Trough Collector (PTC) because of its simple design, which often outweighs its lower efficiency. The LFR has been found to be a promising option for directly producing steam for a thermal cycle in order to generate low-cost electricity, but it has also been shown to be promising for indirect steam generation. The purpose of this analysis is to compare the annual performance of Direct Steam Generation (DSG) and Indirect Steam Generation (ISG) in LFR power plants using molten salt and other Heat Transfer Fluids (HTF) to investigate their technical and economic effects. A 50 MWe solar-only system is examined as a case study for both steam production methods under extreme weather conditions. In addition, a parametric analysis is carried out to determine the optimal solar field size that provides the lowest Levelized Cost of Electricity (LCOE) while achieving the highest technical performance. Optimizing the solar field size yields a solar multiple (SM) between 1.2 and 1.5, achieving an LCOE as low as 9 cents/kWh for direct steam generation with the linear Fresnel reflector. In addition, the power plant is capable of producing around 141 GWh annually with a capacity factor of up to 36%, whereas the ISG produces less energy at a higher cost.
The optimization results show that the DSG outperforms the ISG, producing around 3% more annual energy at a 2% lower LCOE and 28% less capital cost.
Keywords: concentrated solar power, levelized cost of electricity, linear Fresnel reflectors, steam generation
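The LCOE figures above come from a detailed techno-economic model; as a sketch of the underlying arithmetic only, the levelized cost can be approximated by annualizing capital with a capital recovery factor. The cost inputs in the usage example are hypothetical, not the paper's:

```python
def crf(rate, years):
    """Capital recovery factor for discount rate `rate` over `years`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capex, om_per_year, annual_mwh, rate=0.07, years=25):
    """Simplified LCOE in $/MWh: annualized capital plus O&M,
    divided by annual energy production."""
    return (capex * crf(rate, years) + om_per_year) / annual_mwh
```

For example, assuming a $250M capital cost, $8M/yr O&M, a 7% discount rate over 25 years, and the 141 GWh (141,000 MWh) annual output reported above, `lcoe(250e6, 8e6, 141000)` gives an LCOE of roughly $209/MWh; the real model additionally accounts for degradation, insurance, and financing structure.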
Procedia PDF Downloads 111
5160 Application of Applied Behavior Analysis Treatment to Children with Down Syndrome
Authors: Olha Yarova
Abstract:
This study is a collaborative project between the American University of Central Asia and 'Sunterra', a parent association of children with Down syndrome, that took place in Bishkek, Kyrgyzstan. The purpose of the study was to explore whether principles and techniques of applied behavior analysis (ABA) could be used to teach children with Down syndrome socially significant behaviors. ABA is considered to be one of the most effective treatments for children with autism, but little research has been done on the particularities of applying ABA to children with Down syndrome. The data for the study were obtained through clinical observations, work with children with Down syndrome, and interviews with their mothers. The results show that many ABA principles make work with children with Down syndrome more effective. Although such children very rarely demonstrate aggressive behavior, they show many escape-driven and attention-seeking behaviors that are reinforced by their parents and educators. Thus, a functional assessment can be conducted to assess the function of a problem behavior and to determine appropriate treatment. Prompting and prompt fading should be used to develop receptive and expressive language skills and enhance motor development. Even though many children with Down syndrome work for praise, it is still relevant to use tangible reinforcers and to know how to fade them out. Based on the results of the study, a training program for parents of children with Down syndrome will be developed in Kyrgyzstan, a country where children with Down syndrome are not accepted into regular kindergartens and where doctors in maternity hospitals tell parents that their child will never talk, walk, or recognize them.
Keywords: Down syndrome, applied behavior analysis, functional assessment, problem behavior, reinforcement
Procedia PDF Downloads 275
5159 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites
Authors: Mohammad M. Khan, Pankaj Agarwal
Abstract:
The present investigation deals with the microstructures, mechanical properties, and detailed wear characteristics of in situ TiC particle-reinforced zinc aluminum-based metal matrix composites. The composites have been synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. The former was also lower in ultimate tensile strength and ductility than the matrix alloy, which could be attributed to the use of coarser dispersoid particles and larger interparticle spacing. Reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and matrix were observed. The composite exhibited a predominantly brittle mode of fracture with microcracking in the dispersoid phase, indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions, (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under conditions of varying abrasive size, traversal distance, and load, and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear test, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite compared to the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was observed to be smooth with shallow grooves.
During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles. The microcracking tendency was also enhanced by the reinforcement in the matrix, but its negative effect was outweighed by the abrasion resistance of the dispersoid. As a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while subsurface regions revealed limited plastic deformation along with microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be attributed to the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase along with its microcracking and limited plastic deformation in the vicinity of the abraded surfaces.
Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM
Procedia PDF Downloads 150
5158 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems
Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano
Abstract:
The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification processes. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrated that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring
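The anomaly detection here relies on ARIMA, an LSTM autoencoder, and Prophet; as a much simpler illustrative baseline (not one of the three models compared in the paper), anomalies in a sensor stream can be flagged with a rolling z-score test against recent history:

```python
import statistics

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.stdev(hist)
        if sd > 0 and abs(series[i] - mu) > threshold * sd:
            anomalies.append(i)
    return anomalies
```

A learned model such as the LSTM autoencoder plays the same role but replaces the rolling mean with a reconstruction of the expected signal, so it can capture seasonality and cross-sensor correlations that a per-point z-score misses.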
Procedia PDF Downloads 151
5157 Copper (II) Complex of New Tetradentate Asymmetrical Schiff Base Ligand: Synthesis, Characterization, and Catecholase-Mimetic Activity
Authors: Cahit Demetgul, Sahin Bayraktar, Neslihan Beyazit
Abstract:
Metalloenzymes are enzyme proteins containing metal ions, which are directly bound to the protein or to enzyme-bound non-protein components. One of the major metalloenzymes that play a key role in oxidation reactions is catechol oxidase, which shows catecholase activity, i.e., the oxidation of a broad range of catechols to quinones through the four-electron reduction of molecular oxygen to water. Studies on model compounds mimicking catecholase activity are very useful and promising for the development of new, more efficient bioinspired catalysts for in vitro oxidation reactions. In this study, a new tetradentate asymmetrical Schiff base and its Cu(II) complex were synthesized by condensation of 4-nitro-1,2-phenylenediamine with 6-formyl-7-hydroxy-5-methoxy-2-methylbenzopyran-4-one and by using an appropriate Cu(II) salt, respectively. The prepared compounds were characterized by elemental analysis, FT-IR, NMR, UV-Vis, and magnetic susceptibility. The catecholase-mimicking activity of the new Schiff base Cu(II) complex was evaluated for the oxidation of 3,5-di-tert-butylcatechol (3,5-DTBC) in methanol at 25 °C, with the electronic spectra recorded at different time intervals. The yield of the quinone (3,5-DTBQ) was determined from the absorbance of the resulting solution measured at 400 nm. The compatibility of the catalytic reaction with Michaelis-Menten kinetics was also investigated. In conclusion, we have found that our new Schiff base Cu(II) complex presents a significant capacity to catalyze the oxidation of catechol to o-quinone.
Keywords: catecholase activity, Michaelis-Menten kinetics, Schiff base, transition metals
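Where catalytic rate data fit Michaelis-Menten kinetics, Vmax and Km are commonly extracted from a linearization. A minimal sketch (with synthetic rate data, not the measured 3,5-DTBC values) using the Lineweaver-Burk double-reciprocal fit, 1/v = (Km/Vmax)(1/[S]) + 1/Vmax:

```python
def fit_michaelis_menten(S, v):
    """Estimate (Vmax, Km) from substrate concentrations S and rates v
    via a least-squares line through the double-reciprocal plot."""
    xs = [1.0 / s for s in S]
    ys = [1.0 / r for r in v]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx   # = 1/Vmax
    vmax = 1.0 / intercept
    km = slope * vmax             # slope = Km/Vmax
    return vmax, km
```

In practice a nonlinear fit of v = Vmax[S]/(Km + [S]) is preferred for noisy data, since the reciprocal transform inflates errors at low substrate concentrations; the linear form is shown here only because it makes the parameter extraction transparent.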
Procedia PDF Downloads 310
5156 Effect of the Binary and Ternary Exchanges on Crystallinity and Textural Properties of X Zeolites
Authors: H. Hammoudi, S. Bendenia, K. Marouf-Khelifa, R. Marouf, J. Schott, A. Khelifa
Abstract:
The ionic exchange of the NaX zeolite by Cu2+ and/or Zn2+ cations was carried out progressively while following the evolution of some of its characteristics: crystallinity by X-ray diffraction, profile of isotherms, RI criterion, isosteric adsorption heat, and microporous volume, using both the Dubinin-Radushkevich (DR) equation and the t-plot through the Lippens-de Boer method, which also makes it possible to determine the external surface area. Results show that the cationic exchange process, in the case of Cu2+ introduced at higher exchange degrees, is accompanied by crystalline degradation for Cu(x)X, in contrast to Zn2+-exchanged zeolite X. This degradation occurs without significant formation of mesopores, because the RI criterion values were found to be much lower than 2.2. A comparison between the binary and ternary exchanges shows that the curves of CuZn(x)X are clearly below those of Zn(x)X and Cu(x)X, whatever the examined parameter. On the other hand, the curves relating to CuZn(x)X tend towards those of Cu(x)X. This again confirms the sensitivity of the crystalline structure of CuZn(x)X to the introduction of Cu2+ cations. An original result is the distortion of the zeolitic framework of X zeolites at intermediate exchange degrees, when Cu2+ competes with another divalent cation, such as Zn2+, for the occupancy of sites distributed within the zeolitic cavities. In other words, the ternary exchange accentuates the crystalline degradation of X zeolites. Another unexpected result is the absence of correlation between crystal damage and the external surface area.
Keywords: adsorption, crystallinity, ion exchange, zeolite
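The microporous volume analysis above uses the Dubinin-Radushkevich equation, ln W = ln W0 - (A/E)^2 with the adsorption potential A = RT ln(p0/p), which is linear in ln W versus A^2. A sketch of that fit on synthetic isotherm points (illustrative values, not the measured zeolite data):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dr_fit(rel_pressures, uptakes, T=77.0):
    """Least-squares fit of ln W = ln W0 - (A/E)^2, A = R*T*ln(p0/p).
    Returns the micropore volume W0 and characteristic energy E."""
    xs = [(R * T * math.log(1.0 / p)) ** 2 for p in rel_pressures]
    ys = [math.log(w) for w in uptakes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)   # slope = -1/E^2
    ln_w0 = my - slope * mx
    return math.exp(ln_w0), math.sqrt(-1.0 / slope)
```

The intercept of the linear plot gives the micropore volume W0 and the slope gives the characteristic energy E, which is how the DR treatment cited in the abstract separates micropore filling from external-surface adsorption.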
Procedia PDF Downloads 258
5155 Enhancing Patch Time Series Transformer with Wavelet Transform for Improved Stock Prediction
Authors: Cheng-yu Hsieh, Bo Zhang, Ahmed Hambaba
Abstract:
Stock market prediction has long been an area of interest for both expert analysts and investors, driven by its complexity and the noisy, volatile conditions under which it operates. This research examines the efficacy of combining the Patch Time Series Transformer (PatchTST) with wavelet transforms, specifically the Haar and Daubechies wavelets, in forecasting the adjusted closing price of the S&P 500 index for the following day. By comparing the performance of the augmented PatchTST models with traditional predictive models such as Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Transformers, this study highlights significant enhancements in prediction accuracy. The integration of the Daubechies wavelet with PatchTST notably excels, surpassing other configurations and conventional models in terms of Mean Absolute Error (MAE) and Mean Squared Error (MSE). The success of the PatchTST model paired with the Daubechies wavelet is attributed to its superior capability to extract detailed signal information and eliminate irrelevant noise, proving to be an effective approach for financial time series forecasting.
Keywords: deep learning, financial forecasting, stock market prediction, patch time series transformer, wavelet transform
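The wavelet preprocessing step can be sketched with the simplest case, a one-level Haar decomposition, which splits a price series into a smoothed approximation and a detail (noise) band. The study also used Daubechies wavelets, whose filters are longer but applied the same way:

```python
import math

def haar_dwt(signal):
    """One-level Haar transform: pairwise sums (approximation) and
    differences (detail), each scaled by 1/sqrt(2). Even length assumed."""
    s = math.sqrt(2.0)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

def haar_idwt(approx, detail):
    """Invert the one-level Haar transform (perfect reconstruction)."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out
```

Because the transform is invertible, the model can be fed the denoised approximation (or both bands as separate channels) without losing information, which is the essence of the wavelet-augmented PatchTST input.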
Procedia PDF Downloads 50
5154 Knowledge-Driven Decision Support System Based on Knowledge Warehouse and Data Mining by Improving Apriori Algorithm with Fuzzy Logic
Authors: Pejman Hosseinioun, Hasan Shakeri, Ghasem Ghorbanirostam
Abstract:
In recent years, research on knowledge sources, decision support systems, data mining, and the process of knowledge discovery in databases has grown in importance, and each of these aspects is considered to affect the others. In this article, we merge an information source and a knowledge source to propose a knowledge-based system, built on the storage and retrieval of knowledge, to manage information and improve decision making and resource use. We use data mining, specifically the Apriori algorithm, in the knowledge discovery process. One of the problems of the Apriori algorithm is that the user must specify the minimum support threshold. Imagine that a user wants to apply the Apriori algorithm to a database with millions of transactions. The user clearly cannot have knowledge of all the transactions in that database, and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to cluster the data before applying the Apriori algorithm to the data in the database, and we also try to suggest the most suitable threshold to the user automatically.
Keywords: decision support system, data mining, knowledge discovery, data discovery, fuzzy logic
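For reference, the baseline Apriori frequent-itemset step that the article builds on can be sketched as follows; `min_support` here is exactly the user-supplied threshold that the proposed fuzzy-clustering extension tries to infer automatically:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return every itemset whose support (fraction of transactions
    containing it) is at least min_support, built level by level."""
    n = len(transactions)
    tsets = [set(t) for t in transactions]
    items = sorted({i for t in tsets for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    k = 1
    while level:
        counts = {c: sum(1 for t in tsets if c <= t) for c in level}
        survivors = {c: cnt / n for c, cnt in counts.items()
                     if cnt / n >= min_support}
        frequent.update(survivors)
        # join k-itemsets into (k+1)-candidates, pruning any candidate
        # that has an infrequent k-subset (the Apriori property)
        prev = list(survivors)
        level = list({a | b for a, b in combinations(prev, 2)
                      if len(a | b) == k + 1
                      and all(frozenset(s) in survivors
                              for s in combinations(a | b, k))})
        k += 1
    return frequent
```

Any threshold-selection scheme, fuzzy or otherwise, plugs in by choosing `min_support` per cluster of transactions rather than globally.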
Procedia PDF Downloads 335
5153 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely recast in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, particularly with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally needs to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
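The core idea, fixed random filters of varying sizes where each filter contributes only one trainable scalar, can be sketched in one dimension. This is a hypothetical simplification for illustration; the paper's model is a full 2-D deep CNN in which only the per-filter scalars are learned:

```python
import random
import statistics

def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation) of signal with kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def random_kernel_layer(signal, sizes, seed=0):
    """Apply fixed random filters of the given sizes; each response is
    scaled by a single per-filter scalar, here initialized from the
    filter's standard deviation as in the abstract's weighting scheme."""
    rng = random.Random(seed)
    responses = []
    for k in sizes:
        kernel = [rng.gauss(0.0, 1.0) for _ in range(k)]  # frozen filter
        scale = statistics.pstdev(kernel)                  # trainable scalar
        responses.append([scale * r for r in conv1d_valid(signal, kernel)])
    return responses
```

Since the kernels themselves are frozen, back-propagation only touches the scalars, which is why the training cost does not grow with filter size even though the feed-forward convolutions do.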
Procedia PDF Downloads 290
5152 Understanding Cognitive Fatigue From FMRI Scans With Self-supervised Learning
Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie
Abstract:
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six different levels, ranging from no fatigue to extreme fatigue. We built a spatio-temporal model that uses convolutional neural networks (CNNs) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public dataset BOLD5000 and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from Traumatic Brain Injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique for analyzing cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue
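MoCo pre-training optimizes a contrastive (InfoNCE) objective over a query, one positive key, and many negative keys. A minimal sketch of that loss on plain Python vectors (the actual method computes it on encoder features, with negatives drawn from a momentum-updated key encoder's queue):

```python
import math

def info_nce(query, positive, negatives, temperature=0.07):
    """InfoNCE loss: negative log-softmax of the positive key's
    similarity among all keys, with temperature scaling."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(query, positive) / temperature] + \
             [dot(query, n) / temperature for n in negatives]
    m = max(logits)                                   # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)
```

When the positive is no more similar to the query than the negatives, the loss sits at log(N) for N keys; it approaches zero as the query-positive similarity dominates, which is what drives the encoder to learn discriminative features without labels.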
Procedia PDF Downloads 189
5151 Modeling and Experimental Verification of Crystal Growth Kinetics in Glass Forming Alloys
Authors: Peter K. Galenko, Stefanie Koch, Markus Rettenmayr, Robert Wonneberger, Evgeny V. Kharanzhevskiy, Maria Zamoryanskaya, Vladimir Ankudinov
Abstract:
We analyze the structure of undercooled melts, crystal growth kinetics, and the amorphous/crystalline microstructure of rapidly solidifying glass-forming Pd-based and CuZr-based alloys. A dendrite growth model is developed using a combination of the kinetic phase-field model and a mesoscopic sharp-interface model. The model predicts features of crystallization kinetics in alloys from thermodynamically controlled growth (governed by the Gibbs free energy change on solidification) to the kinetically limited regime (governed by atomic attachment-detachment processes at the solid/liquid interface). Comparing the critical undercoolings observed in the crystallization kinetics with experimental data on melt viscosity, atomistic simulation data on liquid microstructure, and the theoretically predicted dendrite growth velocity allows us to conclude that the dendrite growth kinetics strongly depend on changes in the cluster structure of the melt. The data obtained from the theoretical and experimental investigations are used to interpret the microstructure of samples processed in the electromagnetic levitator on board the International Space Station within the framework of the projects "MULTIPHAS" (European Space Agency and German Aerospace Center, 50WM1941) and "KINETIKA" (ROSKOSMOS).
Keywords: dendrite, kinetics, model, solidification
Procedia PDF Downloads 120
5150 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine
Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are among the most efficient internal combustion engines in terms of efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high torque, low speed engines the requirements are altogether different. These types of engines are mostly used in the maritime industry, agriculture, static engines, compressor engines, etc. They are neglected quite often and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in high torque, low speed diesel engines is of great importance. Evaporation in the combustion chamber has a rigorous effect on the efficiency of the engine. In this paper, multiphase evaporation of fuel is modeled for a high torque, low speed engine using computational fluid dynamics (CFD) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations. The O'Rourke model is used to model the evaporation phases. The results obtained showed a considerable effect on the efficiency of the engine. The evaporation rate of a fuel droplet increases with increasing vapor pressure. An appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed by modeling the distinct evaporation phases. This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
Keywords: diesel fuel, CFD, evaporation, multiphase
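The droplet-shrinkage argument above can be illustrated with the classical d²-law of droplet evaporation, a much simpler relative of the O'Rourke model used in the paper; a minimal sketch, assuming illustrative values for the initial diameter and evaporation constants (none taken from the study):

```python
# Classical d^2-law of droplet evaporation: the squared diameter decreases
# linearly in time, d(t)^2 = d0^2 - K*t, until the droplet disappears.
# Convective heating is often folded into a larger effective K, which is why
# adding convection shrinks droplets faster.

def droplet_diameter(d0_m, K, t_s):
    """Droplet diameter (m) after t_s seconds under the d^2-law.
    K is an effective evaporation constant in m^2/s (illustrative here)."""
    d_sq = d0_m ** 2 - K * t_s
    return max(d_sq, 0.0) ** 0.5

d0 = 50e-6        # 50 micron droplet (assumed, not from the study)
K_base = 2.0e-7   # evaporation constant without convection (assumed)
K_conv = 4.0e-7   # larger effective constant with convective heating (assumed)

lifetime = d0 ** 2 / K_base                    # time to complete evaporation
d_base = droplet_diameter(d0, K_base, 5e-3)    # diameter 5 ms into the spray
d_conv = droplet_diameter(d0, K_conv, 5e-3)    # smaller: convection speeds evaporation
```

Higher vapor pressure and convective heat transfer both raise the effective K and shorten the droplet lifetime, consistent with the trend reported above.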
Procedia PDF Downloads 343
5149 Temporal Profile of T2 MRI and 1H-MRS in the MDX Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K. Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is an X-linked, lethal muscle wasting disease for which there is currently no treatment that effectively prevents the muscle necrosis and progressive muscle loss. DMD is among the most common inherited diseases, affecting around 1/3500 live male births. MDX (X-linked muscular dystrophy) mice only partially recapitulate the disease in humans, displaying muscle weakness, muscle damage and edema during a period deemed the “critical period”, when these mice go through cycles of muscular degeneration and regeneration. Although the MDX mutant mouse has been extensively studied as a model for DMD, to date an extensive temporal, non-invasive imaging profile utilizing magnetic resonance imaging (MRI) and 1H-magnetic resonance spectroscopy (1H-MRS) has not been performed. In addition, longitudinal imaging characterization has not coincided with attempts to exacerbate the progressive muscle damage by exercise. In this study we employed an 11.7 T small animal MRI scanner to characterize the MRI and MRS profile of MDX mice longitudinally over a 12-month period during which the mice were subjected to exercise. Male mutant MDX mice (n=15) and male wild-type mice (n=15) were subjected to a chronic exercise regime of treadmill walking (30 min/session) bi-weekly over the whole 12-month follow-up period. Mouse gastrocnemius and tibialis anterior muscles were profiled with baseline T2-MRI and 1H-MRS at 6 weeks of age. Imaging and spectroscopy were repeated at 3, 6, 9 and 12 months of age. Plasma creatine kinase (CK) level measurements coincided with the time-points for T2-MRI and 1H-MRS, and were also taken after the “critical period” at 10 weeks of age. The results obtained from this study indicate that chronic exercise extends the dystrophic phenotype of MDX mice, as evidenced by T2-MRI and 1H-MRS.
T2-MRI revealed the extent and location of muscle damage in the gastrocnemius and tibialis anterior muscles as hyperintensities (lesions and edema) in exercised MDX mice over the follow-up period. The magnitude of the muscle damage remained stable over time in exercised mice. No evident fat infiltration or accumulation in the muscle tissues was seen at any time-point in exercised MDX mice. Creatine, choline and taurine levels evaluated by 1H-MRS from the same muscles were found significantly decreased at each time-point. Extramyocellular (EMCL) and intramyocellular lipids (IMCL) did not change in exercised mice, supporting the findings on fat content from the anatomical T2-MRI scans. Creatine kinase levels were found to be significantly higher in exercised MDX mice during the follow-up period and, importantly, CK levels remained stable over the whole follow-up period. Taken together, we have described here a longitudinal profile of muscle damage and muscle metabolic changes in MDX mice subjected to chronic exercise. The extent of the muscle damage by T2-MRI was found to be stable through the follow-up period in the muscles examined. In addition, the metabolic profile, especially creatine, choline and taurine levels in muscles, was found to be sustained between time-points. The anatomical muscle damage evaluated by T2-MRI was supported by plasma CK levels, which remained stable over the follow-up period. These findings show that non-invasive imaging and spectroscopy can be used effectively to evaluate chronic muscle pathology. These techniques can also be used to evaluate the effect of various manipulations, such as exercise here, on the phenotype of the mice. Many of the findings we present here are translatable to clinical disease, such as decreased creatine, choline and taurine levels in muscles.
Imaging by T2-MRI and 1H-MRS also revealed that fat content and extramyocellular and intramyocellular lipids, respectively, are not changed in MDX mice, which is in contrast to the clinical manifestation of Duchenne muscular dystrophy. The findings show that non-invasive imaging can be used to characterize the phenotype of the MDX model and its translatability to clinical disease, and to study events that have traditionally not been examined, such as the sustained muscle damage caused by rigorous exercise after the “critical period”. The ability of this model to display sustained damage beyond the spontaneous “critical period”, and in turn to study drug effects on this extended phenotype, will increase the value of the MDX mouse model as a tool to study therapies and treatments aimed at DMD and associated diseases.
Keywords: 1H-MRS, MRI, muscular dystrophy, mouse model
Procedia PDF Downloads 357
5148 Comparison of Dose Rate and Energy Dependence of Soft Tissue Equivalence Dosimeter with Electron and Photon Beams Using Magnetic Resonance Imaging
Authors: Bakhtiar Azadbakht, Karim Adinehvand, Amin Sahebnasagh
Abstract:
The purpose of this study was to evaluate the dependence of the PAGAT polymer gel dosimeter 1/T2 response on different electron and photon energies, as well as on different mean dose rates, for a standard clinically used Co-60 therapy unit and an Elekta linear accelerator. A multi-echo sequence with 32 equidistant echoes was used for the evaluation of the irradiated polymer gel dosimeters. The optimal post-manufacture irradiation and post-imaging times were both determined to be one day. The sensitivity of the PAGAT polymer gel dosimeter to irradiation with photon and electron beams is represented by the slope of the calibration curve in the linear region, measured for each modality. The response of PAGAT gel to photon and electron beams is very similar in the lower dose region. The R2-dose response was linear up to 30 Gy. For electron beams the R2-dose response is not exact for doses below 3 Gy; for photon beams it is not exact below 2 Gy. Dosimeter energy dependence was studied for electron energies of 4, 12 and 18 MeV and photon energies of 1.25, 4, 6 and 18 MV. Dose rate dependence was studied in a 6 MeV electron beam and a 6 MV photon beam using dose rates of 80, 160, 240, 320, 400, and 480 cGy/min. Evaluation of the dosimeters was performed on a Siemens Symphony (Germany) 1.5 T scanner in the head coil. In this study, no trend in the polymer gel dosimeter 1/T2 response was found as a function of mean dose rate or energy for electron and photon beams.
Keywords: polymer gels, PAGAT gel, electron and photon beams, MRI
Procedia PDF Downloads 473
5147 Miniaturization of I-Slot Antenna with Improved Efficiency and Gain
Authors: Mondher Labidi, Fethi Choubani
Abstract:
In this paper, a novel antenna miniaturization technique using an I-slot is proposed. Using this technique, the antenna gain can be increased from 4 dB (antenna only) to 6.6 dB for the proposed I-slot antenna, and a frequency shift of about 0.45 GHz to 1 GHz is obtained. A reduction of the antenna size of about 38% is also achieved for operation in the Wi-Fi (2.45 GHz) band. Moreover, the frequency shift can be controlled by changing the position or the length of the I-slot. Finally, the proposed miniature antenna with improved radiation efficiency and gain was built and tested.
Keywords: slot antenna, miniaturization, RF, electrical equivalent circuit (EEC)
Procedia PDF Downloads 286
5146 Characterization of Transesterification Activity on Thermostable Lipase (LK1) From Local Isolate
Authors: Luxy Grebers Swend Sinaga, Akhmaloka
Abstract:
The global energy crisis, triggered by declining fossil fuel reserves and exacerbated by population growth and increasing energy demand, has driven the development of renewable energy sources. One of the green energy alternatives being developed is biodiesel. Transesterification is at the core of biodiesel production, where fatty acids in oil are converted into methyl esters with the aid of a catalyst. Lipases exhibit high activity and stability during catalysis, especially under harsh conditions. Lipase (Lk1), isolated from organic waste compost at the Bandung Institute of Technology, Bandung, West Java, shows promising potential in this field. The thermostable lipase was purified using Ni-NTA affinity chromatography, followed by SDS-PAGE analysis for purity confirmation. Characterizing the transesterification activity of Lk1 is essential for assessing its effectiveness in converting oil into biodiesel, including methyl esters. The results of this study showed that Lk1 exhibited the highest activity on a methyl palmitate substrate, with an optimum temperature of 60°C, very stable activity in the non-polar solvent n-hexane, and the ability to maintain its optimum activity for up to 1 hour. These characteristics make Lk1 highly suitable for biodiesel production, as it meets the main criteria for the transesterification process in producing renewable energy.
Keywords: biodiesel, lipase Lk1, transesterification, renewable energy, thermostability
Procedia PDF Downloads 24
5145 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware are getting smarter, so smartphone-based human activity recognition can be made more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector of time and frequency domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier was trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
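A minimal sketch of such a pipeline (feature selection followed by an SVM), using synthetic data in place of the actual 561-feature smartphone dataset; the selector, kernel, and parameters below are illustrative assumptions, not the study's tuned values:

```python
# Feature selection + SVM classification, in the spirit of the abstract above.
# Synthetic data stands in for the smartphone dataset (30 volunteers, 6 ADLs,
# 561 time/frequency-domain features); nothing here reproduces the real data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the accelerometer/gyroscope feature vectors.
X, y = make_classification(n_samples=600, n_features=561, n_informative=40,
                           n_classes=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Prune the intractable feature set, scale, then fit the SVM.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=50),   # keep 50 features (assumed)
                    SVC(kernel="rbf", C=10.0))       # illustrative parameters
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Pruning first keeps the SVM from fitting noise in the hundreds of redundant features, which is the motivation the abstract gives for its feature selection step.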
Procedia PDF Downloads 144
5144 A Study on Aquatic Bycatch Mortality Estimation Due to Prawn Seed Collection and Alteration of Collection Method through Sustainable Practices in Selected Areas of Sundarban Biosphere Reserve (SBR), India
Authors: Samrat Paul, Satyajit Pahari, Krishnendu Basak, Amitava Roy
Abstract:
Fishing is one of the pivotal livelihood activities, especially in developing countries, and has been considered an important occupation for human society since the era when human settlement began. In simple terms, the non-target catch of any species during fishing can be considered ‘bycatch’, and fishing bycatch is neither a new fishery management issue nor a new problem. Sundarban is one of the world’s largest mangrove lands, extending over 10,200 sq. km in India and Bangladesh. The resources of this largest mangrove biome are used commercially by the local inhabitants, especially forest fringe villagers (FFVs), to sustain their livelihoods. In Sundarban, over-fishing, especially the collection of post-larvae of wild Penaeus monodon, is one of the major concerns: during the collection of P. monodon, different aquatic species are destroyed as a result of bycatch mortality, which changes productivity and may negatively impact the entire biodiversity of the ecosystem. Wild prawn seed collection gear, such as small mesh sized nets, poses a serious threat to aquatic stocks, as the catch is not limited to prawn seed larvae. Since prawn seed collection processes are inexpensive, require little monetary investment, and are lucrative, people are easily engaged here as a source of income. The Wildlife Trust of India’s (WTI) intervention in selected forest fringe villages of the Sundarban Tiger Reserve (STR) was to estimate and reduce the mortality of aquatic bycatch by involving local communities in a newly developed release method and reducing their time engagement in prawn seed collection (PSC) through Alternate Income Generation (AIG). The study of taxonomic identification was conducted during the period of March to October 2019. Collected samples were preserved in 70% ethyl alcohol for identification, and all preserved bycatch samples were identified morphologically by the expertise of the Zoological Survey of India (ZSI), Kolkata.
Around 74 different aquatic species were collected: 11 molluscs, 41 fishes (of which 31 were identified), and 22 crustaceans (of which 18 were identified). Around 13 species belonging to different orders and families could not be identified morphologically, as they were collected at the juvenile stage. The study reveals that for every single prawn seed collected, eight individuals of associated fauna are lost. Zero bycatch mortality is not practical; rather, collectors should focus on bycatch reduction by avoiding capture, allowing escape, and reducing mortality, and must change their fishing method by increasing the net mesh size, which will avoid non-target captures. However, as the prawns are small (generally 1-1.5 inches in length), increasing the mesh size would make collection economically unprofitable for collectors. In this case, returning bycatch is considered one of the best ways to reduce bycatch mortality and is a more sustainable practice.
Keywords: bycatch mortality, biodiversity, mangrove biome resource, sustainable practice, Alternate Income Generation (AIG)
Procedia PDF Downloads 151
5143 Radical Technological Innovation - Comparison of a Critical Success Factors Framework with Existing Literature
Authors: Florian Wohlfeil, Orestis Terzidis, Louisa Hellmann
Abstract:
Radical technological innovations enable companies to reach strong market positions and are thus desirable. On the other hand, the innovation process is related to significant costs and risks. Hence, the knowledge of the factors that influence success is crucial for technology driven companies. In a previous study, we have developed a conceptual framework of 25 Critical Success Factors for radical technological innovations and mapped them to four main categories: Technology, Organization, Market, and Process. We refer to it as the Technology-Organization-Market-Process (TOMP) framework. Taking the TOMP framework as a reference model, we conducted a structured and focused literature review of eleven standard books on the topic of radical technological innovation. With this approach, we aim to evaluate, expand, and clarify the set of Critical Success Factors detailed in the TOMP framework. Overall, the set of factors and their allocation to the main categories of the TOMP framework could be confirmed. However, the factor organizational home is not emphasized and discussed in most of the reviewed literature. On the other hand, an additional factor that has not been part of the TOMP framework is described to be important – strategy fit. Furthermore, the factors strategic alliances and platform strategy appear in the literature but in a different context compared to the reference model.
Keywords: Critical Success Factors, radical technological innovation, TOMP framework, innovation process
Procedia PDF Downloads 659
5142 Study the Effects of Increasing Unsaturation in Palm Oil and Incorporation of Carbon Nanotubes on Resinous Properties
Authors: Muhammad R. Islam, Mohammad Dalour H. Beg, Saidatul S. Jamari
Abstract:
Considering that palm oil is a non-drying oil owing to its low iodine value, an attempt was made to increase the unsaturation in the fatty acid chains of palm oil for the preparation of alkyds. To increase the unsaturation in the palm oil, sulphuric acid (SA) and para-toluene sulphonic acid (PTSA) were used for the dehydration process prior to alcoholysis. The iodine number of the oil samples was checked as a measure of unsaturation by the Wijs method. Alkyd resin was prepared from the dehydrated palm oil by alcoholysis followed by an esterification reaction. To improve the film properties, 0.5 wt% multi-wall carbon nanotubes (MWCNTs) were used in manufacturing the polymeric film. The resins were characterized by various physico-chemical properties such as density, viscosity, iodine value, acid value, saponification value, etc. Structural elucidation was confirmed by Fourier transform infrared spectroscopy and proton nuclear magnetic resonance; the surfaces of the cured films were observed by scanning electron microscopy. In addition, pencil hardness and chemical resistivity were measured using standard methods. The effect of enhancing the unsaturation in the fatty acid chains was found to be significant. The resin prepared with dehydrated palm oil showed improved properties in hardness and chemical resistivity testing. The incorporation of MWCNTs enhanced the thermal stability and hardness of the films as well.
Keywords: alkyd resin, nano-coatings, dehydration, palm oil
Procedia PDF Downloads 310
5141 Quantitative and Qualitative Analysis of Randomized Controlled Trials in Physiotherapy from India
Authors: K. Hariohm, V. Prakash, J. Saravana Kumar
Abstract:
Introduction and Rationale: The increased scope of physiotherapy (PT) practice has also contributed to research in the field of PT. It is essential to determine the production and quality of clinical trials from India, since this may reflect the scientific growth of the profession. These trends can be taken as a baseline to measure our performance and can also be used as a guideline for future trials. Objective: To quantify and qualitatively analyze the RCTs from India for the period 2000 to May 2013, and to classify the data for the information process. Methods: Studies were searched in the Medline database using the key terms “India”, “Indian”, “Physiotherapy”. Only clinical trials with PT authors were included. Trials outside the scope of PT practice and on animals were excluded. The retrieved valid articles were analyzed for publication year, type of participants, area of study, PEDro score, outcome measure domains of impairment, activity, and participation; ‘a priori’ sample size calculation; region; and explanation of the intervention. Result: 45 valid articles were retrieved for the period 2000 to May 2013. The majority of articles were done on symptomatic participants (81%). The most frequently studied conditions were low back pain (n=7) and diabetes (n=4). The PEDro scores had a mode of 5, an upper limit of 8, and a lower limit of 4. 97.2% of studies measured the outcome at the impairment level, 34% at the activity level, and 27.8% at the participation level. 29.7% of studies did an ‘a priori’ sample size calculation. The correlation between year trend and PEDro score was found to be not significant (p>.05). Individual PEDro item analysis showed: randomization (100%), concealment (33%), baseline (76%), blinding of subject, therapist, and assessor (9.1%, 0%, 10%), follow-up (89%), ITT (15%), between-group statistics (100%), measures of variance (88%). Conclusion: The trend shows an upward slope in terms of RCTs published from India, which is a good indicator. The qualitative analysis showed some gaps in clinical trial design, which can be expected to be addressed by future researchers.
Keywords: RCT, PEDro, physical therapy, rehabilitation
Procedia PDF Downloads 342
5140 On the Use of Machine Learning for Tamper Detection
Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode
Abstract:
The attack surface on computing devices is becoming very sophisticated, driven by the sheer increase in interconnected devices, expected to reach 50 billion by 2025, which makes it easier for adversaries to gain direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering. It consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks, and identify new types of malicious behavior. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and a further four forms of physical attack. In addition, a systematic threat modeling analysis and security validation were carried out, which indicated that the proposed solution provides better protection against threats including information leakage, loss of data, and disruption of operation.
Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT
Procedia PDF Downloads 153
5139 Possible Sulfur Induced Superconductivity in Nano-Diamond
Authors: J. Mona, R. R. da Silva, C.-L. Cheng, Y. Kopelevich
Abstract:
We report on a possible occurrence of superconductivity in 5 nm particle size diamond powders treated with sulfur (S) at 500 °C for 10 hours in ~10⁻² Torr vacuum. Superconducting-like magnetization hysteresis loops M(H) have been measured up to ~50 K by means of a SQUID magnetometer (Quantum Design). Both X-ray (Θ-2Θ geometry) and Raman spectroscopy analyses revealed no impurity or additional phases. Nevertheless, the measured Raman spectra are characteristic of diamond with embedded disordered carbon and/or graphitic fragments, suggesting a link to previous reports of local or surface superconductivity in graphite- and amorphous carbon-sulfur composites.
Keywords: nanodiamond, sulfur, superconductivity, Raman spectroscopy
Procedia PDF Downloads 493
5138 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases are the leading causes of mortality and morbidity in the world and, over the last few decades, accounting for a large number of deaths, have emerged as the most life-threatening disorders globally. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both a raw (unbalanced) and a sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy Maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data give a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the obtained results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
Keywords: cardiovascular diseases, machine learning, feature selection, SMOTE
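Two of the feature selection methods named above (chi-squared and RFE) can be sketched as follows; synthetic imbalanced data stands in for the Z-Alizadeh Sani dataset, and the classifier, feature counts, and scores are illustrative assumptions only:

```python
# Compare two feature selection methods on an imbalanced binary problem,
# in the spirit of the study above. Synthetic data stands in for the
# Z-Alizadeh Sani dataset; the numbers below are illustrative, not results.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Roughly 80/20 class imbalance, as is common in clinical datasets.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           weights=[0.8, 0.2], random_state=1)
X_pos = X - X.min(axis=0)  # chi2 requires non-negative feature values

selectors = {
    "chi2": SelectKBest(chi2, k=10).fit_transform(X_pos, y),
    "rfe":  RFE(LogisticRegression(max_iter=1000),
                n_features_to_select=10).fit_transform(X, y),
}
# Score each reduced feature set with cross-validated AUC.
scores = {name: cross_val_score(LogisticRegression(max_iter=1000),
                                Xs, y, cv=5, scoring="roc_auc").mean()
          for name, Xs in selectors.items()}
```

Balancing the classes (e.g. with SMOTE, as the study does) would be applied to the training folds before scoring; it is omitted here to keep the sketch to standard scikit-learn.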
Procedia PDF Downloads 118
5137 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials
Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita
Abstract:
Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel based particles (or capsules), films, etc. In these formulations, the polysaccharide based materials are able to provide local delivery of the loaded therapeutic agents, but this delivery can be rapid and not easily time-controllable due, in particular, to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent within their structure. The simplicity of production, their biocompatibility, their size and composition similar to those of cells, the possibility of size adjustment for specific applications, and the ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan based hydrogels were double crosslinked (ionically using sodium tripolyphosphate or sodium sulphate, and covalently using glutaraldehyde). It has been proven that liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as a model active substance for the delivery experiments.
Multi-lamellar vesicles (MLV) and small uni-lamellar vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposome evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at large time scales. Liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with more difficulty. This difference comes from the higher quantity of calcein present within the MLVs, in relation to their size. Modeling of the release kinetics curves was performed; the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each of them described by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very interesting tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides
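The kinetics models mentioned above take a simple closed form; a minimal sketch, with illustrative rate constants rather than values fitted to the calcein data:

```python
# Two release-kinetics models mentioned above, written as the fraction of drug
# released at time t (Mt/Minf). Rate constants here are illustrative
# assumptions, not values fitted to the calcein release curves.

def higuchi(t, kH):
    """Higuchi model: Mt/Minf = kH * sqrt(t) (diffusion-controlled release)."""
    return kH * t ** 0.5

def korsmeyer_peppas(t, k, n):
    """Korsmeyer-Peppas model: Mt/Minf = k * t**n.
    n close to 0.5 indicates Fickian diffusion; larger n, anomalous transport."""
    return k * t ** n

# Delayed, burst-free release shows small released fractions at early times:
released = [(t, higuchi(t, 0.08), korsmeyer_peppas(t, 0.05, 0.65))
            for t in (1.0, 4.0, 9.0)]   # times in hours (illustrative)
```

The Higuchi equation is the special case n = 0.5 of the Korsmeyer-Peppas law, which is why the fitted exponent n is commonly used to classify the transport mechanism in each release phase.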
Procedia PDF Downloads 226
5136 Evaluating Data Maturity in Riyadh's Nonprofit Sector: Insights Using the National Data Maturity Index (NDI)
Authors: Maryam Aloshan, Imam Mohammad Ibn Saud, Ahmad Khudair
Abstract:
This study assesses the data governance maturity of nonprofit organizations in Riyadh, Saudi Arabia, using the National Data Maturity Index (NDI) framework developed by the Saudi Data and Artificial Intelligence Authority (SDAIA). Employing a survey designed around the NDI model, data maturity levels were evaluated across 14 dimensions using a 5-point Likert scale. The results reveal a spectrum of maturity levels among the organizations surveyed: while some medium-sized associations reached the ‘Defined’ stage, others, including large associations, fell within the ‘Absence of Capabilities’ or ‘Building’ phases, with no organizations achieving the advanced ‘Established’ or ‘Pioneering’ levels. This variation suggests an emerging recognition of data governance but underscores the need for targeted interventions to bridge the maturity gap. The findings point to a significant opportunity to elevate data governance capabilities in Saudi nonprofits through customized capacity-building initiatives, including training, mentorship, and best practice sharing. This study contributes valuable insights into the digital transformation journey of the Saudi nonprofit sector, aligning with national goals for data-driven governance and organizational efficiency.
Keywords: nonprofit organizations, National Data Maturity Index (NDI), Saudi Arabia, SDAIA, data governance, data maturity
Procedia PDF Downloads 15
5135 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic algorithm based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG.
The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of each. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
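The rule-mining step described above can be sketched on a toy collection: pairwise term association rules are mined from document co-occurrences and ranked by confidence to propose expansion terms. The corpus, thresholds, and ranking below are illustrative assumptions, not the paper's SDA 95 setup (which further selects terms via a learned classifier):

```python
# Mine pairwise term association rules t1 -> t2 from a toy document collection
# and use the strongest rules to expand a query. Illustrative only: the real
# approach mines a generic basis of rules and selects terms with a classifier.
from collections import Counter
from itertools import combinations

docs = [
    {"information", "retrieval", "query", "expansion"},
    {"information", "retrieval", "index", "terms"},
    {"query", "expansion", "terms", "association"},
    {"association", "rules", "mining", "terms"},
    {"query", "terms", "retrieval"},
]

# Support counts for single terms and unordered term pairs.
term_count = Counter(t for d in docs for t in d)
pair_count = Counter(frozenset(p) for d in docs
                     for p in combinations(sorted(d), 2))

def rules_for(term, min_conf=0.5):
    """Rules term -> other, ranked by confidence = P(other | term)."""
    out = []
    for pair, n in pair_count.items():
        if term in pair:
            (other,) = pair - {term}
            conf = n / term_count[term]
            if conf >= min_conf:
                out.append((other, conf))
    return sorted(out, key=lambda x: -x[1])

# Expand the query {"query"} with its strongest associated terms.
expansion = [t for t, _ in rules_for("query")][:2]
```

On larger collections the rule set explodes combinatorially, which is exactly why the paper casts expansion-term selection as a learning problem instead of using raw confidence thresholds.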
Procedia PDF Downloads 325