Search results for: principal curve
1463 Deleterious SNPs Detection Using Machine Learning
Authors: Hamza Zidoum
Abstract:
This paper investigates the impact of human genetic variation on the function of human proteins using machine-learning algorithms. The Single-Nucleotide Polymorphism (SNP) represents the most common form of human genome variation. We focus on single amino-acid polymorphisms located in the coding region, as they can affect protein function, leading to pathologic phenotypic change. We use several supervised machine-learning methods to identify structural properties correlated with an increased risk of a missense mutation being damaging. SVM associated with Principal Component Analysis gives the best performance.
Keywords: single-nucleotide polymorphism, machine learning, feature selection, SVM
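As a hedged illustration of the abstract's best-performing combination (PCA feeding an SVM), the sketch below builds such a pipeline with scikit-learn; the feature matrix and labels are synthetic stand-ins, not the paper's SNP data or structural properties.

```python
# Illustrative sketch only: PCA + SVM, as in the abstract's best-performing
# combination. X and y are synthetic stand-ins, not the paper's SNP data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))   # 200 variants x 30 hypothetical structural features
X[:, 0] *= 3.0                   # one high-variance, informative feature
y = (X[:, 0] > 0).astype(int)    # toy damaging (1) vs neutral (0) labels

# PCA compresses the 30 features to 10 components before the RBF-kernel SVM
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=5)
print(scores.mean())
```

On real data, the number of retained components and the SVM kernel would be tuned by cross-validation rather than fixed as here.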
Procedia PDF Downloads 377
1462 Geospatial Analysis for Predicting Sinkhole Susceptibility in Greene County, Missouri
Authors: Shishay Kidanu, Abdullah Alhaj
Abstract:
Sinkholes in the karst terrain of Greene County, Missouri, pose significant geohazards, imposing challenges on construction and infrastructure development, with potential threats to lives and property. To address these issues, understanding the influencing factors and modeling sinkhole susceptibility is crucial for effective mitigation through strategic changes in land use planning and practices. This study utilizes geographic information system (GIS) software to collect and process diverse data, including topographic, geologic, hydrogeologic, and anthropogenic information. Nine key sinkhole influencing factors, ranging from slope characteristics to proximity to geological structures, were carefully analyzed. The Frequency Ratio method establishes relationships between attribute classes of these factors and sinkhole events, deriving class weights to indicate their relative importance. Weighted integration of these factors is accomplished using the Analytic Hierarchy Process (AHP) and the Weighted Linear Combination (WLC) method in a GIS environment, resulting in a comprehensive sinkhole susceptibility index (SSI) model for the study area. Employing the Jenks natural breaks classification method, the SSI values are categorized into five distinct sinkhole susceptibility zones: very low, low, moderate, high, and very high. Validation of the model, conducted through the Area Under the Curve (AUC) and Sinkhole Density Index (SDI) methods, demonstrates a robust correlation with sinkhole inventory data. The prediction rate curve yields an AUC value of 74%, indicating a 74% validation accuracy. The SDI result further supports the success of the sinkhole susceptibility model. This model offers reliable predictions for the future distribution of sinkholes, providing valuable insights for planners and engineers in the formulation of development plans and land-use strategies.
Its application extends to enhancing preparedness and minimizing the impact of sinkhole-related geohazards on both infrastructure and the community.
Keywords: sinkhole, GIS, analytical hierarchy process, frequency ratio, susceptibility, Missouri
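To make the weighted-integration step concrete, here is a minimal, hypothetical sketch of the Weighted Linear Combination: frequency-ratio class weights for three (of the nine) factors are combined with AHP factor weights over a tiny raster. All numbers are invented for illustration.

```python
# Sketch of the WLC step: per-cell SSI = sum over factors of
# (AHP factor weight) x (frequency-ratio class weight). Values are illustrative.
import numpy as np

# Hypothetical AHP weights for three of the nine factors (sum to 1)
ahp_weights = np.array([0.5, 0.3, 0.2])

# Hypothetical frequency-ratio class weights per factor, on a 2x2 grid of cells
slope_fr   = np.array([[1.8, 0.4], [1.1, 0.9]])
geology_fr = np.array([[2.0, 0.5], [0.7, 1.3]])
depth_fr   = np.array([[0.6, 1.5], [1.2, 0.8]])

ssi = (ahp_weights[0] * slope_fr
       + ahp_weights[1] * geology_fr
       + ahp_weights[2] * depth_fr)
print(ssi)
```

In the actual study each factor is a full raster layer and all nine factors enter the sum; the resulting SSI surface is then binned into the five susceptibility zones.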
Procedia PDF Downloads 74
1461 Parameter Estimation via Metamodeling
Authors: Sergio Haram Sarmiento, Arcady Ponosov
Abstract:
Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, classical linear regression strategies are refined into a nonlinear regression by a locally linear modelling technique (known as metamodelling). The approach identifies those latent variables of the given model that accumulate most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular, to the so-called 'power-law systems', being non-linear differential equations typically used in Biochemical System Theory.
Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels
Procedia PDF Downloads 517
1460 Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations
Authors: Madan Chandra Maurya, A. R. Dar
Abstract:
Among all natural calamities, earthquakes are the most devastating. The combined losses from all other calamities are still far smaller than those caused by earthquakes, so we must be prepared to face such events, which is only possible if we make our structures earthquake resistant. A review of structural damage to braced frame systems after several major earthquakes, including recent ones, has identified some anticipated and unanticipated damage. This damage has prompted many engineers and researchers around the world to consider new approaches to improve the behavior of braced frame systems. Extensive experimental studies over the last forty years on conventional buckling brace components and several braced frame specimens are briefly reviewed, highlighting that the number of studies on full-scale concentric braced frames is still limited. This study therefore centers on the plastic behavior of steel braced-frame systems. Two different analytical approaches have been used to predict the behavior and strength of an un-braced frame. The first is referred to as incremental elasto-plastic analysis, a plastic approach. This method gives a complete load-deflection history of the structure until collapse. It is based on the plastic hinge concept for fully plastic cross sections in a structure under increasing proportional loading. The incremental elasto-plastic analysis (hinge-by-hinge method) is used in this study because of its simplicity in tracing the complete load-deformation history of a two-storey un-braced scaled model. Experiments were then conducted on a two-storey scaled building model, with and without a bracing system, to obtain the true, experimental load-deformation curve of the scaled model. The only way forward is to understand and analyze these techniques and adopt them in our structures.
The study, titled 'Plastic Behavior of Steel Frames Using Different Concentric Bracing Configurations', deals with all of this. It aims at improving the already practiced traditional systems and at checking the behavior and usefulness of a new configuration with respect to the X-braced system as the reference model, i.e., how its plastic behavior differs from that of the X-braced frame. Laboratory tests involved determination of the plastic behavior of these models (with and without bracing) in terms of load-deformation curves. Thus, the aim of this study is to improve the lateral displacement resistance capacity by using a new configuration of brace members in a concentric manner, different from the conventional concentric brace. Once the experimental and manual results (using the plastic approach) were compared, the results from both approaches were also compared with a nonlinear static analysis (pushover analysis) carried out in ETABS, i.e., how closely both previous results depict the behavior seen in the pushover curve, and up to what limit. Test results show that all three approaches behave in a similar manner up to the yield point, and they also confirm the applicability of elasto-plastic analysis (the hinge-by-hinge method) for determining plastic behavior. Finally, the outcome from the three approaches shows that the new configuration chosen for this study behaves in between the plane frame (without bracing, the reference frame) and the conventional X-braced frame.
Keywords: elasto-plastic analysis, concentric steel braced frame, pushover analysis, ETABS
Procedia PDF Downloads 229
1459 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding
Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta
Abstract:
Chimneys are generally tall and slender structures with circular cross-sections, due to which they are highly prone to wind forces. Wind exerts pressure on the wall of the chimneys, which produces unwanted forces. Vortex-induced oscillation is one such excitation, which can lead to the failure of the chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively, very few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing the chimneys for vortex-induced forces. This calls for reliability analysis of the predictions of the responses of the chimneys produced due to vortex shedding phenomena. Although a considerable body of literature exists on the vortex-induced oscillation of chimneys, including code provisions, the reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in vortex shedding phenomena to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency domain spectral analysis using a matrix approach.
For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration
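The Gumbel-based fragility step can be sketched in a few lines: with the annual-maximum mean wind speed modeled as Gumbel type-I, the annual probability of exceeding the wind speed associated with a given response threshold follows directly from the Gumbel CDF. The location and scale values below are invented for illustration, not the paper's fitted parameters.

```python
# Illustrative fragility-style calculation with a Gumbel type-I annual-maximum
# wind model. mu and beta are hypothetical location/scale parameters (m/s).
import numpy as np

mu, beta = 25.0, 3.0

def annual_exceedance(v):
    """P(annual max wind > v) = 1 - exp(-exp(-(v - mu)/beta))."""
    return 1.0 - np.exp(-np.exp(-(v - mu) / beta))

# Hypothetical wind speeds tied to increasing tip-displacement thresholds
thresholds = np.array([25.0, 30.0, 35.0])
probs = annual_exceedance(thresholds)
print(probs)   # decreasing with threshold
```

Plotting such probabilities against the corresponding displacement thresholds gives a fragility curve of the kind described in the abstract.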
Procedia PDF Downloads 160
1458 Key Transfer Protocol Based on Non-invertible Numbers
Authors: Luis A. Lizama-Perez, Manuel J. Linares, Mauricio Lopez
Abstract:
We introduce a method to perform remote user authentication based on what we call non-invertible cryptography. It exploits the fact that the multiplication of an invertible integer and a non-invertible integer in a ring Zn produces a non-invertible integer, making it infeasible to compute the factorization. The protocol requires the smallest key size when compared with the main public-key algorithms such as Diffie-Hellman, Rivest-Shamir-Adleman, or Elliptic Curve Cryptography. Since we found that the only opportunity for the eavesdropper is to mount an exhaustive search on the keys, the protocol appears to be post-quantum.
Keywords: invertible, non-invertible, ring, key transfer
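The ring-arithmetic fact the protocol exploits can be checked in a few lines. This is a generic illustration of invertibility in Zn (an element is invertible exactly when it is coprime to n), not the authors' protocol itself.

```python
# Generic check: a is invertible in Z_n iff gcd(a, n) == 1, and the product of
# an invertible and a non-invertible element is again non-invertible.
from math import gcd

def is_invertible(a: int, n: int) -> bool:
    return gcd(a, n) == 1

n = 15
a = 7                    # gcd(7, 15) == 1  -> invertible
b = 6                    # gcd(6, 15) == 3  -> non-invertible
product = (a * b) % n    # 42 mod 15 == 12, and gcd(12, 15) == 3
print(is_invertible(a, n), is_invertible(b, n), is_invertible(product, n))
```

Multiplying by an invertible element permutes the ring, so the non-invertibility of b (its common factor with n) survives in the product, which is the property the key transfer relies on.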
Procedia PDF Downloads 179
1457 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)
Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira
Abstract:
Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may cause. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since wheat is widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for the analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding a large amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics), such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), to extract analytical information from the corresponding spectra. PCA is well suited for NIR, since it can handle many spectra at a time and can be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectral data to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the Canonical Variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples.
The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA ranged from 78.3% to 95% for calibration and from 75% to 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina
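A hedged sketch of this chemometric workflow (PCA compression followed by LDA classification, cross-validated with scikit-learn) on synthetic "spectra": the three classes and their offsets are invented stand-ins for the pure and two fiber-fortified semolina groups.

```python
# Illustrative only: PCA -> LDA classification of synthetic spectra standing in
# for pure semolina and two hypothetical fiber levels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_wavelengths = 100

def spectra(offset, n=30):
    # baseline "spectrum" plus a class-dependent offset and random noise
    base = np.sin(np.linspace(0, 3, n_wavelengths)) + offset
    return base + rng.normal(scale=0.05, size=(n, n_wavelengths))

X = np.vstack([spectra(0.0), spectra(0.2), spectra(0.4)])  # pure + two fiber levels
y = np.repeat([0, 1, 2], 30)

model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(model, X, y, cv=5).mean()
print(acc)
```

Compressing to a handful of principal components before LDA mirrors the paper's two-stage approach and avoids fitting the discriminant on hundreds of correlated wavelengths.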
Procedia PDF Downloads 212
1456 Polymer Mediated Interaction between Grafted Nanosheets
Authors: Supriya Gupta, Paresh Chokshi
Abstract:
Polymer-particle interactions can be effectively utilized to produce composites that possess physicochemical properties superior to those of the neat polymer. The incorporation of fillers with dimensions comparable to the polymer chain size produces composites with extraordinary properties owing to a very high surface-to-volume ratio. The dispersion of nanoparticles is achieved by inducing steric repulsion, realized by grafting the particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With the focus on the incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer-mediated interparticle potential for two nanosheets grafted with polymeric chains. Self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field under equilibrium. Unlike continuum models, SCFT is built from a microscopic description, taking into account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in a matrix of polymeric chains of chemistry dissimilar to that of the grafted chains. The interaction potential is repulsive at short separations and shows depletion attraction at moderate separations, induced by high grafting density. It is found that the strength of the attraction well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers, with one of the blocks being chemically identical to the free polymeric chains.
The interplay between the enthalpic interaction between the dissimilar species and the entropy of the free chains gives rise to rich behavior in the interaction potential curve, obtained for two separate cases of free chains chemically similar to either the grafted block or the free block of the grafted diblock chains.
Keywords: clay nanosheets, polymer brush, polymer nanocomposites, self-consistent field theory
Procedia PDF Downloads 252
1455 Proposals of Exposure Limits for Infrasound From Wind Turbines
Authors: M. Pawlaczyk-Łuszczyńska, T. Wszołek, A. Dudarewicz, P. Małecki, M. Kłaczyński, A. Bortkiewicz
Abstract:
Human tolerance to infrasound is defined by the hearing threshold. Infrasound that cannot be heard (or felt) is not annoying and is not thought to have any other adverse health effects. Recent research has largely confirmed earlier findings. ISO 7196:1995 recommends the use of the G-weighting characteristic for the assessment of infrasound. There is a strong correlation between G-weighted SPL and annoyance perception. The aim of this study was to propose exposure limits for infrasound from wind turbines. However, only a few countries have set limits for infrasound. These limits are usually no higher than 85-92 dBG, and none of them are specific to wind turbines. Over the years, a number of studies have been carried out to determine hearing thresholds below 20 Hz. It has been recognized that 10% of young people would be able to perceive 10 Hz at around 90 dB, and it has also been found that the difference in median hearing thresholds between young adults aged around 20 years and older adults aged over 60 years is around 10 dB, irrespective of frequency. This shows that older people (up to about 60 years of age) retain good hearing in the low-frequency range, while their sensitivity to higher frequencies is often significantly reduced. In terms of exposure limits for infrasound, the average hearing threshold corresponds to a tone with a G-weighted SPL of about 96 dBG. In contrast, infrasound at Lp,G levels below 85-90 dBG is usually inaudible. The individual hearing threshold can, therefore, be 10-15 dB lower than the average threshold, so the recommended limits for environmental infrasound could be 75 dBG or 80 dBG. It is worth noting that the G86 curve has been taken as the threshold of auditory perception of infrasound reached by 90-95% of the population, so the G75 and G80 curves can be taken as criterion curves for wind turbine infrasound.
Finally, two assessment methods and corresponding exposure limit values have been proposed for wind turbine infrasound, i.e., Method I, based on G-weighted sound pressure level measurements, and Method II, based on frequency analysis in 1/3-octave bands in the frequency range 4-20 Hz. Separate limit values have been set for outdoor living areas in the open countryside (Area A) and for noise-sensitive areas (Area B). In the case of Method I, infrasound limit values of 80 dBG (for Area A) and 75 dBG (for Area B) have been proposed, while in the case of Method II, criterion curves G80 and G75 have been chosen (for Areas A and B, respectively).
Keywords: infrasound, exposure limit, hearing thresholds, wind turbines
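A Method I-style check can be illustrated by energetically summing hypothetical G-weighted 1/3-octave band levels over 4-20 Hz into a single G-weighted SPL; the band levels below are invented for illustration.

```python
# Energetic (power) summation of hypothetical G-weighted 1/3-octave band
# levels into one overall G-weighted SPL. Band levels are illustrative only.
import numpy as np

bands_hz = [4, 5, 6.3, 8, 10, 12.5, 16, 20]          # nominal 1/3-octave centers
levels_dbg = np.array([60, 62, 65, 68, 70, 69, 66, 63])

# L_total = 10 * log10( sum_i 10^(L_i / 10) )
total_dbg = 10 * np.log10(np.sum(10 ** (levels_dbg / 10)))
print(round(total_dbg, 1))
```

With these illustrative levels the combined value comes to roughly 75.6 dBG, which would exceed the proposed 75 dBG limit for Area B while complying with the 80 dBG limit for Area A.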
Procedia PDF Downloads 83
1454 E-Management and Firm Performance: An Empirical Study in Tunisian Firms
Authors: Khlif Hamadi
Abstract:
The principal aim of our research is to analyze the impact of the adoption of an e-management approach on the performance of Tunisian firms. The structural equation method was adopted to conduct our exploratory and confirmatory analysis. The results arising from the questionnaire sent to 155 e-managers affirm that the adoption of an e-management approach influences the performance of Tunisian firms. The results of the questionnaire show that e-management favors the deployment of ICT usage and contributes enormously to the performance of the modern enterprise. The theoretical and practical implications of the study, as well as directions for future research, are discussed.
Keywords: e-management, ICT deployment, organizational performance, e-manager
Procedia PDF Downloads 343
1453 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
A large volume of mine tailings is produced every year as part of the extraction process for phosphates, gold, copper, and other materials. Mine tailings are high in water content and have very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of tailings consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as the theory enforces conservation of mass and momentum and treats the hydraulic conductivity as a function of the void ratio. Classical laboratory techniques, such as the settling column test and the seepage consolidation test, are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement-versus-time curves is therefore explored. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the mine tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.
Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
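The inverse step can be sketched with a minimal particle swarm optimization loop; here a toy exponential decay stands in for the paper's finite-difference forward model, and the two fitted parameters stand in for the hydraulic conductivity parameters.

```python
# Minimal PSO sketch for inverse parameter estimation. The forward model is a
# toy exponential decay, NOT the paper's large-strain consolidation solver.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 50)
true_params = np.array([1.5, 0.8])

def forward(p):
    # toy stand-in for the forward (dissipation-curve) model
    return p[0] * np.exp(-p[1] * t)

measured = forward(true_params)          # synthetic "measured" curve

n_particles, iters = 20, 100
pos = rng.uniform(0.1, 3.0, (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([np.sum((forward(p) - measured) ** 2) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # inertia + cognitive + social velocity update (standard PSO coefficients)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([np.sum((forward(p) - measured) ** 2) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print(gbest)   # typically recovers values near [1.5, 0.8]
```

In the actual workflow, `forward` would be the finite-difference solution of the large-strain consolidation equations, and the swarm would search the hydraulic-conductivity parameter space.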
Procedia PDF Downloads 134
1452 Curve Designing Using an Approximating 4-Point C^2 Ternary Non-Stationary Subdivision Scheme
Authors: Muhammad Younis
Abstract:
A ternary 4-point approximating non-stationary subdivision scheme is introduced that generates a family of C^2 limit curves. The theory of asymptotic equivalence is used to analyze the convergence and smoothness of the scheme. The proposed scheme is compared, using different examples, with the existing 4-point ternary approximating schemes, which shows that the limit curves of the proposed scheme behave more pleasantly and can also generate conic sections.
Keywords: ternary, non-stationary, approximation subdivision scheme, convergence and smoothness
Procedia PDF Downloads 477
1451 Spectroscopic Constant Calculation of the BeF Molecule
Authors: Nayla El-Kork, Farah Korjieh, Ahmed Bentiba, Mahmoud Korek
Abstract:
Ab initio calculations have been performed to investigate the spectroscopic constants of the diatomic compound BeF. Values of the internuclear distance Re, the harmonic frequency ωe, the rotational constant Be, the electronic transition energy with respect to the ground state Te, the eigenvalues Ev, the abscissas of the turning points Rmin and Rmax, the rotational constants Bv, and the centrifugal distortion constants Dv have been calculated for the molecule's ground and excited electronic states. The results are in agreement with experimental data.
Keywords: spectroscopic constant, potential energy curve, diatomic molecule, spectral analysis
Procedia PDF Downloads 569
1450 Challenges and Lessons of Mentoring Processes for Novice Principals: An Exploratory Case Study of Induction Programs in Chile
Authors: Carolina Cuéllar, Paz González
Abstract:
Research has shown that school leadership has a significant indirect effect on students' achievements. In Chile, evidence has also revealed that this impact is stronger in vulnerable schools. With the aim of strengthening school leadership, public policy has taken up the challenge of enhancing the capabilities of novice principals through the implementation of induction programs, which include a mentoring component, entrusting the task of delivering these programs to universities. The importance of using mentoring or coaching models in the preparation of novice school leaders has been emphasized in the international literature. Thus, it can be affirmed that building leadership capacity through partnership is crucial to facilitate the cognitive and affective support required in the initial phase of the principal career, gain role clarification and socialization in context, and stimulate reflective leadership practice, among others. In Chile, mentoring is a recent phenomenon in the field of school leadership, and it is even newer in the preparation of new principals who work in public schools. This study, funded by the Chilean Ministry of Education, sought to explore the challenges and lessons arising from the design and implementation of the mentoring processes which are part of the induction programs, according to the perception of the different actors involved: ministerial agents, university coordinators, mentors, and novice principals. The investigation used a qualitative design, based on a study of three cases (three induction programs). The sources of information were 46 semi-structured interviews, conducted at two points in time (at the beginning and at the end of mentoring). The content analysis technique was employed. Data focused on the uniqueness of each case and the commonalities within the cases. Five main challenges and lessons emerged in the design and implementation of mentoring within the induction programs for new principals from Chilean public schools.
They comprised the need for (i) developing a shared conceptual framework on mentoring among the institutions and actors involved, which helps align expectations for the mentoring component within the induction programs and assists in establishing a theory of action for mentoring that is relevant to the public school context; (ii) recognizing, through actions and decisions at different levels, that the role of a mentor differs from the role of a principal, which challenges the idea that an effective principal will always be an effective mentor; (iii) improving mentors' selection and preparation processes through the definition of common guiding criteria to ensure that a mentor takes responsibility for developing the critical judgment of novice principals, which implies not limiting the mentor's actions to assisting in compliance with prescriptive practices and standards; (iv) generating common evaluative models with goals, instruments, and indicators consistent with the characteristics of mentoring processes, which helps to assess expected results and impact; and (v) including the design of a mentoring structure as an outcome of the induction programs, which helps sustain mentoring within schools as a collective professional development practice. Results showcased interwoven elements that entail continuous negotiations at different levels. Taking action will contribute to policy efforts aimed at professionalizing the leadership role in public schools.
Keywords: induction programs, mentoring, novice principals, school leadership preparation
Procedia PDF Downloads 125
1449 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to time-series groundwater-level observation data and analyzes the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools to produce the best estimate of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is a valuable tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
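The frequency-analysis step can be sketched as follows: a synthetic hourly groundwater-level record with an injected 1 cycle/day pumping signal is transformed with an FFT, and the amplitude at the daily frequency is read off; in the decision-tree logic described above, a strong daily component points to artificial extraction. The record and its amplitudes are synthetic.

```python
# FFT sketch of the daily-frequency analysis. The groundwater-level record is
# synthetic: a 0.3 m daily oscillation (standing in for pumping) plus noise.
import numpy as np

hours = np.arange(0, 24 * 60)                    # 60 days of hourly readings
rng = np.random.default_rng(3)
level = (10
         + 0.3 * np.sin(2 * np.pi * hours / 24)  # injected 1 cycle/day signal
         + 0.05 * rng.normal(size=hours.size))

amp = np.abs(np.fft.rfft(level - level.mean())) * 2 / hours.size
freq = np.fft.rfftfreq(hours.size, d=1 / 24)     # frequencies in cycles per day

daily_amp = amp[np.argmin(np.abs(freq - 1.0))]
print(round(daily_amp, 2))                       # recovers roughly 0.3
```

Because the record spans an integer number of days, the 1 cycle/day component falls exactly on an FFT bin and the injected amplitude is recovered almost exactly.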
Procedia PDF Downloads 157
1448 Parameter Estimation in Dynamical Systems Based on Latent Variables
Authors: Arcady Ponosov
Abstract:
A novel mathematical approach is suggested, which facilitates a compressed representation and efficient validation of parameter-rich ordinary differential equation models describing the dynamics of complex, especially biology-related, systems and which is based on identification of the system's latent variables. In particular, an efficient parameter estimation method for the compressed non-linear dynamical systems is developed. The method is applied to the so-called 'power-law systems', being non-linear differential equations typically used in Biochemical System Theory.
Keywords: generalized law of mass action, metamodels, principal components, synergetic systems
Procedia PDF Downloads 355
1447 SOTM: A New Cooperation Based Trust Management System for VANET
Authors: Amel Ltifi, Ahmed Zouinkhi, Mohamed Salim Bouhlel
Abstract:
Security and trust management in Vehicular Ad-hoc NETworks (VANET) is a crucial research domain and the focus of much research. However, the majority of the proposed trust management systems for VANET rely on specific road infrastructure, which may not be present on all roads. Therefore, road security should be managed by the vehicles themselves. In this paper, we propose a new Self-Organized Trust Management system (SOTM). This system is responsible for curbing the spread of false warnings in the network through four principal components: cooperation, trust management, communication, and security.
Keywords: active vehicle, cooperation, trust management, VANET
Procedia PDF Downloads 430
1446 Carrying Capacity Estimation for Small Hydro Plant Located in Torrential Rivers
Authors: Elena Carcano, James Ball, Betty Tiko
Abstract:
Carrying capacity refers to the maximum population that a given level of resources can sustain over a specific period. In undisturbed environments, the maximum population is determined by the availability and distribution of resources, as well as competition for their utilization. This information is typically obtained through long-term data collection. In regulated environments, where resources are artificially modified, populations must adapt to changing conditions, which can pose additional challenges due to fluctuations in resource availability over time and throughout development. An example of this is observed at hydropower plants, which alter water flow and affect fish migration patterns and behaviors. To assess how fish species adapt to these changes, specialized surveys are conducted, which provide valuable information on fish populations, sample sizes, and density before and after flow modifications. In such situations, it is highly recommended to conduct hydrological and biological monitoring to gain insight into how flow reductions affect species adaptability and to prevent unfavorable exploitation conditions. This analysis involves several planned steps that help design appropriate hydropower production while simultaneously addressing environmental needs. Consequently, the study aims to strike a balance between technical assessment, biological requirements, and societal expectations. Beginning with a small hydro project that requires restoration, this analysis focuses on the lower tail of the Flow Duration Curve (FDC), where both hydrological and environmental goals can be met. The proposed approach determines the threshold condition tolerable for the most vulnerable species sampled (Telestes muticellus) by identifying a low flow value from the long-term FDC.
The results establish a practical connection between hydrological and environmental information and simplify the process by identifying a single reference flow value that represents the minimum environmental flow to be maintained.
Keywords: carrying capacity, fish bypass ladder, long-term streamflow duration curve, eta-beta method, environmental flow
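Reading a low-flow threshold off the lower tail of an FDC amounts to sorting the flow record and evaluating an exceedance quantile. A hedged sketch (the synthetic flow record and Weibull plotting positions are our assumptions; the paper's eta-beta method is not reproduced here):

```python
# Sketch: read a low-flow value (e.g. Q95, the flow exceeded 95% of the time)
# from a flow duration curve. The flow record below is synthetic.

def fdc_quantile(flows, exceedance):
    """Flow exceeded `exceedance` fraction of the time (Weibull plotting positions)."""
    ranked = sorted(flows, reverse=True)      # highest flow first
    n = len(ranked)
    # Weibull plotting position of 1-based rank m: p = m / (n + 1)
    m = min(max(int(round(exceedance * (n + 1))), 1), n)
    return ranked[m - 1]

# Synthetic record: 1000 "daily" flows in m^3/s, skewed toward low values.
flows = [0.5 + 9.5 * ((i % 100) / 100.0) ** 3 for i in range(1000)]
q95 = fdc_quantile(flows, 0.95)
print(q95)
```

The same function evaluated near the tail (exceedance 0.9-0.99) gives the candidate reference flows from which an environmental threshold can be chosen.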
Procedia PDF Downloads 40
1445 Corneal Confocal Microscopy As a Surrogate Marker of Neuronal Pathology In Schizophrenia
Authors: Peter W. Woodruff, Georgios Ponirakis, Reem Ibrahim, Amani Ahmed, Hoda Gad, Ioannis N. Petropoulos, Adnan Khan, Ahmed Elsotouhy, Surjith Vattoth, Mahmoud K. M. Alshawwaf, Mohamed Adil Shah Khoodoruth, Marwan Ramadan, Anjushri Bhagat, James Currie, Ziyad Mahfoud, Hanadi Al Hamad, Ahmed Own, Peter Haddad, Majid Alabdulla, Rayaz A. Malik
Abstract:
Introduction: We aimed to test the hypothesis that, using corneal confocal microscopy (a non-invasive method for assessing corneal nerve fibre integrity), patients with schizophrenia would show neuronal abnormalities compared with healthy participants. Schizophrenia is a neurodevelopmental and progressive neurodegenerative disease for which there are no validated biomarkers. Corneal confocal microscopy (CCM) is a non-invasive ophthalmic imaging biomarker that can be used to detect neuronal abnormalities in neuropsychiatric syndromes. Methods: Patients with schizophrenia (DSM-V criteria) without other causes of peripheral neuropathy and healthy controls underwent CCM, vibration perception threshold (VPT) and sudomotor function testing. The diagnostic accuracy of CCM in distinguishing patients from controls was assessed using the area under the curve (AUC) of the Receiver Operating Characteristics (ROC) curve. Findings: Participants with schizophrenia (n=17) and controls (n=38) of comparable age (35.7±8.5 vs 35.6±12.2, P=0.96) were recruited. Patients with schizophrenia had significantly higher body weight (93.9±25.5 vs 77.1±10.1, P=0.02) and lower low-density lipoproteins (2.6±1.0 vs 3.4±0.7, P=0.02), while systolic and diastolic blood pressure, HbA1c, total cholesterol, triglycerides and high-density lipoproteins were comparable with control participants. Patients with schizophrenia had significantly lower corneal nerve fiber density (CNFD, fibers/mm2) (23.5±7.8 vs 35.6±6.5, p<0.0001), branch density (CNBD, branches/mm2) (34.4±26.9 vs 98.1±30.6, p<0.0001), and fiber length (CNFL, mm/mm2) (14.3±4.7 vs 24.2±3.9, p<0.0001), but no difference in VPT (6.1±3.1 vs 4.5±2.8, p=0.12) or electrochemical skin conductance (61.0±24.0 vs 68.9±12.3, p=0.23) compared with controls.
The diagnostic accuracy of CNFD, CNBD and CNFL in distinguishing patients with schizophrenia from healthy controls was, by AUC (95% CI): 87.0% (76.8-98.2), 93.2% (84.2-102.3) and 93.2% (84.4-102.1), respectively. Conclusion: CCM can be used to help identify neuronal changes and has a high diagnostic accuracy in distinguishing subjects with schizophrenia from healthy controls.
Procedia PDF Downloads 275
1444 Pyramid Binary Pattern for Age Invariant Face Verification
Authors: Saroj Bijarnia, Preety Singh
Abstract:
We propose a simple and effective biometric system for face verification across aging based on a new variant of texture feature, the Pyramid Binary Pattern. This employs the Local Binary Pattern along with its hierarchical information. Dimension reduction of the generated texture feature vector is done using Principal Component Analysis. A Support Vector Machine is used for classification. Our proposed method achieves an accuracy of 92.24% and can be used in an automated age-invariant face verification system.
Keywords: biometrics, age invariant, verification, support vector machine
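The PCA dimension-reduction step described above can be sketched with a plain SVD on centered feature vectors; the data below are random stand-ins for the Pyramid Binary Pattern feature vectors, and the component count is arbitrary:

```python
import numpy as np

# Sketch of the PCA dimension-reduction step: project centered feature
# vectors onto the top-k principal directions obtained via SVD. The data
# are random stand-ins for the Pyramid Binary Pattern features.

def pca_reduce(X, k):
    """Return X projected onto its first k principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # scores on the top-k components

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                   # 200 samples, 64-dim features
Z = pca_reduce(X, k=10)
print(Z.shape)                                   # (200, 10)
```

The reduced matrix `Z` would then feed the SVM classifier; the right `k` is typically chosen from the explained-variance profile of the singular values.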
Procedia PDF Downloads 352
1443 Laser Beam Bending via Lenses
Authors: Remzi Yildirim, Fatih. V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams is invented. The lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 488
1442 Laser Light Bending via Lenses
Authors: Remzi Yildirim, Fatih V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams is invented. The lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of the laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 702
1441 Comparative Efficacy of Angiotensin Converting Enzymes Inhibitors and Angiotensin Receptor Blockers in Patients with Heart Failure in Tanzania: A Prospective Cohort Study
Authors: Mark P. Mayala, Henry Mayala, Khuzeima Khanbhai
Abstract:
Background: Heart failure has been a rising concern in Tanzania. New drugs have been introduced, including the Angiotensin Receptor-Neprilysin Inhibitors (ARNI), but due to their high cost, angiotensin-converting enzyme inhibitors (ACEIs) and angiotensin receptor blockers (ARBs) are mostly used in Tanzania. However, to our knowledge, the efficacy of the two groups has not yet been compared in Tanzania. The aim of this study was to compare the efficacy of ACEIs and ARBs among patients with heart failure. Methodology: This was a hospital-based prospective cohort study done at the Jakaya Kikwete Cardiac Institute (JKCI), Tanzania, from June to December 2020. Consecutive enrollment of patients fulfilling the inclusion criteria was done. Clinical details were measured at baseline. We assessed the relationship between ARB and ACEI use and N-terminal pro-brain natriuretic peptide (NT pro-BNP) levels at admission and at 1-month follow-up using a chi-square test. A Kaplan-Meier curve was used to estimate the survival of the two groups. Results: 155 HF patients were enrolled, with a mean age of 48 years; 52.3% were male, and their mean left ventricular ejection fraction (LVEF) was 37.3%. 52 (33.5%) heart failure patients were on ACEIs, 57 (36.8%) on ARBs, and 46 (29.7%) were on neither ACEIs nor ARBs. Only 82 (52.9%) of the patients received guideline-directed medical therapy (GDMT). A drop in NT pro-BNP levels between admission and 1-month follow-up was observed in both groups, from 6389.2 pg/ml to 4000.1 pg/ml for ARB users and from 5877.7 pg/ml to 1328.2 pg/ml for ACEI users. There was no statistical difference between the two groups on the Kaplan-Meier estimate, though more deaths were observed in those on neither ACEIs nor ARBs, with a calculated P value of 0.01.
Conclusion: This study suggests that ACEIs have greater efficacy and an overall better clinical outcome than ARBs, but treatment should be chosen on a patient-by-patient basis, considering the side effects of ACEIs and patients' adherence.
Keywords: angiotensin converting enzyme inhibitors, angiotensin receptor blockers, guideline-directed medical therapy, N-terminal pro-brain natriuretic peptide
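The Kaplan-Meier estimate used for the survival comparison multiplies, at each observed event time, the fraction of at-risk patients who survive it. A minimal sketch with illustrative follow-up times (not the JKCI cohort data):

```python
# Minimal Kaplan-Meier estimator: S(t) = product over event times t_i <= t
# of (1 - d_i / n_i), with d_i deaths and n_i patients at risk at t_i.
# The times and censoring flags below are illustrative only.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time; events[i]=1 means death."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv, curve = 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = censored = 0
        while i < len(order) and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= deaths + censored
    return curve

times  = [1, 2, 2, 3, 4, 5, 5, 6]   # months of follow-up
events = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = death observed, 0 = censored
print(kaplan_meier(times, events))
```

Computing one such curve per treatment group and comparing them (e.g. with a log-rank test) is the standard way the group difference reported above is assessed.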
Procedia PDF Downloads 85
1440 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Diabetes in India is growing at an alarming rate, and the complications it causes need to be controlled. Coronary heart disease (CHD) is the complication considered for prediction in this study. India has the second largest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include patient data such as sex, age, height, weight, body mass index (BMI), blood sugar fasting (BSF), post-prandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very low-density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity and history of CHD. Predictive risk scores for CHD events are designed by Cox proportional hazards regression. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso and elastic net regression. Youden's index is used to choose the optimal cut-off point from the scores. The five-year probability of CHD is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The risk scores for CHD developed here can be calculated by doctors and patients for self-control of diabetes.
Furthermore, the five-year probabilities can also be used to forecast and monitor the condition of patients.
Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus
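Youden's index, used above to pick the optimal cut-off, maximizes sensitivity + specificity - 1 over candidate thresholds. A small sketch with illustrative risk scores (not the output of the fitted Cox model):

```python
# Youden's J statistic: for each candidate cut-off c,
# J(c) = sensitivity(c) + specificity(c) - 1; the optimal cut-off maximizes J.
# Scores and labels below are illustrative only.

def youden_cutoff(scores, labels):
    """Return (best_cutoff, best_J) over cut-offs at the observed scores."""
    positives = sum(labels)
    negatives = len(labels) - positives
    best = (None, -1.0)
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= c)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < c)
        j = tp / positives + tn / negatives - 1.0
        if j > best[1]:
            best = (c, j)
    return best

scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   1,    0,   1,   1,   1,   1  ]
cutoff, j = youden_cutoff(scores, labels)
print(cutoff, j)
```

Summing J over the same sweep of thresholds is also how the ROC curve (and hence the AUC used for discrimination) is traced out.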
Procedia PDF Downloads 219
1439 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food can be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative information such as in-depth interviews, focus groups and surveys, was employed to find the priority sectors. An agri-food model was developed that includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, based on the U.S. land-grant university approach, which is not common practice in Turkey.
Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
1438 Template-Assisted Synthesis of IrO2 Nanopores Membrane Electrode Assembly
Authors: Zhuo-Xin Lu, Yan Shi, Chang-Feng Yan, Ying Huang, Yuan Gan, Zhi-Da Wang
Abstract:
With TiO2 nanotube arrays (TNTA) as a template, an IrO2 nanopores membrane electrode assembly (MEA) was synthesized by a novel deposit-assemble-etch strategy. By analysing the morphology of IrO2/TNTA and the cyclic voltammetry (CV) curves at different deposition cycles, we propose a reasonable scheme for the process of IrO2 electrodeposition on TNTA. The current density of IrO2/TNTA at 1.5 V vs RHE reaches 5.12 mA/cm2 after 55 deposition cycles, indicating promising OER activity after template removal.
Keywords: electrodeposition, IrO2 nanopores, MEA, OER
Procedia PDF Downloads 446
1437 Agro-Morphological Traits Based Genetic Diversity Analysis of ‘Ethiopian Dinich’ Plectranthus edulis (Vatke) Agnew Populations Collected from Diverse Agro-Ecologies in Ethiopia
Authors: Fekadu Gadissa, Kassahun Tesfaye, Kifle Dagne, Mulatu Geleta
Abstract:
‘Ethiopian dinich’, also called ‘Ethiopian potato’, is one of the economically important ‘orphan’ edible tuber crops indigenous to Ethiopia. We evaluated the morphological and agronomic trait performance of 174 samples from Ethiopia at multiple locations using 12 qualitative and 16 quantitative traits, recorded at the appropriate growth stages. We observed several morphotypes and phenotypic variations in the qualitative traits, along with a wide range of mean performance values for all quantitative traits. Analysis of variance for each quantitative trait showed highly significant (p<0.001) variation among the collections, with non-significant environment-trait interaction for all traits but flower length. Comparatively high phenotypic and genotypic coefficients of variation were observed for plant height, days to flower initiation, days to 50% flowering and tuber number per hill. Moreover, the variability and coefficients of variation due to genotype-environment interaction were nearly zero for all traits except flower length. High genotypic coefficients of variation coupled with high estimates of broad-sense heritability and high genetic advance as a percent of the collection mean were obtained for tuber weight per hill, number of primary branches per plant, tuber number per hill and number of plants per hill. Tuber yield per hectare showed large positive phenotypic and genotypic correlations with those traits. Principal components analysis captured 76% of the total variation in the first six principal axes, with high factor loadings again from tuber number per hill, number of primary branches per plant and tuber weight. The collections were grouped into four clusters, with only a weak pattern based on region (zone) of origin. In general, there is high genetic variability available for ‘Ethiopian dinich’ improvement and conservation.
DNA-based markers are recommended for further estimation of genetic diversity for use in breeding and conservation.
Keywords: agro-morphological traits, Ethiopian dinich, genetic diversity, variance components
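The variability statistics reported above (genotypic and phenotypic coefficients of variation, broad-sense heritability, genetic advance as a percent of the mean) follow directly from the variance components; a sketch with illustrative numbers for a single trait (not the paper's estimates):

```python
# Standard variability statistics from variance components. The numbers
# below are illustrative for one trait, not the paper's estimates.
import math

sigma2_g = 40.0          # genotypic variance
sigma2_e = 10.0          # error (environmental) variance
grand_mean = 25.0        # collection mean of the trait
k = 2.06                 # selection differential at 5% selection intensity

sigma2_p = sigma2_g + sigma2_e                    # phenotypic variance
gcv = math.sqrt(sigma2_g) / grand_mean * 100      # genotypic CV (%)
pcv = math.sqrt(sigma2_p) / grand_mean * 100      # phenotypic CV (%)
h2 = sigma2_g / sigma2_p                          # broad-sense heritability
ga = k * h2 * math.sqrt(sigma2_p)                 # genetic advance
gam = ga / grand_mean * 100                       # GA as % of the mean

print(round(gcv, 2), round(pcv, 2), round(h2, 2), round(gam, 2))
```

High GCV together with high h2 and GA%, as reported for tuber weight and tuber number per hill, is the combination that signals useful selection response.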
Procedia PDF Downloads 190
1436 Probabilistic Modeling Laser Transmitter
Authors: H. S. Kang
Abstract:
A coupled electrical and optical model for the conversion of electrical energy into coherent optical energy for a transmitter-receiver link by a solid-state device is presented. Probability distributions for the travelling laser beam switching time intervals and for the number of switchings in a time interval are obtained. Selector function mapping is employed to regulate the optical data transmission speed. It is established that regulated laser transmission from a PhotoActive Laser transmitter follows the principle of invariance. This considerably simplifies the design of PhotoActive Laser transmission networks.
Keywords: computational mathematics, finite difference Markov chain methods, sequence spaces, singularly perturbed differential equations
Procedia PDF Downloads 431
1435 Creation and Validation of a Measurement Scale of E-Management: An Exploratory and Confirmatory Study
Authors: Hamadi Khlif
Abstract:
This paper deals with the understanding of the concept of e-management and the development of a measuring instrument adapted to the new problems encountered in applying this practice within the modern enterprise. Two principal e-management factors were isolated in an exploratory study carried out among 260 participants. A confirmatory study on a second sample of 270 participants was conducted to cross-validate the measurement scale. The paper presents the literature review specifically dedicated to e-management and the results of the exploratory and confirmatory phases of the development of this scale, which demonstrates satisfactory psychometric qualities. E-management has two dimensions: a managerial dimension and a technological dimension.
Keywords: e-management, management, ICT deployment, mode of management
Procedia PDF Downloads 324
1434 Hydrogeochemical Assessment, Evaluation and Characterization of Groundwater Quality in Ore, South-Western, Nigeria
Authors: Olumuyiwa Olusola Falowo
Abstract:
One of the objectives of the Millennium Development Goals is sustainable access to safe drinking water and basic sanitation. In line with this objective, an assessment of groundwater quality was carried out in the Odigbo Local Government Area of Ondo State in November-February 2019 to assess the drinking, domestic and irrigation uses of the water. Samples were taken from 30 randomly selected groundwater sources (16 shallow wells and 14 boreholes) and analyzed using the American Public Health Association methods for the examination of water and wastewater. Water quality index calculations and diagrams such as the Piper, Gibbs and Wilcox diagrams were used to assess the groundwater, in conjunction with irrigation indices such as percent sodium, sodium absorption ratio, permeability index, magnesium ratio, Kelly ratio, and electrical conductivity. In addition, principal component analysis was used to determine the homogeneity and the source(s) influencing the chemistry of the groundwater. The results show that all the parameters are within the permissible limits of the World Health Organization. The physico-chemical analysis of groundwater samples indicates that the dominant major cations are, in decreasing order, Na+, Ca2+, Mg2+, K+ and the dominant anions are HCO3-, Cl-, SO42-, NO3-. The water quality index values indicate good water (WQI of 50-75) for 70% of the study area. The dominant groundwater facies revealed in this study are the non-carbonate alkali facies (primary salinity, exceeding 50%; zone 7) and the transition zone in which no one cation-anion pair exceeds 50% (zone 9), while evaporation, rock-water interaction, precipitation, and silicate weathering are the dominant processes in the hydrogeochemical evolution of the groundwater. The study indicates that the waters fall within the permissible limits of the irrigation indices adopted and plot in the excellent category on the Wilcox diagram.
In conclusion, the water in the study area is suitable for drinking, domestic and irrigation purposes, with low equivalent salinity concentration and moderate electrical conductivity.
Keywords: equivalent salinity concentration, groundwater quality, hydrochemical facies, principal component analysis, water-rock interaction
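The 'good water' classification above rests on a water quality index that aggregates weighted parameter sub-indices. A sketch of the common weighted arithmetic WQI (the parameter set, standards and weights are illustrative, the pH sub-index is simplified, and the study's exact formulation may differ):

```python
# Weighted arithmetic water quality index: WQI = sum(w_i * q_i) / sum(w_i),
# where q_i = 100 * C_i / S_i rates measured concentration C_i against its
# standard S_i, and w_i is taken proportional to 1 / S_i.
# All parameters, standards and measurements below are illustrative only.

def wqi(measurements, standards):
    """measurements, standards: dicts keyed by parameter name."""
    weights = {p: 1.0 / standards[p] for p in standards}
    total_w = sum(weights.values())
    score = sum(weights[p] * 100.0 * measurements[p] / standards[p]
                for p in standards)
    return score / total_w

standards = {"pH": 8.5, "TDS": 500.0, "Cl": 250.0, "NO3": 50.0}  # WHO-style limits
sample = {"pH": 7.2, "TDS": 320.0, "Cl": 110.0, "NO3": 10.0}
index = wqi(sample, standards)
print(round(index, 1))  # an index of 50-75 would fall in the "good water" class
```

Binning such index values (excellent, good, poor, and so on) is what produces the percentage coverage figures quoted for the study area.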
Procedia PDF Downloads 148