Search results for: digital surface model (DSM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 24132

8082 Mucoadhesive Chitosan-Coated Nanostructured Lipid Carriers for Oral Delivery of Amphotericin B

Authors: S. L. J. Tan, N. Billa, C. J. Roberts

Abstract:

Oral delivery of amphotericin B (AmpB) could eliminate the constraints and side effects associated with intravenous administration, but remains challenging because the drug's physicochemical properties result in meagre bioavailability (0.3%). In an advanced formulation, 1) nanostructured lipid carriers (NLC) were formulated because they can accommodate higher levels of cargo and restrict drug expulsion, and 2) a mucoadhesion feature was incorporated to slow the transit of the NLC along the gastrointestinal tract and hence maximize uptake and improve the bioavailability of AmpB. The AmpB-loaded NLC formulation was successfully prepared via high-shear homogenisation and ultrasonication, and a chitosan coating was adsorbed onto the formed NLC. Physical properties of the formulations (particle size, zeta potential, encapsulation efficiency (%EE), aggregation state and mucoadhesion) as well as the effect of variable pH on the integrity of the formulations were examined. The particle size of the freshly prepared AmpB-loaded NLC was 163.1 ± 0.7 nm, with a negative surface charge, and remained essentially stable over 120 days. Adsorption of chitosan caused a significant increase in particle size, to 348.0 ± 12 nm, with the zeta potential shifting towards positive values. Interestingly, the chitosan-coated AmpB-loaded NLC (ChiAmpB NLC) showed a significant decrease in particle size upon storage, suggesting an 'anti-Ostwald ripening' effect. The AmpB-loaded NLC formulation showed a %EE of 94.3 ± 0.02%, and the incorporation of chitosan increased the %EE significantly, to 99.3 ± 0.15%. This suggests that the addition of chitosan stabilises the NLC formulation by interacting with the anionic segment of the NLC and preventing drug leakage. AmpB in both NLC and ChiAmpB NLC showed polyaggregation, which is the non-toxic conformation.
The mucoadhesiveness of the ChiAmpB NLC formulation was observed under both acidic (pH 5.8) and near-neutral (pH 6.8) conditions, unlike the AmpB-loaded NLC formulation. Hence, the incorporation of chitosan into the NLC formulation not only imparted mucoadhesive properties but also protected against the expulsion of AmpB, making it well primed as a potential oral delivery system for AmpB.

Keywords: Amphotericin B, mucoadhesion, nanostructured lipid carriers, oral delivery

Procedia PDF Downloads 162
8081 Insulin Receptor Substrate-1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) Gene Polymorphisms Associated with Type 2 Diabetes Mellitus in Eritreans

Authors: Mengistu G. Woldu, Hani Y. Zaki, Areeg Faggad, Badreldin E. Abdalla

Abstract:

Background: Type 2 diabetes mellitus (T2DM) is a complex, degenerative, and multi-factorial disease responsible for substantial mortality and morbidity worldwide. Although a relatively large number of studies have addressed the genetics of this disease in the developed world, there is a huge information gap in sub-Saharan Africa in general and in Eritrea in particular. Objective: The principal aim of this study was to investigate the association of common variants of the Insulin Receptor Substrate 1 (IRS1) and Transcription Factor 7-Like 2 (TCF7L2) genes with T2DM in the Eritrean population. Method: In this cross-sectional case-control study, 200 T2DM patients and 112 non-diabetic subjects participated, and genotyping of the IRS1 (rs13431179, rs16822615, rs16822644, rs1801123) and TCF7L2 (rs7092484) tag SNPs was carried out using the PCR-RFLP method. Haplotype analyses were carried out using Plink version 1.07 and Haploview 4.2 software. Linkage disequilibrium (LD) and Hardy-Weinberg equilibrium (HWE) analyses were performed using the Plink software. All descriptive statistical analyses were carried out using SPSS (version 20) software. Throughout the analysis, a p-value ≤0.05 was considered statistically significant. Result: A significant association was found between the rs13431179 SNP of the IRS1 gene and T2DM under the recessive model of inheritance (OR=9.00, 95%CI=1.17-69.07, p=0.035), and a marginally significant association was found in the genotypic model (OR=7.50, 95%CI=0.94-60.06, p=0.058). The rs7092484 SNP of the TCF7L2 gene also showed a markedly significant association with T2DM in the recessive (OR=3.61, 95%CI=1.70-7.67, p=0.001) and allelic (OR=1.80, 95%CI=1.23-2.62, p=0.002) models. Moreover, eight haplotypes of the IRS1 gene were found to have significant associations with T2DM (p=0.013 to 0.049).
Assessments of the interactions of the rs13431179 and rs7092484 genotypes with various parameters demonstrated that high-density lipoprotein (HDL), low-density lipoprotein (LDL), waist circumference (WC), and systolic blood pressure (SBP) are the best predictors of T2DM onset. Furthermore, genotypes of the rs7092484 SNP showed significant associations with various atherogenic indexes (atherogenic index of plasma, LDL/HDL, and CHOL/HDL), and Eritreans carrying the GG or GA genotypes were predicted to be more susceptible to the onset of cardiovascular disease. Conclusions: Results of this study suggest that IRS1 (rs13431179) and TCF7L2 (rs7092484) gene polymorphisms are associated with increased risk of T2DM in Eritreans.
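Odds ratios and confidence intervals of the kind reported above are conventionally computed from a 2x2 genotype-by-status table via the log-OR (Woolf) approximation; a minimal sketch, in which the counts are illustrative placeholders, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = cases with/without the risk genotype,
    c/d = controls with/without the risk genotype."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Illustrative counts only: carriers of the recessive genotype
# among the 200 cases and 112 controls
or_, (lo, hi) = odds_ratio_ci(a=30, b=170, c=8, d=104)
```

An association is reported as significant at the 5% level when the 95% CI excludes 1, matching the p ≤ 0.05 criterion used in the study.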

Keywords: IRS1, SNP, TCF7L2, type 2 diabetes

Procedia PDF Downloads 224
8080 Proposed Anticipating Learning Classifier System for Cloud Intrusion Detection (ALCS-CID)

Authors: Wafa' Slaibi Alsharafat

Abstract:

Cloud computing is a modern approach to networked environments. With the increasing number of network users and online systems, these systems need to be protected from unauthorized resource access, and any attempts at privacy contravention must be detected. For that purpose, an Intrusion Detection System (IDS) is an effective security mechanism for detecting attempted attacks on cloud resources and their information. In this paper, a Cloud Intrusion Detection System is proposed with the aim of reducing or eliminating attacks. The model is shown to achieve a high detection rate in a set of experiments conducted using the benchmark dataset KDD'99.

Keywords: IDS, cloud computing, anticipating classifier system, intrusion detection

Procedia PDF Downloads 474
8079 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world have emphasized the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. Alternatively, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology. The POT methodology allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical distribution, the Generalised Pareto distribution is widely used, although in the case of exceedances of significant wave data (H_s) the two-parameter Weibull and the Exponential distribution, the latter a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto distribution, despite the practical cases where it is applied, is not universally recognized as the adequate solution for modeling exceedances over a threshold u, and references that treat it as a secondary option for significant wave data can be found in the literature. In this framework, the current study tackles the discussion of which statistical models to apply to characterize exceedances of wave data. Comparisons of the Generalised Pareto, two-parameter Weibull, and Exponential distributions are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast was used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, for the present case, a highly non-linear task and, given its growing importance, should be addressed carefully for an efficient estimation of very-low-occurrence events.
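The POT workflow described above, choose a threshold u, keep the exceedances, and fit the candidate tail models, can be sketched as follows; the wave heights are synthetic and the 95th-percentile threshold is an illustrative choice, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hs = rng.weibull(1.5, size=5000) * 2.0   # synthetic significant wave heights (m)

u = np.quantile(hs, 0.95)                # threshold u: 95th percentile
exceed = hs[hs > u] - u                  # peaks over threshold

# Candidate tail models for the exceedances (location fixed at 0)
gpd = stats.genpareto.fit(exceed, floc=0)     # Generalised Pareto
wbl = stats.weibull_min.fit(exceed, floc=0)   # two-parameter Weibull
expn = stats.expon.fit(exceed, floc=0)        # Exponential (GPD with zero shape)

# Compare the fits via log-likelihood on the exceedance sample
for name, dist, params in [("GPD", stats.genpareto, gpd),
                           ("Weibull", stats.weibull_min, wbl),
                           ("Exponential", stats.expon, expn)]:
    ll = np.sum(dist.logpdf(exceed, *params))
    print(f"{name}: log-likelihood = {ll:.1f}")
```

Repeating the comparison over a grid of thresholds u shows how the preferred model changes with the truncation point, which is the sensitivity the abstract highlights.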

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 272
8078 Impacts of Cerium Oxide Nanoparticles on Functional Bacterial Community in Activated Sludge

Authors: I. Kamika, S. Azizi, M. Tekere

Abstract:

Nanotechnology promises significant improvements in advanced materials and manufacturing techniques, with a vast range of applications that are critical for the future competitiveness of national industries. The manipulation and production of materials while controlling their optical properties and surface area at the nanosize scale enabled the birth of this new field. However, the rapidly developing industry raises concerns about the environmental impacts of nanoparticles, as their effects on the functional bacterial community in wastewater treatment remain unclear. The present research assessed the impact of cerium oxide nanoparticles (nCeO) on the bacterial microbiome of an activated sludge system and the consequences for the system's nutrient-removal performance. Out of 15875 reads sequenced, a total of 13133 reads were non-chimeric. The wastewater samples were dominated by unclassified bacteria (51.07% of the bacterial community), followed by classified bacteria (48.93%). Proteobacteria was the most dominant phylum among both classified and unclassified bacteria, whereas 18% of bacteria could not be assigned to a phylum and remained unclassified, suggesting a hitherto vast untapped microbial diversity. The bacterial operational taxonomic units (OTUs) ranged from 1014 to 2629 over the experimental period. Denitrification-related species, including Diaphorobacter species, Thauera species, and those in the Sphaerotilus and Leptothrix group, were found to be inhibited at high concentrations of nCeO. The diversity indices suggested that the bacterial communities inhabiting the wastewater samples became less diverse as the concentration of nCeO increased. The canonical correspondence analysis (CCA) results highlighted that the bacterial community variance had the strongest relationship with water temperature, conductivity, pH, and dissolved oxygen (DO) content, as well as with nCeO.
The results provide insight into the relationships between the microbial community and environmental variables in the wastewater samples.

Keywords: bacterial community, next generation, cerium oxide, wastewater, activated sludge, nanoparticles, nanotechnology

Procedia PDF Downloads 217
8077 Design of Reinforced Concrete (RC) Walls Considering Shear Amplification by Nonlinear Dynamic Behavior

Authors: Sunghyun Kim, Hong-Gun Park

Abstract:

In performance-based design (PBD), the actual performance of the structure is evaluated using nonlinear dynamic analysis (NDA). Unlike in frame structures, in wall structures the base shear force resulting from the NDA is greatly amplified compared with that from elastic analysis. This shear amplification causes repeated design iterations, which make it difficult for designers to apply the PBD. Therefore, in this paper, the factors that affect shear amplification were studied. The NDA was performed for a 20-story wall model, and from the analysis results a base shear amplification factor was proposed.

Keywords: performance based design, shear amplification factor, nonlinear dynamic analysis, RC shear wall

Procedia PDF Downloads 379
8076 Seismic Hazard Assessment of Offshore Platforms

Authors: F. D. Konstandakopoulou, G. A. Papagiannopoulos, N. G. Pnevmatikos, G. D. Hatzigeorgiou

Abstract:

This paper examines the effects of pile-soil-structure interaction on the dynamic response of offshore platforms under the action of near-fault earthquakes. Two offshore platform models are investigated, one with completely fixed supports and one with piles clamped into deformable layered soil. The soil deformability for the second model is simulated using non-linear springs. These platform models are subjected to near-fault seismic ground motions. The role of the fault mechanism in the platforms' response is additionally investigated, and the study also examines the effects of different angles of incidence of the seismic records on the maximum response of each platform.

Keywords: hazard analysis, offshore platforms, earthquakes, safety

Procedia PDF Downloads 148
8075 AI for Efficient Geothermal Exploration and Utilization

Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson

Abstract:

Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, and geology. Machine learning algorithms can identify subtle patterns and relationships within these data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (Science-Informed Machine Learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physical laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.

Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal

Procedia PDF Downloads 53
8074 3D Modeling of Tunis Soft Soil Settlement Reinforced with Plastic Wastes

Authors: Aya Rezgui, Lasaad Ajam, Belgacem Jalleli

Abstract:

The Tunis soft soils present a difficult challenge for construction sites and geotechnical works. Currently, different techniques are used to improve such soil properties, taking environmental considerations into account. One recent method involves plastic wastes as a reinforcing material. The present study pertains to the development of a numerical model for predicting the behavior of Tunis soft soil (TSS) improved with recycled Monobloc chair wastes. 3D numerical models for unreinforced and reinforced TSS aim to evaluate settlement reduction and consolidation times under oedometer conditions.

Keywords: Tunis soft soil, settlement, plastic wastes, finite difference, FLAC3D modeling

Procedia PDF Downloads 134
8073 Numerical Modeling of Large Scale Dam Break Flows

Authors: Amanbek Jainakov, Abdikerim Kurbanaliev

Abstract:

The work presents the results of mathematical modeling of large-scale flows in areas with complex topographic relief. The Reynolds-averaged Navier-Stokes equations constitute the basis of the three-dimensional unsteady modeling. The well-known volume-of-fluid method, implemented in the interFoam solver of the open-source package OpenFOAM 2.3, is used to track the free-surface location. The adequacy of the mathematical model is checked by comparison with experimental data. The efficiency of the applied technology is illustrated by modeling the breach of the dams of the Andijan (Uzbekistan) and Papan (near the town of Osh, Kyrgyzstan) reservoirs.

Keywords: three-dimensional modeling, free boundary, the volume-of-fluid method, dam break, flood, OpenFOAM

Procedia PDF Downloads 405
8072 Case-Based Reasoning Approach for Process Planning of Internal Thread Cold Extrusion

Authors: D. Zhang, H. Y. Du, G. W. Li, J. Zeng, D. W. Zuo, Y. P. You

Abstract:

To address the difficult issue of process selection, case-based reasoning technology is applied to a computer-aided process planning system for cold form tapping of internal threads on the basis of process similarity. A model is established based on an analysis of process planning. A case representation and a similarity computation method are given. A confidence degree is used to evaluate retrieved cases, and a rule-based reuse strategy is presented. The scheme is illustrated and verified by practical application; the example case shows that the design results obtained with the proposed method are effective.
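The retrieval step of such a case-based reasoning system typically ranks stored cases by a weighted similarity over process attributes; a minimal sketch, in which the attributes, ranges, weights, and cases are illustrative assumptions rather than the paper's actual scheme:

```python
# Illustrative attribute ranges and weights (hypothetical, not the paper's)
ranges = {"diameter": (2.0, 30.0), "pitch": (0.4, 3.5), "hardness": (100.0, 400.0)}
weights = {"diameter": 0.5, "pitch": 0.3, "hardness": 0.2}

# A tiny case base of previously solved process plans
cases = [
    {"id": "A", "diameter": 8.0, "pitch": 1.25, "hardness": 200.0},
    {"id": "B", "diameter": 20.0, "pitch": 2.5, "hardness": 350.0},
]

def similarity(query, case):
    """Weighted nearest-neighbour similarity on range-normalised attributes."""
    score = 0.0
    for attr, w in weights.items():
        lo, hi = ranges[attr]
        score += w * (1.0 - abs(query[attr] - case[attr]) / (hi - lo))
    return score / sum(weights.values())  # normalised to [0, 1]

query = {"diameter": 10.0, "pitch": 1.5, "hardness": 220.0}
best = max(cases, key=lambda c: similarity(query, c))  # retrieve the most similar case
```

The retrieved case would then be adapted by the rule-based reuse strategy and accepted or rejected according to its confidence degree.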

Keywords: case-based reasoning, internal thread, cold extrusion, process planning

Procedia PDF Downloads 510
8071 CPPI Method with Conditional Floor: The Discrete Time Case

Authors: Hachmi Ben Ameur, Jean Luc Prigent

Abstract:

We propose an extension of the CPPI method based on conditional floors. In this framework, we examine in particular the TIPP and margin-based strategies. These methods allow the investor to keep part of past gains and to protect the portfolio value against future large drawdowns of the financial market while, as with the standard CPPI method, still benefiting from potential market rises. To control the risk of such strategies, we introduce both Value-at-Risk (VaR) and Expected Shortfall (ES) risk measures. For each of these criteria, we show that the conditional floor must be higher than a lower bound. We illustrate these results for a quite general ARCH-type model, including the EGARCH(1,1) as a special case.
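A discrete-time TIPP-style variant of CPPI with a conditional (ratcheted) floor can be sketched as follows; the multiplier, initial floor, and lock-in fraction are illustrative assumptions, and the riskless rate is taken as zero for simplicity:

```python
import numpy as np

def tipp_cppi(prices, multiplier=4.0, floor0=0.9, lock_in=0.9):
    """Discrete-time CPPI with a conditional (ratcheted, TIPP-style) floor.
    At each step the floor is raised to lock_in * portfolio value whenever
    that exceeds the previous floor, preserving part of past gains."""
    v = 1.0                      # initial portfolio value
    floor = floor0               # initial guaranteed floor
    values = [v]
    rets = np.diff(prices) / prices[:-1]
    for r in rets:
        cushion = max(v - floor, 0.0)
        exposure = min(multiplier * cushion, v)   # risky allocation, no leverage
        v = v + exposure * r                      # riskless rate assumed zero
        floor = max(floor, lock_in * v)           # conditional floor ratchet
        values.append(v)
    return np.array(values)
```

With step returns bounded below by -1/multiplier, the portfolio value never falls below the current floor; the VaR and ES criteria in the paper formalise how high the conditional floor must be when returns (e.g. from an ARCH-type model) can breach that bound.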

Keywords: CPPI, conditional floor, ARCH, VaR, expected shortfall

Procedia PDF Downloads 305
8070 Evaluating Social Sustainability in Historical City Center in Turkey: Case Study of Bursa

Authors: Şeyda Akçalı

Abstract:

This study explores the concept of social sustainability and its characteristics in terms of the neighborhood (mahalle), a social phenomenon in Turkish urban life. As social sustainability indicators move away from traditional themes toward multi-dimensional measures, solutions for urban strategies may be achieved by learning lessons from historical precedents. The study considers how the inherent values of traditional urban forms, as well as their social functions, contribute to the evolution of the city. It aims to measure non-tangible issues in order to evaluate social sustainability in historic urban environments and how they could contribute to current urban planning strategies. The concept of the neighborhood (mahalle) refers to a way of living that represents the organization of Turkish social and communal life rather than an administrative unit of the city. The distinctive physical and social features of the neighborhood illustrate the link between social sustainability and the historic urban environment. Instead of taking a nostalgic view of the past, the study identifies both the failures and the successes of traditional urban environments, extracts lessons, and adapts them to the modern context. First, the study determines the aspects of social sustainability that are raised as key themes in the literature. Then, it develops a model describing the social features of the mahalle that show consistency with the social sustainability agenda. The model is used to analyze whether a traditional housing area in the historical city center of Bursa, Turkey, meets residents' social needs and contributes to the collective functioning of the community. Through a questionnaire survey carried out in the historic neighborhoods, residents were evaluated according to the social sustainability criteria of the neighborhood.
The results derived from the factor analysis indicate that the key social aspects of the neighborhood are social infrastructure, identity, attachment, neighborliness, safety, and wellbeing. Qualitative evaluation shows the relationship between these key aspects of social sustainability and demographic and socio-economic factors. The outcomes support the view that the inherent values of the neighborhood retain their importance for the sustainability of the community, although a few factors require local adjustments, made with great care not to compromise the others. The concept of the neighborhood should be considered a potential tool for supporting social sustainability in the national political agenda and in urban policies. The performance of the underlying factors in the historic urban environment provides a basis for examining and improving traditional urban areas and for understanding how they may contribute to the overall city.

Keywords: historical city center, mahalle, neighborhood, social sustainability, traditional urban environment, Turkey

Procedia PDF Downloads 289
8069 Disparities in Language Competence and Conflict: The Moderating Role of Cultural Intelligence in Intercultural Interactions

Authors: Catherine Peyrols Wu

Abstract:

Intercultural interactions are becoming increasingly common in organizations and in life. These interactions are often the stage for miscommunication and conflict. In management research, these problems are commonly attributed to cultural differences in values and interactional norms; as a result, the notion that intercultural competence can minimize these challenges is widely accepted. Cultural differences, however, are not the only source of challenge during intercultural interactions. The need to rely on a lingua franca, a common language between people who have different mother tongues, is another important one. In theory, a lingua franca can improve communication and ease coordination. In practice, however, disparities in people's ability and confidence to communicate in that language can exacerbate tensions and generate inefficiencies. In this study, we draw on power theory to develop a model of disparities in language competence and conflict in a multicultural work context. Specifically, we hypothesized that differences in language competence between interaction partners would be positively related to conflict, such that people would report greater conflict with partners who have more dissimilar levels of language competence and less conflict with partners who have more similar levels. Furthermore, we proposed that cultural intelligence (CQ), an intercultural competence that denotes an individual's capability to be effective in intercultural situations, would weaken the relationship between disparities in language competence and conflict, such that people would report less conflict with partners who have more dissimilar levels of language competence when the interaction partner has high CQ, and more conflict when the partner has low CQ. We tested this model with a sample of 135 undergraduate students working in multicultural teams for 13 weeks, using a round-robin design to examine conflict in 646 dyads nested within 21 teams.
Results of analyses using social relations modeling provided support for our hypotheses. Specifically, we found that in intercultural dyads with large disparities in language competence, the partner with the lower level of language competence reported higher levels of interpersonal conflict. However, this relationship disappeared when the partner with higher language competence was also high in CQ. These findings suggest that communication in a lingua franca can be a source of conflict in intercultural collaboration when partners differ in their level of language competence, and that CQ can alleviate these effects during collaboration with partners who have relatively lower levels of language competence. Theoretically, this study underscores the benefits of CQ as a complement to language competence for intercultural effectiveness. Practically, these results further attest to the benefits of investing resources to develop language competence and CQ in employees engaged in multicultural work.

Keywords: cultural intelligence, intercultural interactions, language competence, multicultural teamwork

Procedia PDF Downloads 165
8068 Determining Optimum Locations for Runoff Water Harvesting in W. Watir, South Sinai, Using RS, GIS, and WMS Techniques

Authors: H. H. Elewa, E. M. Ramadan, A. M. Nosair

Abstract:

Rainfall water harvesting is considered an important tool for overcoming water scarcity in arid and semi-arid regions. Wadi Watir, in the southeastern part of the Sinai Peninsula, is one of the main and most active basins in the Gulf of Aqaba drainage system. It is characterized by steep hills consisting mainly of impermeable rocks, whereas the streambeds are covered by a highly permeable mixture of gravel and sand. A comprehensive approach integrating geographic information systems, remote sensing, and watershed modeling was followed to identify the runoff water harvesting (RWH) capability of this area. Eight thematic layers, viz. volume of annual flood, overland flow distance, maximum flow distance, rock or soil infiltration, drainage frequency density, basin area, basin slope, and basin length, were used as a multi-parametric decision support system for conducting weighted spatial probability models (WSPMs) to determine the potential areas for RWH. The WSPM maps classified the area into five RWH potentiality classes, ranging from very low to very high. The three WSPM scenarios performed for W. Watir produced identical results in their maps for the high and very high RWH potentiality classes, which are the most suitable for surface water harvesting techniques. There is also a reasonable match among the three scenarios for areas of moderate, low, and very low runoff harvesting potentiality. The WSPM results show that the high and very high classes, the most suitable for RWH, represent approximately 40.23% of the total area of the basin. Accordingly, several locations were selected for the establishment of water harvesting dams and cisterns to improve the water conditions and living environment in the study area.
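A weighted spatial probability model of this kind is essentially a weighted overlay of reclassified thematic layers; a minimal sketch with random stand-in rasters and equal placeholder weights (the study's actual layers, reclassification, and weights differ):

```python
import numpy as np

# Hypothetical reclassified thematic layers normalised to [0, 1); the layer
# names follow the abstract, but the raster values and weights are stand-ins.
rng = np.random.default_rng(1)
shape = (100, 100)
names = ["flood_volume", "overland_flow_distance", "max_flow_distance",
         "infiltration", "drainage_frequency_density", "basin_area",
         "basin_slope", "basin_length"]
layers = {name: rng.random(shape) for name in names}
weights = {name: 1.0 / len(names) for name in names}  # equal placeholder weights

# Weighted spatial probability map: weighted sum of the thematic layers
wspm = sum(weights[name] * layers[name] for name in names)

# Split into five RWH potentiality classes (0 = very low .. 4 = very high)
classes = np.digitize(wspm, np.quantile(wspm, [0.2, 0.4, 0.6, 0.8]))
```

In a GIS workflow the same overlay would be run on georeferenced rasters, and the top classes would delineate candidate sites for dams and cisterns.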

Keywords: Sinai, Wadi Watir, remote sensing, geographic information systems, watershed modeling, runoff water harvesting

Procedia PDF Downloads 357
8067 Protein-Enrichment of Oilseed Meals by Triboelectrostatic Separation

Authors: Javier Perez-Vaquero, Katryn Junker, Volker Lammers, Petra Foerst

Abstract:

There is increasing pressure to accelerate the transition to sustainable food systems by adopting environmentally friendly technologies. Our work focuses on protein enrichment and fractionation of agricultural side streams by dry triboelectrostatic separation. Materials are fed into the system in particulate form and dispersed in a highly turbulent gas stream, whereby the high rate of collisions of particles against surfaces and other particles greatly enhances the electrostatic charge build-up on the particle surfaces. A subsequent step takes the charged particles to a delimited zone of the system where a highly uniform, intense electric field is applied. Because the charge polarity acquired by a particle is influenced by its chemical composition, morphology, and structure, the protein-rich and fiber-rich particles of the starting material acquire opposite charge polarities and thus follow different paths as they move through the region where the electric field is present. The output is two material fractions that differ in protein content: a fiber-rich, low-protein fraction and a high-protein, low-fiber fraction. Prior to testing, the materials underwent milling, and some samples were stored under controlled humidity conditions; in this way, the influence of both particle size and humidity content was established. We used two oilseed meals, lupine and rapeseed. In addition to a lab-scale separator, the triboelectric separation process was successfully scaled up to a mid-scale belt separator, increasing the mass feed from g/sec to kg/hour. Triboelectrostatic separation opens up huge potential for the exploitation of so-far underutilized alternative protein sources: agricultural side streams from cereal and oil production, which are generated in high volumes by these industries, can be further valorized by this process.

Keywords: bench-scale processing, dry separation, protein-enrichment, triboelectrostatic separation

Procedia PDF Downloads 190
8066 Markov-Chain-Based Optimal Filtering and Smoothing

Authors: Garry A. Einicke, Langford B. White

Abstract:

This paper describes an optimum filter and smoother for recovering a Markov-process message from noisy measurements. The developments follow from an equivalence between a state-space model and a hidden Markov chain. The ensuing filter and smoother employ transition probability matrices and approximate probability distribution vectors. The properties of the optimum solutions are retained: the estimates are unbiased and minimize the variance of the output estimation error, provided that the assumed parameter set is correct. Methods for estimating unknown parameters from noisy measurements are discussed. Signal recovery examples are described in which performance benefits are demonstrated at an increased calculation cost.
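A filter of this kind propagates an approximate probability distribution vector through the transition probability matrix and corrects it with the observation likelihoods; a minimal forward-filter sketch (the smoother would add a backward pass over the same quantities):

```python
import numpy as np

def hmm_filter(A, B, pi, obs):
    """Forward filter for a hidden Markov chain.
    A:   transition probability matrix (n x n), rows sum to 1
    B:   observation likelihoods, B[i, k] = P(observation k | state i)
    pi:  initial state distribution
    obs: sequence of observation indices
    Returns the filtered state probability vectors, one row per time step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for k in obs[1:]:
        alpha = (A.T @ alpha) * B[:, k]   # predict through A, correct with B
        alpha /= alpha.sum()              # renormalise to a probability vector
        out.append(alpha)
    return np.array(out)
```

The state estimate at each step is then, e.g., the most probable state or the expectation under the filtered distribution; the extra matrix-vector products per step are the "increased calculation cost" relative to a scalar linear filter.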

Keywords: optimal filtering, smoothing, Markov chains

Procedia PDF Downloads 317
8065 Spatial Dynamic of Pico- and Nano-Phytoplankton Communities in the Mouth of the Seine River

Authors: M. Schapira, S. Françoise, F. Maheux, O. Pierre-Duplessix, E. Rabiller, B. Simon, R. Le Gendre

Abstract:

Pico- and nano-phytoplankton are abundant and ecologically critical components of the autotrophic communities in the pelagic realm. While the role of physical forcing related to the tidal cycle, water mass intrusion, nutrient availability, mixing, and stratification in microphytoplankton blooms has been widely investigated, it is often overlooked for pico- and nano-phytoplankton, especially in estuarine waters. This study investigates changes in the abundances and community composition of pico- and nano-phytoplankton under different estuarine tidal conditions in the mouth of the Seine River, in relation to nutrient availability, water column stratification, and spatially localized currents. Samples were collected each day at high tide, over a spring-tide-to-neap-tide cycle, from 21 stations homogeneously distributed in the Seine river mouth in May 2011. Vertical profiles of temperature, salinity, and fluorescence were recorded at each sampling station. Sub-surface water samples (1 m depth) were collected for nutrients (N, P, and Si), phytoplankton biomass (Chl a), and pico- and nano-phytoplankton enumeration and identification. Pico- and nano-phytoplankton populations were identified and quantified using flow cytometry. Total abundances tended to decrease from spring tide to neap tide. Samples were characterized by high abundances of Synechococcus and Cryptophyceae, and the composition of the pico- and nano-phytoplankton varied greatly under the different estuarine tidal conditions. Moreover, at the scale of the river mouth, the pico- and nano-phytoplankton populations exhibited patchy distribution patterns closely controlled by water mass intrusion from the sea, freshwater inputs from the Seine River, and the geomorphology of the river mouth. This study highlights the importance of physical forcing for the community composition of pico- and nano-phytoplankton, which may be critical for the structure of pelagic food webs in estuaries and adjacent coastal seas.

Keywords: nanophytoplankton, picophytoplankton, physical forcing, river mouth, tidal cycle

Procedia PDF Downloads 357
8064 Consensus-Oriented Analysis Model for Knowledge Management Failure Evaluation in Uncertain Environment

Authors: Amir Ghasem Norouzi, Mahdi Zowghi

Abstract:

This study proposes a framework based on fuzzy T-norms, T-conorms, a novel aggregation operator, and a multi-expert approach to help organizations build awareness of the factors that critically influence the success of knowledge management (KM) implementation and to analyze the failure of knowledge management. Because knowledge management implementing capability (KMIC) involves complex uncertainty, fuzzy logic is used to model it. The contribution of the paper is demonstrated with an empirical evaluation study in a nonprofit educational organization.
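The multi-expert aggregation the abstract describes can be sketched as follows. The operator choices here (min T-norm, max T-conorm, weighted mean for consensus) and the example scores are illustrative assumptions, not the paper's actual operator:

```python
# A minimal sketch of multi-expert fuzzy aggregation with a T-norm,
# a T-conorm, and a consensus-oriented average of membership scores.

def t_norm(a, b):
    """Minimum T-norm: pessimistic conjunction of two memberships."""
    return min(a, b)

def t_conorm(a, b):
    """Maximum T-conorm: optimistic disjunction of two memberships."""
    return max(a, b)

def consensus_average(scores, weights):
    """Weighted mean of expert membership scores."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Three experts rate one KM failure factor on [0, 1].
scores = [0.7, 0.5, 0.9]
weights = [2.0, 1.0, 1.0]   # e.g., seniority-based expert weights (assumed)

pessimistic = t_norm(t_norm(scores[0], scores[1]), scores[2])     # 0.5
optimistic = t_conorm(t_conorm(scores[0], scores[1]), scores[2])  # 0.9
consensus = consensus_average(scores, weights)                    # 0.7
```

The spread between the pessimistic and optimistic aggregates gives a direct reading of how much the experts disagree on a factor.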

Keywords: fuzzy logic, knowledge management, multi expert analysis, consensus oriented average operator

Procedia PDF Downloads 627
8063 (Re)connecting to the Spirit of the Language: Decolonizing from Eurocentric Indigenous Language Revitalization Methodologies

Authors: Lana Whiskeyjack, Kyle Napier

Abstract:

The spirit of the language embodies the motivation for Indigenous people to connect with the Indigenous language of their lineage. While the concept of the spirit of the language is often woven into discussion by Indigenous language revitalizationists, particularly those who are Indigenous, there are few tangible terms in academic research conceptually actualizing the term. Through collaborative work with Indigenous language speakers, Elders, and learners, this research sets out to identify the spirit of the language, the catalysts of disconnection from the spirit of the language, and the sources of reconnection to the spirit of the language. This work fundamentally addresses the terms of engagement around collaboration with Indigenous communities, itself inviting a decolonial approach to community outreach and individual relationships. As Indigenous researchers, this means beginning, maintaining, and closing this work in ceremony while being transparent with community members about this work and related publishing throughout the project’s duration. Decolonizing this approach also requires maintaining explicit ongoing consent from the Elders, knowledge keepers, and community members when handling their ancestral and Indigenous knowledge. The handling of this knowledge is regarded in this work as stewardship, both in the handling of digital materials and in the handling of ancestral Indigenous knowledge. This work draws on recorded conversations in both nêhiyawêwin and English, resulting from 10 semi-structured interviews with fluent nêhiyawêwin speakers as well as three structured dialogue circles with fluent and emerging speakers. The words were transcribed by a speaker fluent in both nêhiyawêwin and English. The results of those interviews were categorized thematically to conceptually actualize the spirit of the language, the catalysts of disconnection from the spirit of the language, and community-voiced methods of reconnection to the spirit of the language.
Results of these interviews overwhelmingly indicate that the spirit of the language is drawn from the land. Although nêhiyawêwin is the focus of this work, Indigenous languages are by nature inherently related to the land. This is further reaffirmed by the Indigenous language learners and speakers who expressed having ancestries and lineages from multiple Indigenous communities. Several other key themes embody this spirit of the language, including ceremony and spirituality, as well as the semantic worldviews tied to the polysynthetic, verb-oriented morphophonemics most often found in Indigenous languages, and in particular nêhiyawêwin. The catalysts of disconnection from the spirit of the language are those whose histories have severed connections between Indigenous Peoples and the spirit of their languages, or those that have affected relationships with the land, ceremony, and ways of thinking. The results of this research and its literature review identify the three most ubiquitously damaging, interdependent catalysts of disconnection from the spirit of the language as colonization, capitalism, and Christianity. As voiced by the Indigenous language learners, this work necessitates addressing means to reconnect to the spirit of the language. Interviewees mentioned that the process of reconnection involves a whole relationship with the land, the practice of reciprocal-relational methodologies for language learning, and Indigenous-protected and -governed learning. This work concludes in support of those reconnection methodologies.

Keywords: indigenous language acquisition, indigenous language reclamation, indigenous language revitalization, nêhiyawêwin, spirit of the language

Procedia PDF Downloads 143
8062 The Use of Emerging Technologies in Higher Education Institutions: A Case of Nelson Mandela University, South Africa

Authors: Ayanda P. Deliwe, Storm B. Watson

Abstract:

The COVID-19 pandemic has disrupted the established practices of higher education institutions (HEIs). Most higher education institutions worldwide had to shift from traditional face-to-face to online learning. The online environment and new online tools are disrupting the way in which higher education is presented. Furthermore, the structures of higher education institutions have been impacted by rapid advancements in information and communication technologies. Emerging technologies should not be viewed in a negative light because, as opposed to the traditional curriculum that worked to create productive and efficient researchers, emerging technologies encourage creativity and innovation. Therefore, using technology together with traditional means will enhance teaching and learning. Emerging technologies in higher education not only change the experience of students, lecturers, and the content, but they are also influencing the attraction and retention of students. Higher education institutions are under immense pressure because not only are they competing locally and nationally, but emerging technologies also expand the competition internationally. Emerging technologies have eliminated border barriers, allowing students to study in the country of their choice regardless of where they are in the world. Higher education institutions cannot remain indifferent as technology finds its way into the lecture room day by day. Academics need to utilise the technology at their disposal if they want to get through to their students. Academics are now competing for students' attention with social media platforms such as WhatsApp, Snapchat, Instagram, Facebook, TikTok, and others. This poses a significant challenge to higher education institutions. It is, therefore, critical to pay attention to emerging technologies in order to see how they can be incorporated into the classroom to improve educational quality while remaining relevant to the world of work.
This study aims to understand how emerging technologies have been utilised at Nelson Mandela University in presenting teaching and learning activities since April 2020. The primary objective of this study is to analyse how academics are incorporating emerging technologies into their teaching and learning activities. This objective was pursued by conducting a literature review clarifying and conceptualising the emerging technologies being utilised by higher education institutions and reviewing and analysing their use, and it will be investigated further through an empirical analysis of the use of emerging technologies at Nelson Mandela University. Findings from the literature review revealed that emerging technology is impacting several key areas in higher education institutions, such as the attraction and retention of students, the enhancement of teaching and learning, increased global competition, the elimination of border barriers, and the highlighting of the digital divide. The literature review further identified that learning management systems, open educational resources, learning analytics, and artificial intelligence are the most prevalent emerging technologies being used in higher education institutions. The identified emerging technologies will be further analysed through an empirical analysis to identify how they are being utilised at Nelson Mandela University.

Keywords: artificial intelligence, emerging technologies, learning analytics, learner management systems, open educational resources

Procedia PDF Downloads 69
8061 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognition of objects that can be measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them to move pieces from one location to another in their production. These devices must be programmed in advance for good performance and must have a programmed logic routine. Nowadays production is the main target of every industry, along with quality and the fast elaboration of the different stages and processes in the chain of production of any product or service being offered. The principal goal of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each one with a different color, and link it with a group of conveyor systems to organize the mentioned figures into cubicles, which also differ from one another by having different colors. This project is based on artificial vision; therefore the methodology needed to develop it must be strict. It is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera on the computer. 1.2 Image acquisition: To start using the OpenCV libraries it is necessary to acquire images, which can be captured by a computer’s web camera or a different specialized camera.
1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms it is necessary to segment the images, so the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which means rendering the figure in the image in white on a black background. Finally, the contours of the figure in the image are found to count the number of edges and identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which through the actuators will classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for further processing. With the program developed for this project any type of assembly line can be optimized, because images from the environment can be obtained and the process would be more accurate.
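The grayscale-binarize-contour pipeline in step 1.4 ends in an edge-count decision. A minimal sketch of that final rule (the thresholds and cubicle numbers are illustrative assumptions; in the actual pipeline the vertex count would come from OpenCV's cv2.approxPolyDP applied to each contour found by cv2.findContours):

```python
# Classify a figure from the number of vertices of its approximated
# contour polygon. In the real pipeline this count would come from
# cv2.approxPolyDP; here it is passed in directly so the decision
# rule can be shown on its own.

def classify_figure(num_vertices):
    """Map an approximated-contour vertex count to a figure name."""
    if num_vertices == 3:
        return "triangle"
    if num_vertices == 4:
        return "square"      # an aspect-ratio check could refine this
    if num_vertices > 6:
        return "circle"      # many short edges approximate a circle
    return "unknown"

# Route each classified figure to its cubicle on the conveyor system.
CUBICLES = {"triangle": 1, "square": 2, "circle": 3}

def target_cubicle(num_vertices):
    """Cubicle number for a figure; 0 stands for a reject bin."""
    return CUBICLES.get(classify_figure(num_vertices), 0)
```

In step 1.5 the cubicle number returned here would be translated into actuator commands for the corresponding conveyor.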

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 378
8060 Implementation of Dozer Push Measurement under Payment Mechanism in Mining Operation

Authors: Anshar Ajatasatru

Abstract:

The decline of coal prices over the past years has significantly increased awareness of effective mining operation. A viable step must be undertaken to become more cost competitive while striving for best mining practice, especially at Melak Coal Mine in East Kalimantan, Indonesia. This paper aims to show how an effective dozer push measurement method can be implemented when it is controlled by a contract rate on the unit basis of USD ($) per bcm. The method emerges from the daily dozer push activity that continually shifts the overburden until the final design target set by mine planning is reached. Volume calculation is then performed, each time overburden is removed within the determined distance, using the cut and fill method with a high-precision GNSS system fitted to the dozer as guidance to ensure the optimum result of overburden removal. The accumulated daily-to-weekly dozer push volume was found to be 95 bcm, which, multiplied by the average sell rate of $0.95, gives a monthly revenue of $90.25. Furthermore, the payment mechanism is based on push distance and push grade. The push distance interval determines the rates, which vary from $0.90 to $2.69 per bcm and are influenced by the push slope grade, from -25% to +25%. The payable rates for dozer push operation specifically follow currency adjustment and are added to the monthly overburden volume claim; therefore, the sell rate of overburden volume per bcm may fluctuate depending on the real-time exchange rate of the Jakarta Interbank Spot Dollar Rate (JISDOR). The result indicates that dozer push measurement can be a surface mining alternative, since it has enabled refinement of the method of work, operating cost, and productivity apart from reducing exposure to the risk of poor rented-equipment performance.
In addition, the payment mechanism of contract rates with dozer push operation scheduling will ultimately deliver clients almost 45% cost reduction in the form of a low and consistent cost.
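The rate-times-volume arithmetic above can be sketched as follows. The distance brackets and the JISDOR figure are invented for illustration; only the $0.90-$2.69 rate range, the ±25% grade bounds, and the 95 bcm at $0.95 example come from the text:

```python
# Sketch of the dozer push payment mechanism: a USD-per-bcm rate chosen
# by push distance interval, bounded by the +/-25% slope grade limits,
# multiplied by the measured overburden volume.

# Hypothetical distance brackets (m) -> rate ($/bcm) within $0.90-$2.69.
RATE_TABLE = [(50, 0.90), (100, 1.45), (150, 2.00), (200, 2.69)]

def push_rate(distance_m, grade):
    """Return the contract rate for a push distance and slope grade."""
    if not -0.25 <= grade <= 0.25:
        raise ValueError("push grade outside the -25%..+25% contract range")
    for max_dist, rate in RATE_TABLE:
        if distance_m <= max_dist:
            return rate
    return RATE_TABLE[-1][1]      # beyond the last bracket: top rate

def monthly_claim(volume_bcm, rate_usd_per_bcm, jisdor_idr_per_usd):
    """Volume claim in USD and its IDR value at the spot JISDOR rate."""
    usd = volume_bcm * rate_usd_per_bcm
    return usd, usd * jisdor_idr_per_usd

# The example from the text: 95 bcm at an average rate of $0.95.
usd, idr = monthly_claim(95, 0.95, 15000)   # 15,000 IDR/USD is assumed
```

With these inputs the USD claim reproduces the $90.25 figure quoted above, while the IDR value floats with the exchange rate, as the payment mechanism intends.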

Keywords: contract rate, cut-fill method, dozer push, overburden volume

Procedia PDF Downloads 316
8059 Active Development of Tacit Knowledge Using Social Media and Learning Communities

Authors: John Zanetich

Abstract:

This paper uses a pragmatic research approach to investigate the relationships between the Active Development of Tacit Knowledge (ADTK), social media (Facebook), and classroom learning communities. It investigates the use of learning communities and social media as the context and means for changing tacit knowledge to explicit knowledge, and presents a dynamic model of the development of a classroom learning community. The goal of this study is to identify the point at which explicit knowledge is converted to tacit knowledge and to test a way to quantify the exchange using social media and learning communities.

Keywords: tacit knowledge, knowledge management, college programs, experiential learning, learning communities

Procedia PDF Downloads 361
8058 Pegylated Liposomes of Trans Resveratrol, an Anticancer Agent, for Enhancing Therapeutic Efficacy and Long Circulation

Authors: M. R. Vijayakumar, Sanjay Kumar Singh, Lakshmi, Hithesh Dewangan, Sanjay Singh

Abstract:

Trans-resveratrol (RES) is a natural molecule with proven cancer preventive and therapeutic activities, devoid of any notable side effects. However, the therapeutic application of RES in disease management is limited by its rapid elimination from blood circulation and thereby low biological half-life in mammals. Therefore, the main objective of this study is to enhance its circulation time as well as therapeutic efficacy using PEGylated liposomes. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS) was applied as a steric surface-decorating agent to prepare RES liposomes by the thin film hydration method. The prepared nanoparticles were evaluated by various state-of-the-art techniques: dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for crystalline changes of the drug. Encapsulation efficiency and in vitro drug release were determined by the dialysis bag method. Cancer cell viability studies were performed by MTT assay. Pharmacokinetic studies were performed in Sprague Dawley rats. The prepared liposomes were found to be spherical in shape. The particle size and zeta potential of the prepared formulations varied from 64.5±3.16 to 262.3±7.45 nm and from -2.1 to 1.76 mV, respectively. The DSC study revealed the absence of potential interaction. The XRD study revealed the presence of the amorphous form in the liposomes. Entrapment efficiency was found to be 87.45±2.14% and drug release was controlled for up to 24 hours. The minimized MEC in the MTT assay and the marked enhancement in circulation time of PEGylated RES liposomes over the pristine form indicate that sterically stabilized PEGylated liposomes can be an alternative tool for commercializing this molecule for chemopreventive and therapeutic applications in cancer.
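The entrapment efficiency reported above follows the standard mass-balance definition used with dialysis-bag separation. A minimal sketch (the mass figures are invented for illustration and only echo the order of magnitude reported, not the study's measurements):

```python
# Entrapment efficiency (%EE) from a dialysis-bag separation: the drug
# remaining outside the liposomes (free drug) is measured, and %EE is
# the entrapped fraction of the total drug added.

def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """%EE = (total - free) / total * 100."""
    if free_drug_mg > total_drug_mg:
        raise ValueError("free drug cannot exceed total drug")
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

# Illustrative figures only (not from the study):
ee = entrapment_efficiency(total_drug_mg=10.0, free_drug_mg=1.25)
```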

Keywords: trans resveratrol, cancer nanotechnology, long circulating liposomes, bioavailability enhancement, liposomes for cancer therapy, PEGylated liposomes

Procedia PDF Downloads 589
8057 Knowledge Management: Why Is It So Difficult? From “A Good Idea” to Organizational Contribution

Authors: Lisandro Blas, Héctor Tamanini

Abstract:

From the early 1990s until now, not many companies or organizations have been able to “really” implement a knowledge management (KM) system that works (not only viewed through a measurement model, but with continuity). What are the reasons for that? Some of the reasons may be embedded in how KM is demanded (usefulness, priority, experts, a definition of KM) versus the importance and resources that organizations afford it (budget, a responsible owner for a specific area of KM, intangibility). Many organizations claim the importance of knowledge management, but these claims are not reflected in their subsequent actions. With other tools or management ideas, organizations put economic and human resources to work. Why does this not occur with KM? This paper tries to explain some of these reasons and to address these situations through a survey conducted in 2011 for an IAPG (Argentine Institute of Oil and Gas) congress.

Keywords: knowledge management into organizations, new perspectives, failure in implementation, claim

Procedia PDF Downloads 421
8056 Clusterization Probability in 14N Nuclei

Authors: N. Burtebayev, Sh. Hamada, Zh. Kerimkulov, D. K. Alimov, A. V. Yushkov, N. Amangeldi, A. N. Bakhtibaev

Abstract:

The main aim of the current work is to examine whether 14N is a candidate for being a clusterized nucleus. In order to check this tendency, we have measured the angular distributions for a 14N ion beam elastically scattered on 12C target nuclei at different low energies, 17.5, 21, and 24.5 MeV, which are close to the Coulomb barrier energy for the 14N+12C nuclear system. The study of various transfer reactions could provide useful information about the tendency of a nucleus to exist in a composite form (core + valence). The experimental data were analyzed using two approaches: phenomenological (optical potential) and semi-microscopic (double folding potential). The agreement between the experimental data and the theoretical predictions is fairly good over the whole angular range.
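The quoted beam energies can be checked against a textbook estimate of the Coulomb barrier for touching charged spheres. A sketch under the usual assumptions (radius parameter r0 ≈ 1.4 fm and e² ≈ 1.44 MeV·fm are conventional choices, not values from the paper; the exact barrier depends on the radius parameter used):

```python
# Rough Coulomb barrier for two nuclei modeled as touching spheres:
#   V_C = Z1*Z2*e^2 / (r0 * (A1^(1/3) + A2^(1/3)))
# converted to the lab frame for a 14N beam on a fixed 12C target.

E2 = 1.44        # e^2 in MeV*fm
R0 = 1.4         # radius parameter in fm (a common textbook choice)

def coulomb_barrier_cm(z1, a1, z2, a2):
    """Barrier height in the center-of-mass frame, in MeV."""
    radius = R0 * (a1 ** (1 / 3) + a2 ** (1 / 3))
    return z1 * z2 * E2 / radius

def coulomb_barrier_lab(z1, a1, z2, a2):
    """Lab-frame barrier for a beam (1) on a fixed target (2)."""
    return coulomb_barrier_cm(z1, a1, z2, a2) * (a1 + a2) / a2

v_cm = coulomb_barrier_cm(7, 14, 6, 12)    # roughly 9 MeV in the c.m. frame
v_lab = coulomb_barrier_lab(7, 14, 6, 12)  # roughly 20 MeV in the lab frame
```

The lab-frame estimate of about 20 MeV sits inside the 17.5-24.5 MeV range quoted above, consistent with the statement that the measurements were taken near the Coulomb barrier.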

Keywords: deuteron transfer, elastic scattering, optical model, double folding, density distribution

Procedia PDF Downloads 327
8055 Identification and Selection of a Supply Chain Target Process for Re-Design

Authors: Jaime A. Palma-Mendoza

Abstract:

A supply chain consists of different processes, and when conducting supply chain re-design it is necessary to identify the relevant processes and select a target for re-design. A solution was developed which consists of first identifying the relevant processes using the Supply Chain Operations Reference (SCOR) model, and then using the Analytic Hierarchy Process (AHP) for target process selection. An application was conducted in an airline MRO supply chain re-design project, which shows that this combination can clearly aid the identification of relevant supply chain processes and the selection of a target process for re-design.
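The AHP selection step derives priority weights from a pairwise comparison matrix over the candidate processes identified with SCOR. A minimal sketch using the principal-eigenvector method (the three candidate processes and the comparison judgments are invented for illustration):

```python
# AHP priority weights from a pairwise comparison matrix via the
# principal eigenvector (power iteration). Entry a[i][j] states how
# much more important process i is than process j on Saaty's 1-9 scale.
import numpy as np

def ahp_weights(matrix, iterations=100):
    """Normalized principal eigenvector of a pairwise comparison matrix."""
    a = np.asarray(matrix, dtype=float)
    w = np.ones(a.shape[0]) / a.shape[0]
    for _ in range(iterations):
        w = a @ w
        w = w / w.sum()
    return w

# Hypothetical SCOR-level candidates: Source, Make, Deliver.
comparisons = [
    [1.0, 3.0, 5.0],    # Source vs (Source, Make, Deliver)
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(comparisons)
target = ["Source", "Make", "Deliver"][int(np.argmax(weights))]
```

A full AHP application would also compute the consistency ratio of the matrix before trusting the resulting ranking.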

Keywords: decision support systems, multiple criteria analysis, supply chain management

Procedia PDF Downloads 492
8054 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. An initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while on the Juliet Test Suite the model predicted specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
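Path-contexts of the kind Code2Vec consumes, (leaf, AST path, leaf) triples, can be sketched with Python's own ast module. This is a simplified stand-in for the Java/C++ extractors the study actually targets; a real extractor would also hash the paths and cap their length and width:

```python
# Extract simplified Code2Vec-style path-contexts from a Python snippet:
# for every pair of leaf tokens, record the chain of AST node types on
# the path between them through their lowest common ancestor.
import ast
from itertools import combinations

def leaf_paths(source):
    tree = ast.parse(source)
    parents = {}                       # child node -> parent node
    for node in ast.walk(tree):
        for child in ast.iter_child_nodes(node):
            parents[child] = node
    leaves = [n for n in ast.walk(tree)
              if isinstance(n, (ast.Name, ast.Constant))]

    def chain(n):                      # node and all its ancestors
        path = [n]
        while n in parents:
            n = parents[n]
            path.append(n)
        return path

    contexts = []
    for a, b in combinations(leaves, 2):
        ca, cb = chain(a), chain(b)
        shared = set(map(id, ca)) & set(map(id, cb))
        up = []                        # a up to (and including) the LCA
        for n in ca:
            up.append(type(n).__name__)
            if id(n) in shared:
                break
        down = []                      # b up to (excluding) the LCA
        for n in cb:
            if id(n) in shared:
                break
            down.append(type(n).__name__)
        label = lambda n: n.id if isinstance(n, ast.Name) else repr(n.value)
        contexts.append((label(a), up + down[::-1], label(b)))
    return contexts

paths = leaf_paths("x = y + 1")
```

For `x = y + 1` this yields three contexts, one per leaf pair, each carrying the node-type path through the `Assign` node; it is these bags of contexts that a Code2Vec-style model embeds and attends over.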

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 107
8053 Nanomaterials Based Biosensing Chip for Non-Invasive Detection of Oral Cancer

Authors: Suveen Kumar

Abstract:

Oral cancer (OC) is the sixth deadliest cancer in the world and includes tumours of the lips, floor of the mouth, tongue, palate, cheeks, sinuses, throat, etc. Conventionally, the techniques used for OC detection are toluidine blue staining, biopsy, liquid-based cytology, visual attachments, etc.; however, these are limited by their highly invasive nature, low sensitivity, time consumption, sophisticated instrument handling, sample processing, and high cost. Therefore, we developed biosensing chips for non-invasive detection of OC via the CYFRA-21-1 biomarker. CYFRA-21-1 (molecular weight: 40 kDa) is secreted in the saliva of OC patients, a non-invasively collected biological fluid, with a cut-off value of 3.8 ng mL-1, above which the subject is considered to have oral cancer. In the first work, 3-aminopropyl triethoxysilane (APTES) functionalized zirconia (ZrO2) nanoparticles (APTES/nZrO2) were used to successfully detect CYFRA-21-1 in a linear detection range (LDR) of 2-16 ng mL-1 with a sensitivity of 2.2 µA mL ng-1. Subsequently, APTES/nZrO2-RGO was employed to prevent agglomeration of ZrO2 by providing a high-surface-area reduced graphene oxide (RGO) support, and a much wider LDR (2-22 ng mL-1) was obtained with a remarkable limit of detection (LOD) of 0.12 ng mL-1. Further, an APTES/nY2O3/ITO platform was used for oral cancer biosensor development. The developed biosensor (BSA/anti-CYFRA-21-1/APTES/nY2O3/ITO) has a wider LDR (0.01-50 ng mL-1) with a remarkable LOD of 0.01 ng mL-1. To improve the sensitivity of the biosensing platform, a biosensor based on a nanocomposite of yttria-stabilized nanostructured zirconia-reduced graphene oxide (nYZR) was developed. This biosensing chip is able to detect CYFRA-21-1 biomolecules in the range of 0.01-50 ng mL-1, with an LOD of 7.2 pg mL-1 and a sensitivity of 200 µA mL ng-1.
Further, the applicability of the fabricated biosensing chips was also checked through real sample (saliva) analysis of OC patients, and the obtained results showed good correlation with the standard enzyme-linked immunosorbent assay (ELISA) protein detection technique.
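The sensitivity and LOD figures above derive from a linear calibration of sensor current against analyte concentration; the LOD is conventionally taken as 3.3·σ/S, where σ is the standard deviation of blank readings and S the calibration slope. A sketch with synthetic calibration data (the numbers are illustrative and only echo the chip's 200 µA mL ng-1 order of magnitude, not its actual readings):

```python
# Sensitivity as the slope of a linear current-vs-concentration
# calibration, and limit of detection as LOD = 3.3 * sigma_blank / slope.
import statistics

def calibrate(concentrations, currents):
    """Least-squares slope (sensitivity) and intercept of the calibration."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(currents) / n
    sxx = sum((x - mx) ** 2 for x in concentrations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concentrations, currents))
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(blank_readings, slope):
    """LOD = 3.3 * standard deviation of blank readings / sensitivity."""
    return 3.3 * statistics.stdev(blank_readings) / slope

# Synthetic calibration: current rises 200 uA per ng/mL over the LDR.
conc = [0.01, 0.1, 1.0, 10.0, 50.0]          # ng/mL
current = [5 + 200 * c for c in conc]        # uA, perfectly linear here
slope, intercept = calibrate(conc, current)  # sensitivity: 200 uA mL/ng
lod = limit_of_detection([5.0, 5.1, 4.9, 5.05], slope)
```

With real electrode data the calibration points carry noise, so the slope and blank spread, and hence the reported LOD, come with confidence intervals.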

Keywords: non-invasive, oral cancer, nanomaterials, biosensor, biochip

Procedia PDF Downloads 127