Search results for: granular computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1198

688 The Chemical Transport Mechanism of Emitter Micro-Particles in Tungsten Electrode: A Metallurgical Study

Authors: G. Singh, H. Schuster, U. Füssel

Abstract:

The stability of the electric arc and the durability of the electrode tip used in Tungsten Inert Gas (TIG) welding call for a metallurgical study of the chemical transport mechanism of emitter oxide particles in the tungsten electrode under real welding conditions. Tungsten electrodes doped with emitter oxides such as La₂O₃, ThO₂, Y₂O₃, CeO₂, and ZrO₂ exhibit a lower work function than pure tungsten and thus superior emission characteristics, owing to the lower surface temperature of the cathode. Local changes in the concentration of these emitter particles in the tungsten electrode caused by high-temperature diffusion (chemical transport) can alter its functional properties, such as electrode temperature, work function, electron emission, and the stability of the electrode tip shape. The resulting increase in tip surface temperature leads to electrode material loss. It was also observed that tungsten recrystallizes into large grains at high temperature. When the grain boundaries are granular in shape, intergranular diffusion of the oxide emitter particles takes longer to reach the electrode surface. In the experimental work, the microstructure of the used electrode's tip surface will be studied by scanning electron microscopy and reflective X-ray techniques in order to gauge the extent of diffusion and chemical reaction of the emitter particles. In addition, a simulation model is proposed to explain the effect of oxide particle diffusion on the electrode's microstructure, electron emission characteristics, and tip erosion. This model suggests metallurgical modifications to the tungsten electrode to enhance its erosion resistance.
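
The temperature-dependent diffusion invoked above is conventionally described by an Arrhenius law; the following minimal sketch illustrates that relationship with hypothetical constants (the values of D0 and Q below are illustrative assumptions, not figures from the study):

```python
import math

def diffusion_coefficient(d0_m2_s, q_j_mol, temp_k):
    """Arrhenius temperature dependence: D = D0 * exp(-Q / (R * T))."""
    R = 8.314  # universal gas constant, J/(mol K)
    return d0_m2_s * math.exp(-q_j_mol / (R * temp_k))

# Hypothetical values for an emitter oxide diffusing in tungsten
D0 = 1.0e-6   # pre-exponential factor, m^2/s (assumed)
Q = 400e3     # activation energy, J/mol (assumed)

for T in (2000.0, 2500.0, 3000.0):  # representative electrode-tip temperatures, K
    print(f"T = {T:.0f} K -> D = {diffusion_coefficient(D0, Q, T):.3e} m^2/s")
```

The steeply rising D with temperature is what links tip overheating to accelerated emitter depletion in the mechanism described above.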

Keywords: rare-earth emitter particles, temperature-dependent diffusion, TIG welding, Tungsten electrode

Procedia PDF Downloads 186
687 Primary Melanocytic Tumors of the Central Nervous System: A Clinico-Pathological Study of Seven Cases

Authors: Sushila Jaiswal, Awadhesh Kumar Jaiswal

Abstract:

Background: Primary melanocytic tumors of the central nervous system (CNS) are uncommon lesions that arise from melanocytes located within the leptomeninges. Aim and objective: The aim of the study was to evaluate the clinical details and histomorphology of primary melanocytic tumors of the CNS. Method: The study was performed as a retrospective review of the case records of primary melanocytic tumors of the CNS diagnosed in our department. The formalin-fixed, paraffin-embedded tissue blocks and tissue sections were retrieved and reviewed. Results: Seven cases (6 males, 1 female; age range 16-40 years; mean age 27 years) of primary melanocytic tumors of the CNS were retrieved over the last seven years. The tumors were intracranial (n=5; frontal, 1 case; parietal, 1 case; cerebello-pontine angle, 1 case; occipital, 1 case; foramen magnum, 1 case) and intraspinal (n=2; cervical, 2 cases). All patients presented with neurological deficits related to the location of the tumor. Four cases were malignant melanoma, two were melanocytoma of intermediate grade, and the remaining one was melanocytoma. On histopathology, both melanocytoma and melanoma displayed sheets of well-differentiated melanocytes having round to oval nuclei with finely dispersed chromatin, occasional single eosinophilic nucleoli, and a moderate amount of cytoplasm with abundant granular melanin pigment. Absence of mitoses and macronucleoli was noted in melanocytoma, while melanoma showed frequent mitoses and macronucleoli. On immunohistochemistry, both showed diffuse, strong HMB45 and S-100 immunopositivity. Conclusion: Primary melanocytic tumors of the CNS are rare and predominantly seen in males. It is important to differentiate melanoma from melanocytoma, as the prognosis of the latter is good.

Keywords: melanocytoma, melanoma, brain tumor, melanin

Procedia PDF Downloads 233
686 Developing a Framework for Open Source Software Adoption in a Higher Education Institution in Uganda: A Case of Kyambogo University

Authors: Kafeero Frank

Abstract:

This study aimed at developing a framework for open source software adoption in an institution of higher learning in Uganda, with Kyambogo University as the study area. There were four research questions, based on individual staff interaction with open source software forums, perceived FOSS characteristics, organizational characteristics, and external characteristics as factors that affect open source software adoption. The researcher used a causal-correlational research design to study the effects of these variables on open source software adoption. A quantitative approach was used, with a self-administered questionnaire given to a purposively and randomly sampled set of university ICT staff. The resulting data were analyzed using means, correlation coefficients, and multivariate multiple regression analysis as statistical tools. The study reveals that individual staff interaction with the open source software forum and perceived FOSS characteristics were the primary factors that significantly affect FOSS adoption, while organizational and external factors were secondary, with no significant effect but a significant correlation to open source software adoption. It was concluded that for effective open source software adoption to occur, there must be more effort on the primary factors, with subsequent reinforcement of the secondary factors. Lastly, recommendations in line with these conclusions were made for developing a Kyambogo University framework for open source software adoption in institutions of higher learning. Recommended areas of further research include: stakeholders' analysis of open source software adoption in Uganda, its challenges and the way forward; evaluation of the Kyambogo University framework for open source software adoption in institutions of higher learning; framework development for cloud computing adoption in Ugandan universities; and a framework for FOSS development in the Ugandan IT industry.

Keywords: open source software, organisational characteristics, external characteristics, cloud computing adoption

Procedia PDF Downloads 72
685 A Survey on Constraint Solving Approaches Using Parallel Architectures

Authors: Nebras Gharbi, Itebeddine Ghorbel

Abstract:

In recent years, with the advances of the multicore computing world, the constraint programming community has tried to benefit from the capacity of new machines and make the best use of them through several parallel schemes for constraint solving. In this paper, we propose a survey of the approaches proposed to solve Constraint Satisfaction Problems using parallel architectures. These approaches exploit a parallel architecture in different ways: the same problem may be attacked concurrently by several differently configured solvers (a portfolio), or the search space may be split across solvers.
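
The search-space-splitting scheme mentioned above can be illustrated with a toy example. The sketch below splits the n-queens problem on the domain of the first variable (an illustration only, not code from any surveyed solver); each subproblem is independent, so the plain `map` could be replaced by a worker pool such as `multiprocessing.Pool.map`:

```python
def count_completions(n, prefix):
    """Count n-queens solutions that extend a given prefix of column choices."""
    def safe(cols, col):
        row = len(cols)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(cols))

    def backtrack(cols):
        if len(cols) == n:
            return 1
        return sum(backtrack(cols + [c]) for c in range(n) if safe(cols, c))

    for i, c in enumerate(prefix):       # reject an inconsistent prefix early
        if not safe(prefix[:i], c):
            return 0
    return backtrack(list(prefix))

def split_and_solve(n):
    """One independent subproblem per value of the first variable."""
    subproblems = [[c] for c in range(n)]
    return sum(map(lambda p: count_completions(n, p), subproblems))

print(split_and_solve(6))  # 6-queens has 4 solutions
```

The portfolio scheme, by contrast, would run several differently configured solvers on the *whole* problem and take the first answer.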

Keywords: constraint programming, parallel programming, constraint satisfaction problem, speed-up

Procedia PDF Downloads 319
684 Discerning Divergent Nodes in Social Networks

Authors: Mehran Asadi, Afrand Agah

Abstract:

In data mining, partitioning is used as a fundamental tool for classification: by recursively partitioning a data set we probe its structure, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report; the data were obtained from the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks, addresses overfitting, and describes different approaches for the evaluation and performance comparison of classification methods. In classification, the main objective is to categorize items and assign them to groups based on their properties and similarities. Estimating densities is hard, especially in high dimensions with limited data; we do not know the true densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. The correlation coefficients for the predictor variables show that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees, with k-fold cross-validation to prune the tree. A decision tree is a non-parametric classification method that uses a set of rules to predict that each observation belongs to the most commonly occurring class label of the training data. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
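
The workflow described, a decision tree pruned via k-fold cross-validation, was carried out in R; an equivalent sketch in Python with scikit-learn, using synthetic data in place of the social-network node features, might look like this:

```python
# Sketch only: synthetic features stand in for the social-network dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Cost-complexity pruning path: candidate alpha values for pruning the tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)

# Pick the alpha that maximizes k-fold cross-validated accuracy (k = 5)
best_alpha, best_score = 0.0, -1.0
for alpha in path.ccp_alphas[:-1]:  # the last alpha prunes down to a single node
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    score = cross_val_score(tree, X, y, cv=5).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score

pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=best_alpha).fit(X, y)
print(f"best alpha = {best_alpha:.4f}, CV accuracy = {best_score:.3f}")
```

Larger alpha values prune more aggressively, which is how cross-validation narrows the tree down to the most important variables.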

Keywords: online social networks, data mining, social cloud computing, interaction and collaboration

Procedia PDF Downloads 157
683 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid-flow features as commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates, ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds Stress Model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons to other databases. The results showed that an RSM represents an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisation of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
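
For intuition on the scalability aspect, a back-of-the-envelope Amdahl's-law estimate shows why even a tiny serial fraction matters at 32768 cores (the serial fraction below is an assumed figure for illustration, not one reported by the study):

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: upper bound on speedup with a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Illustrative only: a 0.01% serial fraction already caps speedup at scale
f = 1e-4
for p in (1024, 8192, 32768):
    s = amdahl_speedup(f, p)
    print(f"{p:6d} cores: speedup {s:8.1f}, parallel efficiency {s / p:.2%}")
```

This is why partitioner quality (Hilbert, METIS, ParMETIS, SCOTCH) matters: poor load balance acts like an enlarged serial fraction at high core counts.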

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 382
682 Reuse of Wastewater After Pretreatment Under Terrill and Sand in Bechar City

Authors: Sara Seddiki, Maazouzi Abdelhak

Abstract:

The main objective of this work is to follow the physicochemical and bacteriological evolution of the wastewater from the town of Bechar subjected to purification by filtration through various local media, namely sand and Terrill, thereby reducing the nuisances undergone by the receiving environment (Oued Bechar) and making this water source reusable in different areas. The study first characterized the urban wastewater of the Bechar wadi, which presents an environmental threat, allowing an estimation of the pollutant load: the chemical oxygen demand COD (145 mg/L) and the biological oxygen demand BOD5 (72 mg/L) revealed that these waters are less biodegradable (COD/BOD5 ratio = 0.62), have a fairly high conductivity (2.76 mS/cm) and high levels of mineral matter represented by chlorides and sulphates (390 and 596.1 mg/L, respectively), with a pH of 8.1. The characterization of the dune sand (Beni Abbes) shows that quartz (97%) is the most abundant mineral. The granulometric analysis allowed us to determine parameters such as the uniformity coefficient (CU) and the equivalent diameter, and scanning electron microscope (SEM) observations and X-ray analysis were performed. The study of the filtered wastewater shows satisfactory and very encouraging treatment results, with complete elimination of total coliforms and streptococci and a good reduction of total aerobic germs in the sand and clay-sand filters. A good yield was reported for the sand-Terrill filter in the reduction of turbidity. The recorded reduction rates of organic matter, in terms of biological and chemical oxygen demand, are on the order of 60%. The elimination of sulphates is 40% for the sand filter.

Keywords: urban wastewater, filtration, bacteriological and physicochemical parameters, sand, Terrill, Oued Bechar

Procedia PDF Downloads 95
681 Landslide Susceptibility Mapping Using Soft Computing in Amhara Saint

Authors: Semachew M. Kassa, Africa M. Geremew, Tezera F. Azmatch, Nandyala Darga Kumar

Abstract:

Landslides can seriously harm both the environment and society, and methods such as the frequency ratio (FR) and the analytical hierarchy process (AHP) have been developed from past landslide failure points to produce landslide susceptibility maps. However, it is still difficult to select the most efficient method and correctly identify the main driving factors for a particular region. In this study, we used fourteen landslide conditioning factors (LCFs) and five soft computing algorithms, namely Random Forest (RF), Support Vector Machine (SVM), Logistic Regression (LR), Artificial Neural Network (ANN), and Naïve Bayes (NB), to predict landslide susceptibility at 12.5 m spatial resolution. The performance of the RF (F1-score: 0.88, AUC: 0.94), ANN (F1-score: 0.85, AUC: 0.92), and SVM (F1-score: 0.82, AUC: 0.86) methods was significantly better than that of the LR (F1-score: 0.75, AUC: 0.76) and NB (F1-score: 0.73, AUC: 0.75) methods, according to the classification results based on inventory landslide points. The findings also showed that around 35% of the study region consists of areas with high or very high landslide risk (susceptibility greater than 0.5). The very high-risk locations were primarily found in the western and southeastern regions, and all five models showed good agreement and similar geographic distribution patterns of landslide susceptibility. The areas with the highest landslide risk include the western and northern parts of Amhara Saint Town and the St. Gebreal Church villages, with mean susceptibility values greater than 0.5. Rainfall, distance to road, and slope were typically among the leading factors for most villages, although the primary contributing factors to landslide vulnerability varied slightly across the five models. Decision-makers and policy planners can use the information from our study to make informed decisions and establish policies. It also suggests that different places should take different safeguards to reduce or prevent serious damage from landslide events.
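
As an illustration of the model comparison described above, the following scikit-learn sketch fits the five classifier families and reports F1 and AUC on synthetic data (the study's actual conditioning factors and scores are not reproduced here):

```python
# Illustrative comparison only: 14 synthetic features stand in for the LCFs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=14, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "RF": RandomForestClassifier(random_state=1),
    "SVM": SVC(probability=True, random_state=1),
    "LR": LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(max_iter=2000, random_state=1),
    "NB": GaussianNB(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    prob = model.predict_proba(X_te)[:, 1]   # susceptibility-like probability
    print(f"{name:3s}  F1 = {f1_score(y_te, pred):.2f}  "
          f"AUC = {roc_auc_score(y_te, prob):.2f}")
```

Thresholding the predicted probability at 0.5 is the analogue of the "susceptibility greater than 0.5" cut used to delimit the high-risk zones.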

Keywords: artificial neural network, logistic regression, landslide susceptibility, naïve Bayes, random forest, support vector machine

Procedia PDF Downloads 82
680 Synthesis of Highly Porous Cyclowollastonite Bioactive Ceramic

Authors: Mehieddine Bouatrous

Abstract:

Bioactive ceramic materials have been applied in the biomedical field as bulk, granular, or coating materials for more than half a century. More recently, bone tissue engineering scaffolds made of highly porous bioactive ceramic, glass-ceramic, and composite materials have also been created; such structures combine a high bioactivity rate, an open pore network, and mechanical characteristics approaching those of cortical bone. Cyclowollastonite frameworks have likewise been suggested for use as a graft material. In this study, various amounts of polymethyl methacrylate (PMMA) powder were successfully used as a porogenic agent to synthesize a highly interconnected, nanostructured porous cyclowollastonite with a large specific surface area, whose morphology and porosity were investigated. The porous cyclowollastonite bioactive ceramics were synthesized with a cost-effective and eco-friendly wet chemical method. To assess bioactivity, sintered dense cyclowollastonite discs were submerged in simulated body fluid (SBF) for various periods of time (1-4 weeks), resulting in the formation of a dense and consistent layer of hydroxyapatite on the surface of the ceramics. The results demonstrate that even after soaking for only several days, the surface of the cyclowollastonite ceramic can generate a dense and consistent hydroxyapatite layer, indicating good in vitro bioactivity attributable to the highly interconnected porous structure and open macropores; the material is therefore a promising candidate for bone tissue engineering scaffolds.

Keywords: porous, bioactive, biomaterials, SBF, cyclowollastonite, biodegradability

Procedia PDF Downloads 77
679 Physicochemical and Thermal Characterization of Starch from Three Different Plantain Cultivars in Puerto Rico

Authors: Carmen E. Pérez-Donado, Fernando Pérez-Muñoz, Rosa N. Chávez-Jáuregui

Abstract:

Plantain contains starch as its major component and represents a relevant source of this carbohydrate. Starches from different cultivars of plantain and banana have been studied for industrialization purposes because of their morphological and thermal characteristics and their influence on food products. This study aimed to characterize the physical, chemical, and thermal properties of starch from three plantain cultivars grown in Puerto Rico: Maricongo, Maiden, and FHIA 20. Amylose and amylopectin content, color, granule size, morphology, and thermal properties were determined. FHIA 20 starch presented the lowest amylose content of the three cultivars studied. In terms of color, Maiden and FHIA 20 starches exhibited a significantly higher whiteness index than Maricongo starch. The starches of the three cultivars had an elongated-ovoid morphology with a smooth surface and a non-porous appearance. Despite these similarities in morphology, FHIA 20 showed a lower aspect ratio, meaning that its granules tended to be more elongated. Comparing the thermal properties of the starches, the initial gelatinization temperatures of the three cultivars were similar. However, while the final gelatinization temperatures of the Maricongo (79.69°C) and Maiden (77.40°C) starches were similar, FHIA 20 starch presented a noticeably higher final gelatinization temperature (87.95°C) and transition enthalpy. Despite coming from the same source species, starches from the plantain cultivars showed differences in composition and thermal behavior, which represents an opportunity to diversify their use in food-related applications.

Keywords: aspect ratio, morphology, Musa spp., starch, thermal properties

Procedia PDF Downloads 265
678 Potential Risks of Using Disconnected Composite Foundation Systems in Active Seismic Zones

Authors: Mohamed ElMasry, Ahmad Ragheb, Tareq AbdelAziz, Mohamed Ghazy

Abstract:

Choosing a suitable infrastructure system is becoming more challenging with the increasing demand for heavier structures. Piled raft foundations have been widely used around the world to support heavy structures without excessive settlement. In this system, the piles are rigidly connected to the raft, and most of the load goes to the soil layer on which the piles bear. However, when soil profiles contain thick soft clay layers near the surface, or at relatively shallow depths, the rigid piled raft foundation system is unfavorable. Consequently, the disconnected piled raft system was introduced as an alternative to the rigidly connected system. In this system, the piles are disconnected from the raft by a cushion of soil, mostly a granular interlayer. The cushion redistributes the stresses between the piles and the subsoil. The piles are also used to stiffen the subsoil and thereby reduce settlement without being rigidly connected to the raft. However, the effect of seismic loading on such disconnected foundation systems remains a problem, since the soil profiles may include thick clay layers, which raise the risk of amplification of dynamic earthquake loads. In this paper, the seismic behavior of the connected and disconnected piled raft systems is studied through a numerical model built in the Midas GTS NX software. The study concerns the soil-structure interaction and the expected behavior of the two systems. The advantages and disadvantages of each foundation approach are studied, and the results are compared to show the effects of using disconnected piled raft systems in highly seismic zones, by quantifying the excitation amplification in each of the foundation systems.

Keywords: soil-structure interaction, disconnected piled-raft, risks, seismic zones

Procedia PDF Downloads 265
677 Identification of Clay Minerals for Determining Reservoir Maturity Levels Based on Petrographic Analysis, X-Ray Diffraction, and Porosity Tests on the Penosogan Formation, Karangsambung Sub-District, Kebumen Regency, Central Java

Authors: Ayu Dwi Hardiyanti, Bernardus Anggit Winahyu, I. Gusti Agung Ayu Sugita Sari, Lestari Sutra Simamora, I. Wayan Warmada

Abstract:

The Penosogan Formation sandstone, of Middle Miocene age, has been identified as a potential reservoir based on samples from sandstone outcrops in Kebakalan and Kedawung villages, Karangsambung sub-district, Kebumen Regency, Central Java. This research employs the following analytical methods: petrography, X-ray diffraction (XRD), and porosity testing. Based on the presence of micritic sandstone, muddy micrite, and muddy sandstone, the Penosogan Formation sandstone has a fine-to-coarse grain size and moderate-to-good sorting. The sandstone is composed mostly of plagioclase, skeletal grains, and traces of micrite. The proportion of clay minerals determined by petrographic analysis is 10%; these minerals appear to envelop the grains, which reduces the porosity of the rocks. The porosity types are as follows: interparticle, vuggy, channel, and shelter, with an equant form of cement. The diagenetic processes involve compaction, cementation, authigenic mineral growth, and dissolution due to feldspar alteration. The maturity of the reservoir can be assessed from the X-ray diffraction results, using ethylene glycol treatment of the clay mineral fraction to identify the smectite-illite transformation. Porosity testing showed that the Penosogan Formation sandstone has a porosity value of 22% according to the Koesoemadinata (1980) classification. This high maturity strongly influences the quality of the sandstone reservoirs of the Penosogan Formation.

Keywords: sandstone reservoir, Penosogan Formation, smectite, XRD

Procedia PDF Downloads 174
676 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston

Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando

Abstract:

The widespread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to rethink the concept of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially in the emerging economies, social innovation will greatly assist in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and improved quality of life are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and of the innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the Healthy City agenda and eHealth economy innovation based on the experience of the City of Boston, Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, innovation hubs, and the emerging technologies that drive innovation. The work is based on an empirical analysis of public-sector and industry-wide interviews and surveys about Boston's current initiatives and their enabling environment. The paper highlights several potential research directions for service integration and social innovation in deploying emerging technologies for the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy for building sustainable Smart Cities in order to avoid technology lock-in. Finally, it concludes that the Boston example of an innovation economy is unique in view of its existing platforms for innovation and a proper understanding of their dynamics, which is imperative in building smart and healthy cities where the quality of life of the citizenry can be improved.

Keywords: computing paradigm, emerging technologies, equitable healthcare, healthy cities, open data, smart city, social innovation

Procedia PDF Downloads 336
675 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State, Nigeria Using the Least Squares Method

Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David

Abstract:

The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling processes. Computing the deflection-of-the-vertical components of points in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection-of-the-vertical components provides improved results but is labor-intensive without an appropriate method. The least squares method makes use of redundant observations in modeling a set of problems that obeys certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for the Owerri West local government area of Imo State using the geometric method as the field technique. In this method, a combination of Global Positioning System observations in static mode and precise leveling was utilized: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. By least squares, using a MATLAB program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were also computed: 5.5911e-05 and 1.4965e-04 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model is untenable for high-quality work owing to its large observational error. It is thus important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
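
The least-squares step described above solves an overdetermined system of observation equations for the two deflection components; a minimal NumPy sketch with a hypothetical design matrix (the study used MATLAB and real GPS/levelling observation equations) is:

```python
import numpy as np

# Hypothetical design matrix A (one row per observation equation) and
# misclosure vector l; in the study these come from GPS/levelling data.
rng = np.random.default_rng(42)
true_xi, true_eta = -0.0286, -0.0001      # arc seconds (values from the text)
A = rng.standard_normal((12, 2))          # 12 observations, 2 unknowns
l = A @ np.array([true_xi, true_eta]) + rng.normal(0.0, 1e-4, 12)

# Normal-equation form of least squares: x = (A^T A)^-1 A^T l
N = A.T @ A
x_hat = np.linalg.solve(N, A.T @ l)

# A posteriori standard errors of the estimated components
v = A @ x_hat - l                         # residuals
sigma0_sq = (v @ v) / (len(l) - 2)        # reference variance (dof = n - u)
std_errors = np.sqrt(sigma0_sq * np.diag(np.linalg.inv(N)))

print("xi, eta (arcsec):", x_hat)
print("standard errors :", std_errors)
```

The redundant observations (12 equations for 2 unknowns) are what make the standard-error estimates possible, as the abstract emphasizes.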

Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height

Procedia PDF Downloads 209
674 Effects of Seed Culture and Attached Growth System on the Performance of Anammox Hybrid Reactor (AHR) Treating Nitrogenous Wastewater

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

The start-up of the anammox (anaerobic ammonium oxidation) process in a hybrid reactor delineated four distinct phases, i.e., cell lysis, lag phase, activity elevation, and stationary phase. The cell lysis phase was marked by the death and decay of heterotrophic denitrifiers, resulting in the breakdown of organic nitrogen into ammonium. The lag phase showed the initiation of anammox activity with the turnover of heterotrophic denitrifiers, evident from the appearance of NO3-N in the effluent. In the activity elevation phase, anammox became the dominant reaction, as seen in the consequent reduction of NH4-N into N2 with increased NO3-N in the effluent. Proper selection of the mixed seed culture at an influent NO2-/NH4+ ratio of 1:1 and a hydraulic retention time (HRT) of 1 day led to an early start-up of anammox within 70 days. Pseudo-steady-state removal efficiencies of NH4+ and NO2- were found to be 94.3% and 96.4%, respectively, at a nitrogen loading rate (NLR) of 0.35 kg N/m3·d and an HRT of 1 day. Analysis of the data indicated that the attached growth system contributes an additional 11% increase in ammonium removal and results in an average 29% reduction in the sludge washout rate. A nitrogen mass balance study indicated that 74.1% of the total input nitrogen is converted into N2 gas, with a further 11.2% utilized in biomass development. Scanning electron microscope (SEM) observation of the granular sludge clearly showed the presence of cocci- and rod-shaped microorganisms intermingled on the external surface of the granules. The average size of the anammox granules (1.2-1.5 mm), together with an average settling velocity of 45.6 m/h, indicated a high degree of granulation, resulting in the formation of well-compacted granules in the anammox process.

Keywords: anammox, hybrid reactor, startup, granulation, nitrogen removal, mixed seed culture

Procedia PDF Downloads 184
673 A Policy Strategy for Building Energy Data Management in India

Authors: Shravani Itkelwar, Deepak Tewari, Bhaskar Natarajan

Abstract:

Energy consumption data play a vital role in energy efficiency policy design, implementation, and impact assessment. The success of any demand-side energy management intervention relies on the availability of accurate, comprehensive, granular, and up-to-date data on energy consumption. The building sector, comprising residential and commercial buildings, is one of the largest consumers of energy in India after the industrial sector. With economic growth and increasing urbanization, the building sector is projected to grow at an unprecedented rate, resulting in a 5.6-fold escalation in energy consumption by 2047 relative to 2017. Energy efficiency interventions will therefore play a vital role in decoupling floor-area growth from the associated energy demand, increasing the need for robust data. In India, multiple institutions are involved in the collection and dissemination of such data. This paper focuses on energy consumption data management in the building sector in India for both the residential and commercial segments. It evaluates the robustness of the data available through administrative and survey routes for estimating key performance indicators, and it identifies critical data gaps that hinder informed decisions. The paper explores several issues in the data, such as a lack of comprehensiveness, the non-availability of disaggregated data, discrepancies between data sources, inconsistent building categorization, and others. The identified data gaps are justified with appropriate examples. Moreover, the paper prioritizes the required data in order of relevance to policymaking and groups it into "available", "easy to get", and "hard to get" categories. The paper concludes with recommendations to address the data gaps by leveraging digital initiatives, strengthening institutional capacity, institutionalizing exclusive building energy surveys, and standardizing building categorization, among other measures, to strengthen the management of building-sector energy consumption data.

Keywords: energy data, energy policy, energy efficiency, buildings

Procedia PDF Downloads 185
672 Cognitive Footprints: Analytical and Predictive Paradigm for Digital Learning

Authors: Marina Vicario, Amadeo Argüelles, Pilar Gómez, Carlos Hernández

Abstract:

In this paper, the Computer Research Network of the National Polytechnic Institute of Mexico proposes a paradigmatic model for the inference of cognitive patterns in digital learning systems. This model leads to a metadata architecture useful for analysis and prediction in online learning systems, especially on MOOC architectures. The model is in the design phase and is expected to be tested through an institutional course project being developed for the MOOC.

Keywords: cognitive footprints, learning analytics, predictive learning, digital learning, educational computing, educational informatics

Procedia PDF Downloads 477
671 An Intelligent Cloud Radio Access Network (RAN) Architecture for Future 5G Heterogeneous Wireless Network

Authors: Jin Xu

Abstract:

5G network developers need to satisfy the requirements for additional capacity arising from massive numbers of users and for spectrally efficient wireless technologies. The significant amount of underutilized spectrum in the network is therefore motivating operators to combine long-term evolution (LTE) with intelligent spectrum management technology. This new LTE with intelligent spectrum management in the unlicensed band (LTE-U) has the physical-layer topology to access spectrum, specifically the 5-GHz band. We propose a new intelligent cloud RAN for 5G.

Keywords: cloud radio access network, wireless network, cloud computing, multi-agent

Procedia PDF Downloads 424
670 Direct Approach in Modeling Particle Breakage Using Discrete Element Method

Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang

Abstract:

The current study aims to develop an in-house discrete element method (DEM) code linked to a direct breakage event, making it possible to determine particle breakage, and the resulting fragment size distribution, simultaneously with the DEM simulation. Breakage is applied directly inside the DEM computation algorithm: whenever breakage occurs, the original particle is replaced by its daughter fragments, and the calculation then proceeds on an updated particle list, closely mirroring a real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this setup provided a simplified environment to test the model: a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)-powder bed impacts in a ball mill and during particle bed impact tests. Mono-, binary, and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The DEM simulations showed a reduction in the specific breakage rate of coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the simulations: fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting breakage of the coarse component. Conversely, the specific breakage rate of fine particles increased due to contacts with coarse particles. This phenomenon, known as acceleration, was shown to be less significant, but it should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
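The replace-on-breakage step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the particle representation, the energy-threshold test, and the equal-mass fragmentation rule are all simplifying assumptions (a real breakage model would sample a fragment size distribution).

```python
def maybe_break(particles, energy_threshold=1.0, n_daughters=3):
    """Replace any particle whose impact energy exceeds the threshold
    with smaller daughter fragments, conserving total mass."""
    updated = []
    for p in particles:
        if p["impact_energy"] > energy_threshold:
            # Equal-mass fragmentation (illustrative assumption); a real
            # model would draw daughters from a fragment size distribution.
            for _ in range(n_daughters):
                updated.append({
                    "mass": p["mass"] / n_daughters,
                    "impact_energy": 0.0,  # fragments start unloaded
                })
        else:
            updated.append(p)
    return updated

bed = [{"mass": 6.0, "impact_energy": 2.5},  # exceeds threshold: breaks
       {"mass": 4.0, "impact_energy": 0.3}]  # survives intact
new_bed = maybe_break(bed)
print(len(new_bed))                      # 4 particles after breakage
print(sum(p["mass"] for p in new_bed))   # 10.0 (mass conserved)
```

The simulation then continues on `new_bed`, which is the "updated particle list" the abstract refers to.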

Keywords: particle bed, breakage models, breakage kinetics, discrete element method

Procedia PDF Downloads 199
669 Distributed Key Management with Fewer Transmitted Messages in the Rekeying Process to Secure IoT Wireless Sensor Networks in Smart-Agro

Authors: Safwan Mawlood Hussien

Abstract:

The Internet of Things (IoT) is a promising technology that has received considerable attention in fields such as health, industry, defence, and agriculture. Due to their limited computing, storage, and communication capacity, IoT objects are particularly vulnerable to attacks. Many solutions have been proposed to address these security issues, such as key management using symmetric-key ciphers. This study provides a scalable group key distribution and management scheme based on elliptic curve (EC) cryptography that requires fewer transmitted messages during rekeying. The method has been validated through simulations in OMNeT++.
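The Diffie–Hellman principle underlying such EC-based key exchange can be illustrated with a toy sketch over a prime field. Note the simplification: the paper's scheme uses elliptic-curve groups (ECDH), whereas this demo uses classic modular exponentiation with deliberately small, insecure parameters so the arithmetic is easy to follow.

```python
# Toy Diffie-Hellman key exchange (demo values only, NOT secure).
p = 0xFFFFFFFB  # public prime modulus (the largest 32-bit prime)
g = 5           # public generator

a_secret = 123456789  # node A's private key (never transmitted)
b_secret = 987654321  # node B's private key (never transmitted)

A_pub = pow(g, a_secret, p)  # transmitted A -> B
B_pub = pow(g, b_secret, p)  # transmitted B -> A

# Each node combines its own secret with the peer's public value;
# both arrive at the same shared secret with one message per node.
shared_a = pow(B_pub, a_secret, p)
shared_b = pow(A_pub, b_secret, p)
assert shared_a == shared_b  # g^(ab) mod p on both sides
```

Security rests on the discrete logarithm problem listed in the keywords: recovering `a_secret` from `A_pub` is infeasible for properly sized groups.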

Keywords: elliptic curves, Diffie–Hellman, discrete logarithm problem, secure key exchange, WSN security, IoT security, smart-agro

Procedia PDF Downloads 119
668 The On-Board Critical Message Transmission Design for Navigation Satellite Delay/Disruption Tolerant Network

Authors: Ji-yang Yu, Dan Huang, Guo-ping Feng, Xin Li, Lu-yuan Wang

Abstract:

The navigation satellite network, especially the BeiDou MEO constellation, can relay data effectively with wide coverage and is widely applied in navigation, detection, and positioning. However, the constellation has not yet been completed, and the number of satellites in orbit is not sufficient to cover the earth, so data relay can be disrupted or delayed during the transition process. The data-relay function must tolerate such delay or disruption to some extent, which makes the BeiDou MEO constellation a delay/disruption-tolerant network (DTN). Traditional DTN designs mainly employ a relay table as the basis of data-path schedule computation. In practical applications, however, especially in critical conditions such as wartime or when heavy losses are inflicted on the constellation, some nodes may become invalid, rendering the traditional DTN design useless. Furthermore, when transmitting a critical message in the navigation system, the maximum-priority strategy is used, but nodes still query the relay table to compute the path, which introduces delays of more than minutes. Under these circumstances, a function is needed that can compute the optimum data path on board, in real time, according to the constellation state. An on-board critical message transmission design for the navigation satellite delay/disruption-tolerant network is therefore proposed, based on the characteristics of the navigation satellite network. With real-time computation of the network link parameters, the least-delay transition path is deduced to retransmit the critical message in urgent conditions. First, the DTN model for the constellation is established based on a time-varying matrix (TVM) instead of a time-varying graph (TVG); then, the least-delay transition path is deduced from the parameters of the current node; finally, the critical message transits to the next best node.
With on-board real-time computing, the time delay and misjudgment of constellation states in ground stations are eliminated, and the residual information channel of each node can be used flexibly. Compared with the minutes-long delay of a traditional DTN, the proposed design transmits the critical message in seconds, improving retransmission efficiency. The hardware was implemented in an FPGA based on the proposed model, and tests prove its validity.
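The least-delay path computation over a snapshot of the time-varying delay matrix can be sketched as a standard shortest-path search. The matrix values and four-node layout below are hypothetical, chosen only to show how a slow direct link is bypassed via a relay node:

```python
import heapq

def least_delay_path(delay, src, dst):
    """Dijkstra over a snapshot of the time-varying delay matrix.
    delay[i][j] is the current link delay in seconds; None means no link."""
    n = len(delay)
    dist = [float("inf")] * n
    prev = [None] * n
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue  # stale heap entry
        for v in range(n):
            w = delay[u][v]
            if w is not None and d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    path, node = [], dst
    while node is not None:  # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    return list(reversed(path)), dist[dst]

# Hypothetical 4-node snapshot: direct link 0->3 is slow, relay via 1 wins.
D = [[None, 0.1, None, 1.0],
     [0.1, None, 0.2, 0.3],
     [None, 0.2, None, 0.2],
     [1.0, 0.3, 0.2, None]]
path, total = least_delay_path(D, 0, 3)
print(path, round(total, 2))  # [0, 1, 3] 0.4
```

In the proposed design this computation would run on board against the current TVM snapshot, so the path reflects the live constellation state rather than a stale relay table.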

Keywords: critical message, DTN, navigation satellite, on-board, real-time

Procedia PDF Downloads 343
667 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotics research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Just as humans are able to recognize emotions in other humans, machines can extract information from the various ways humans convey emotions, including facial expression, speech, gesture, or text, and use this information for improved human-computer interaction. This is the domain of affective computing, an interdisciplinary field that extends into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Embedding these emotional capabilities in humanoid robots is the foundation of the concept of affective robots, whose objective is to make robots capable of sensing a user's current mood and personality traits and of adapting their behavior accordingly in the most appropriate manner. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on facial expressions for the so-called basic emotions, and contrasted with other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experimental results show that detection accuracy differs substantially among the evaluated approaches. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained from experiments with real subjects expressing emotions as spontaneous reactions.

Keywords: affective computing, emotion recognition, humanoid robot, human-robot-interaction (HRI), social robots

Procedia PDF Downloads 235
666 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

There are many analytical methods to estimate the crack growth life of a component, and soft computing methods show an increasing trend in fatigue life prediction. Their ability to model complex relationships and to handle large amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable-amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models. The material considered in this work is 7050-T7451 aluminum, which is commonly used as a structural element in the aerospace industry, and a corner crack is considered as the crack type. A finite element model of the joint is built to calculate fastener loads and stresses on the structure. After the finite element results are validated against analytical calculations, they are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
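The random grid search mentioned above can be sketched in plain Python. This is an illustrative stand-in, not the study's pipeline: the hyperparameter names and the mock objective (which pretends error is minimized at 200 trees and depth 8) are assumptions for demonstration.

```python
import random

def random_grid_search(train_fn, grid, n_iter=10, seed=0):
    """Sample hyperparameter combinations at random from the grid and
    return the configuration with the lowest validation error."""
    rng = random.Random(seed)
    best_cfg, best_err = None, float("inf")
    for _ in range(n_iter):
        cfg = {k: rng.choice(v) for k, v in grid.items()}
        err = train_fn(cfg)  # train + validate one model
        if err < best_err:
            best_cfg, best_err = cfg, err
    return best_cfg, best_err

# Hypothetical stand-in for training a model and returning its error.
def mock_train(cfg):
    return abs(cfg["n_estimators"] - 200) / 100 + abs(cfg["max_depth"] - 8)

grid = {"n_estimators": [50, 100, 200, 400],
        "max_depth": [4, 8, 16]}
cfg, err = random_grid_search(mock_train, grid, n_iter=25)
print(cfg)
```

Unlike an exhaustive grid search, which trains one model per grid cell, the random variant caps the budget at `n_iter` trainings, which matters when each training run is expensive.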

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 175
665 Impact of Heat Moisture Treatment on the Yield of Resistant Starch and Evaluation of Functional Properties of Modified Mung Bean (Vigna radiata) Starch

Authors: Sreejani Barua, P. P. Srivastav

Abstract:

The formulation of new functional food products for diabetic and obese people remains a challenge for food industries. Starch is a naturally occurring, ecological, inexpensive, and abundantly available polysaccharide in plant material. There is currently great interest in modifying the functional properties of starch without destroying its granular structure, using different modification techniques. Resistant starch (RS) contains almost zero calories and can control the blood glucose level, helping to prevent diabetes. The current study focused on the modification of mung bean starch, a good source of legume carbohydrate, for the production of functional food. Heat moisture treatment (HMT) of mung bean starch was conducted at moisture contents of 10-30%, temperatures of 80-120 °C, and treatment times of 8-24 h. The resistant starch content after modification was significantly increased relative to the native starch, which contains 7.6% RS. The HMT design combinations were generated through a Central Composite Rotatable Design (CCRD), and the effects of the HMT process variables on the yield of resistant starch were studied through Response Surface Methodology (RSM). The highest increase in resistant starch, up to 34.39%, was found when the native starch was treated at 30% moisture content and 120 °C for 24 h. The functional properties of both native and modified mung bean starches showed a reduction in the swelling power and swelling volume of the HMT starches. However, the solubility of the HMT starches was higher than that of the untreated native starch, and changes were also observed in structural (scanning electron microscopy), X-ray diffraction (XRD), blue value, and thermal (differential scanning calorimetry) properties. Therefore, replacing native mung bean starch with heat-moisture-treated mung bean starch leads to the development of new products with higher resistant starch levels and improved functional properties.
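The run layout of a central composite rotatable design for three factors (here moisture, temperature, and time, as in the study) can be sketched in coded units; the number of center-point replicates below is an assumption for illustration:

```python
from itertools import product

def ccrd_points(k=3, n_center=6):
    """Coded design points of a central composite rotatable design:
    2**k factorial corners, 2k axial (star) points at alpha = 2**(k/4),
    plus replicated center points."""
    alpha = 2 ** (k / 4)  # rotatability condition for the axial distance
    corners = [list(pt) for pt in product([-1, 1], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = ccrd_points(k=3, n_center=6)
print(len(design))          # 8 + 6 + 6 = 20 runs
print(round(2 ** 0.75, 3))  # axial distance alpha = 1.682
```

Each coded point is then mapped linearly onto the actual factor ranges (e.g. -1 to +1 spans 10-30% moisture) before the RSM model is fitted to the measured RS yields.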

Keywords: mung bean starch, heat moisture treatment, functional properties, resistant starch

Procedia PDF Downloads 202
664 Design of Low-Cost Water Purification System Using Activated Carbon

Authors: Nayan Kishore Giri, Ramakar Jha

Abstract:

Water is essential to all life on Earth. India's surface water flows through fourteen major river systems, and Indian rivers are the country's main source of potable water. In the eastern part of India, many toxic metals are discharged into the rivers from mining industries, causing deadly diseases in humans. Potable water quality is therefore a significant and vital concern, related to the present and future health of the population. Awareness of the health risks linked with unsafe water is still very low in many rural and urban areas in India, and only about 7% of the Indian population uses a water purifier. This unhealthy situation exists not only in India but also in many other developing countries, largely because of the high cost of water purifiers. The current study is geared towards the development of an economical and efficient technology for the removal of as many toxic metals and pathogenic bacteria as possible. The work involves the design of a portable purification system and of the purifying material. In this design, coconut shell granular activated carbon (GAC) and polypropylene filter cloths were used. The activated carbon was impregnated with iron (Fe), which enhances its adsorption capacity. The iron-impregnated activated carbon (Fe-AC) was thoroughly characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), and BET surface area tests. Then, 10 ppm of each toxic metal was infiltrated through the designed purification system and analysed by atomic absorption spectroscopy (AAS). The results are very promising, and the system is low cost. This work will help many people in need of potable water, who can benefit from its affordability, and it could also be useful in industry and other domestic applications.

Keywords: potable water, coconut shell GAC, polypropylene filter cloths, SEM, XRD, BET, AAS

Procedia PDF Downloads 379
663 A Rational Intelligent Agent to Promote Metacognition in a Situation of Text Comprehension

Authors: Anass Hsissi, Hakim Allali, Abdelmajid Hajami

Abstract:

This article presents the results of doctoral research that aims to integrate the metacognitive dimension into the design of computer-based human learning environments (ILE). We conducted a detailed study of the relationship between metacognitive processes and learning, specifically their positive impact on learners' performance in reading comprehension. Our contribution is to implement support methods, using an intelligent agent based on the BDI paradigm, that provide intelligent and reliable assistance to poor readers, in order to encourage self-regulation and a conscious, rational use of their metacognitive abilities.

Keywords: metacognition, text comprehension, EIAH, self-regulation, BDI agent

Procedia PDF Downloads 321
662 Biogas Enhancement Using Iron Oxide Nanoparticles and Multi-Wall Carbon Nanotubes

Authors: John Justo Ambuchi, Zhaohan Zhang, Yujie Feng

Abstract:

The rapid development and use of nanotechnology have resulted in the massive use of various nanoparticles, such as iron oxide nanoparticles (IONPs) and multi-wall carbon nanotubes (MWCNTs). This study therefore investigated the role of IONPs and MWCNTs in enhancing bioenergy recovery. The results show that IONPs at a concentration of 750 mg/L and MWCNTs at a concentration of 1500 mg/L induced faster substrate utilization and biogas production rates than the control. IONPs exhibited higher chemical oxygen demand (COD) removal efficiency than MWCNTs, whereas the performance of MWCNTs in biogas generation was more remarkable than that of IONPs. Furthermore, scanning electron microscopy (SEM) revealed that extracellular polymeric substances (EPS) excreted by the anaerobic granular sludge (AGS) interacted with the nanoparticles. This interaction created a protective barrier around the microbial consortia, reducing the nanoparticles' cytotoxicity. Microbial community analyses revealed a predominance of bacteria belonging to Anaerolineaceae and Longilinea, whose role in biodegradation of the substrate could have been strongly boosted by the nanoparticles. The predominance of archaea of the genera Methanosaeta and Methanobacterium enhanced the methanation process. Bacteria of the genus Geobacter were also present and might have contributed significantly to direct interspecies electron transfer in the system. Exposure of AGS to the nanoparticles promoted direct interspecies electron transfer between the anaerobic fermenting bacteria and their counterpart methanogens during the anaerobic digestion process. These results provide useful insight into the response of microorganisms to IONPs and MWCNTs in the complex natural environment.

Keywords: anaerobic granular sludge, extracellular polymeric substances, iron oxide nanoparticles, multi-wall carbon nanotubes

Procedia PDF Downloads 293
661 Rheological Study of Natural Sediments: Application in Filling of Estuaries

Authors: S. Serhal, Y. Melinge, D. Rangeard, F. Hage Chehadeh

Abstract:

The filling of estuaries is an international problem that can cause economic and environmental damage. This work studies the rheological structuring mechanisms of natural sedimentary liquid-solid mixtures in estuaries in order to better understand their filling. The estuary of the Rance river, located in Brittany, France, is specifically targeted by the study. The aim is to characterize the rheological behavior of natural sediments by identifying the structural factors that influence the rheological parameters, so that the filling of estuarine areas can be better understood and sustainable 'cleansing' solutions considered. The sediments were collected from the Lyvet trap in the Rance estuary. This trap was created by the association COEUR (Comité Opérationnel des Elus et Usagers de la Rance) in 1996 to facilitate cleansing of the estuary: it creates a privileged area for sediment deposition and consequently makes cleansing easier. We began with a preliminary study to establish the trend of the rheological behavior of the suspensions and to identify the dormant phase that precedes the onset of their biochemical reactivity. We then highlighted the visco-plastic character of the suspensions at an early age using a Kinexus rheometer with plate-plate geometry. This rheological behavior is represented by the Bingham model, with a dynamic yield stress and a viscosity that can be functions of the solid volume fraction, granular extent, and chemical reactivity. The evolution of the viscosity as a function of the solid volume fraction is modeled by the Krieger-Dougherty model, while the analysis of the dynamic yield stress showed a clear functional link with the solid volume fraction.
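The two constitutive relations named above can be sketched numerically. The parameter values here (suspending-fluid viscosity, maximum packing fraction, intrinsic viscosity of 2.5 for hard spheres) are generic textbook assumptions, not fitted values for the Rance sediments:

```python
def krieger_dougherty(phi, eta_s=1.0e-3, phi_m=0.64, intrinsic=2.5):
    """Suspension viscosity vs. solid volume fraction phi (Krieger-Dougherty):
    eta = eta_s * (1 - phi/phi_m) ** (-intrinsic * phi_m).
    eta_s: suspending-fluid viscosity (Pa.s); phi_m: maximum packing
    fraction; intrinsic: intrinsic viscosity (2.5 for hard spheres)."""
    return eta_s * (1 - phi / phi_m) ** (-intrinsic * phi_m)

def bingham_stress(gamma_dot, tau0, eta_p):
    """Bingham model: shear stress = dynamic yield stress tau0 plus
    plastic viscosity eta_p times shear rate (flowing regime only)."""
    return tau0 + eta_p * gamma_dot

# Viscosity rises sharply as phi approaches the packing limit phi_m.
print(krieger_dougherty(0.1) < krieger_dougherty(0.4))  # True
```

Fitting `phi_m` and the intrinsic viscosity to rheometer data is what links the measured viscosity curves to the solid volume fraction of the sampled sediments.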

Keywords: estuaries, rheological behavior, sediments, Kinexus rheometer, Bingham model, viscosity, yield stress

Procedia PDF Downloads 160
660 Comparative Assessment of Geocell and Geogrid Reinforcement for Flexible Pavement: Numerical Parametric Study

Authors: Anjana R. Menon, Anjana Bhasi

Abstract:

The development of highways and railways plays a crucial role in a nation's economic growth. While rigid concrete pavements are durable and have high load-bearing capacity, growing economies mostly rely on flexible pavements, which are easier to construct and more economical. The strength of a flexible pavement depends on the strength of the subgrade and the load distribution characteristics of the intermediate granular layers. To meet economy and strength criteria simultaneously, it is therefore imperative to strengthen and stabilize the load-transferring layers, namely the subbase and base. Geosynthetic reinforcement in planar and cellular forms has proven effective in improving soil stiffness and providing a stable load transfer platform. Studies have shown the relative superiority of the cellular form (geocells) over planar geosynthetic forms like geogrids, owing to the additional confinement of the infill material and the pocket effect arising from vertical deformation. Hence, the present study investigates the efficiency of geocells relative to single- and multiple-layer geogrid reinforcement through a series of three-dimensional model analyses of a flexible pavement section under a standard repetitive wheel load. The stress transfer mechanism and deformation profiles under various reinforcement configurations are also studied. Geocell reinforcement is observed to take up a higher proportion of the stress caused by traffic loads than single- and double-layer geogrid reinforcement. The efficiency of a single geogrid reinforcement reduces with increasing embedment depth, and the contribution of the lower geogrid is insignificant in a double-geogrid system.

Keywords: geocell, geogrid, flexible pavement, repetitive wheel load, numerical analysis

Procedia PDF Downloads 75
659 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single, semi-analytic step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, various risk metrics can easily be calculated. The proposed method not only fills a niche in the literature, which to the best of our knowledge lacks accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate with examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors rather than by the number of obligors, as is the case for Monte Carlo simulation.
The limitation of this method lies in the "curse of dimension" intrinsic to multi-dimensional numerical integration, which can, however, be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The method has a wide range of potential applications: from credit derivatives pricing to economic capital calculation for the banking book, default risk charge and incremental risk charge computation for the trading book, and even other risk types beyond credit risk.
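A minimal sketch of the COS recovery step underlying the method, using the standard normal distribution as a test case with a known characteristic function; the truncation range and number of expansion terms are illustrative choices, and the portfolio-loss characteristic function of the actual factor-copula model would replace the Gaussian one:

```python
import cmath
import math

def cos_density(x, cf, a, b, N=64):
    """Recover a density from its characteristic function cf via the
    COS expansion of Fang & Oosterlee on the truncation range [a, b]:
    f(x) ~ sum_k' F_k * cos(k*pi*(x - a)/(b - a)), with the k = 0
    term weighted by 1/2 (the primed sum)."""
    total = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        Fk = (2.0 / (b - a)) * (cf(u) * cmath.exp(-1j * u * a)).real
        term = Fk * math.cos(u * (x - a))
        total += term / 2 if k == 0 else term
    return total

# Characteristic function of the standard normal distribution.
std_normal_cf = lambda u: cmath.exp(-0.5 * u * u)

approx = cos_density(0.0, std_normal_cf, a=-8.0, b=8.0, N=64)
print(round(approx, 4))  # 0.3989, i.e. 1/sqrt(2*pi)
```

The exponential error convergence the abstract refers to is visible here: a few dozen cosine terms already reproduce the density to near machine precision, whereas a Monte Carlo estimate of comparable accuracy would need an enormous number of samples.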

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 166