Search results for: block linear multistep methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18436

14356 A Reliable Multi-Type Vehicle Classification System

Authors: Ghada S. Moussa

Abstract:

Vehicle classification is an important task in traffic surveillance and intelligent transportation systems. Classification of vehicle images faces several problems, such as high intra-class vehicle variation, occlusion, shadow, and illumination changes. These problems, among others, must be considered to develop a reliable vehicle classification system. In this study, a reliable multi-type vehicle classification system based on the Bag-of-Words (BoW) paradigm is developed. The proposed system uses and compares four well-known classifiers, Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), k-Nearest Neighbour (KNN), and Decision Tree, to classify vehicles into four categories: motorcycles, small, medium, and large. Experiments on a large dataset show that the approach is efficient and reliable, classifying vehicles with an accuracy of 95.7%. The SVM outperforms the other classification algorithms in both accuracy and robustness, alongside a considerable reduction in execution time. The innovativeness of the developed system is that it can serve as a framework for many vehicle classification systems.
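As an illustrative sketch only (not the authors' implementation), the BoW-plus-classifier idea reduces to comparing visual-word histograms: each image becomes a histogram of visual-word counts, and a nearest-neighbour rule assigns the label of the closest training histogram. The SIFT feature extraction and the other three classifiers are omitted; all data below are toy values.

```python
from math import sqrt

def knn_predict(train, query, k=1):
    """Classify a BoW histogram by majority vote of its k nearest training histograms."""
    dists = sorted(
        (sqrt(sum((a - b) ** 2 for a, b in zip(hist, query))), label)
        for hist, label in train
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Toy visual-word histograms (4 "words" per image) with vehicle-size labels.
train = [
    ([9, 1, 0, 0], "motorcycle"),
    ([2, 7, 1, 0], "small"),
    ([1, 2, 7, 2], "medium"),
    ([0, 1, 3, 8], "large"),
]
print(knn_predict(train, [8, 2, 0, 0]))  # nearest to the motorcycle histogram
```

In a full system, the histograms would come from quantizing SIFT descriptors against a learned visual vocabulary rather than being hand-written.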

Keywords: vehicle classification, bag-of-words technique, SVM classifier, LDA classifier, KNN classifier, decision tree classifier, SIFT algorithm

Procedia PDF Downloads 350
14355 Algerian Literature Written in English: A Comparative Analysis of Four Novels and Their Historical, Cultural, and Identity Themes

Authors: Wafa Nouari

Abstract:

This study compares four novels written in English by Algerian writers: Donkey Heart Monkey Mind by Djaffar Chetouane, Pebble in the River by Noufel Bouzeboudja, Sophia in the White City by Belkacem Mezghouchene, and The Inner Light of Darkness by Iheb Kharab. It applies comparative research methods and cultural studies as the literary theory to analyze how these novels depict Algeria’s culture, history, and identity through their genre, style, tone, perspective, and structure. It identifies some common themes shared by them, such as the quest for freedom and dignity in a context of oppression and colonialism and the use of storytelling, imagination, and creativity as coping mechanisms for trauma and adversity. It also highlights their differences in terms of style, genre, setting, period, and perspectives. It concludes that these novels offer rich and diverse insights into Algeria and its multifaceted reality. It also discusses some limitations and challenges related to Algerian literature in English and suggests some directions for future research.

Keywords: Algerian literature in English, comparative research methods, cultural studies, diversity and complexity

Procedia PDF Downloads 127
14354 Air Dispersion Modeling for Prediction of Accidental Emission in the Atmosphere along Northern Coast of Egypt

Authors: Moustafa Osman

Abstract:

Modeling of air pollutants from accidental releases is performed to quantify the impact of industrial facilities on the ambient air. Mathematical methods are required to predict accidental scenarios in a probability-of-failure mode and to analyse the consequences, quantifying the environmental damage to human health. The initial statement of the mitigation plan supports implementation during production and maintenance periods. In these mathematical methods, the rate at which gaseous and liquid pollutants might be accidentally released is determined for various source types, namely point, line, and area sources. The emissions are integrated with meteorological conditions through simplified stability parameters to compare dispersion coefficients of non-continuous air pollution plumes. The differences are reflected in concentration levels and the greenhouse effect as the pollutant parcel is transported over both urban and rural areas. This research reveals that the effect of elevation near buildings and other structures is five times higher than over open terrain. These results agree with Sutton's suggested dispersion coefficients for the different stability classes.
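Dispersion estimates of this kind are conventionally based on the Gaussian plume equation. The sketch below assumes that model; the power laws standing in for the stability-class dispersion coefficients are illustrative placeholders, not Sutton's actual values.

```python
from math import exp, pi

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Gaussian plume concentration with ground reflection.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
    sigma_y/sigma_z: lateral/vertical dispersion coefficients (m)."""
    lateral = exp(-y**2 / (2 * sigma_y**2))
    vertical = (exp(-(z - H)**2 / (2 * sigma_z**2))
                + exp(-(z + H)**2 / (2 * sigma_z**2)))  # reflection at the ground
    return Q / (2 * pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative sigmas growing with downwind distance x (placeholder power laws,
# standing in for a real stability-class parametrization).
def sigmas(x):
    return 0.08 * x ** 0.9, 0.06 * x ** 0.85

near = plume_concentration(10.0, 4.0, *sigmas(500), y=0.0, z=0.0, H=50.0)
far = plume_concentration(10.0, 4.0, *sigmas(2000), y=0.0, z=0.0, H=50.0)
```

A real study would substitute dispersion coefficients for each Pasquill/Sutton stability class and the site's measured wind field.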

Keywords: air pollutants, dispersion modeling, GIS, health effect, urban planning

Procedia PDF Downloads 365
14353 Effect of Environmental Factors on Mosquito Larval Abundance in Some Selected Larval Sites in the Kintampo Area of Ghana

Authors: Yussif Tawfiq, Stephen Omari, Kwaku Poku Asante

Abstract:

The abundance of malaria vectors is influenced by micro-ecology, rainfall, and temperature patterns. The main objective of the study was to identify mosquito larval sites for future larval surveys and possible intervention programs. The study was conducted in Kintampo in central Ghana, where twenty larval sites were surveyed. Larval density was determined per cm² of water at each site. A dipper was used to collect larvae from the larval sites, and a global positioning system (GPS) was used to record their locations for easy identification. There was a negative linear relationship between humidity, temperature, and pH on the one hand and mosquito larval density on the other. Anopheles mosquito larvae were present in all polluted waters in which Culex larvae were found, which suggests that Anopheles larvae are beginning to adapt to survival in polluted waters. The identified breeding sites will be useful for future larval surveys and will also help in intervention programs.

Keywords: larvae, GPS, dipper, larval density

Procedia PDF Downloads 79
14352 Effect of Supplementing Ziziphus Spina-Christi Leaf Meal to Natural Pasture Hay on Feed Intake, Body Weight Gain, Digestibility, and Carcass Characteristics of Tigray Highland Sheep

Authors: Abrha Reta, Ajebu Nurfeta, Genet Mengistu, Mohammed Beyan

Abstract:

Fodder trees such as Ziziphus spina-christi have the potential to enhance the utilization of natural grazing resources and also to mitigate seasonal feed shortages. The experiment was conducted with the objective of evaluating the effect of supplementing Ziziphus spina-christi leaf meal (ZSCLM) to natural pasture hay on feed intake, body weight gain, digestibility, and carcass characteristics of Tigray highland sheep. A randomized complete block design was employed with 5 blocks based on initial body weight, and sheep were randomly assigned to five treatments. Treatments were: 100g concentrate mix + ad libitum natural pasture hay (T1), T1 + 100g ZSCLM (T2), T1 + 200g ZSCLM (T3), T1 + 300g ZSCLM (T4), and T1 + 400g ZSCLM (T5) on a dry matter (DM) basis. Dry matter intake was greater (P<0.05) in sheep on T5 compared to T3 and T1, while the total DM intake among T2, T4, and T5 was similar. Crude protein and metabolizable energy intake differed (P<0.05) among treatments, with the highest and lowest values in T5 and T1, respectively. Average daily gain was higher (P<0.05) in sheep kept on the T2, T3, and T4 diets than on T1. Higher (P<0.05) DM digestibility was found in T4 and T5 than in T1. The highest (P<0.05) OM and CP digestibility was observed in sheep fed the T3, T4, and T5 diets. Rib eye muscle area was higher (P<0.05) for T4 than for T1 and T2. Dressing percentage was similar (P>0.05) among treatments. The current study indicated that supplementation of Tigray highland sheep with 200g air-dried Ziziphus spina-christi leaf meal together with 100g of concentrate mixture in their diet significantly increased feed intake, apparent digestibility, body weight gain, hot carcass weight, and rib eye muscle area by improving feed conversion efficiency.

Keywords: body weight, carcass, digestibility, Ziziphus spina-christi leaf meal

Procedia PDF Downloads 101
14351 Comparison of Solar Radiation Models

Authors: O. Behar, A. Khellaf, K. Mohammedi, S. Ait Kaci

Abstract:

Up to now, most validation studies have been based on the MBE and RMSE and have therefore focused only on long- and short-term performance to test and classify solar radiation models. This traditional analysis does not take into account the quality of modeling or linearity. In our analysis, we have tested 22 solar radiation models capable of providing instantaneous direct and global radiation at any given location worldwide. We introduce a new indicator, the Global Accuracy Indicator (GAI), to examine the linear relationship between the measured and predicted values and the quality of modeling, in addition to long- and short-term performance. The quality of a model is represented by the t-statistic test, its linearity by the correlation coefficient, and its long- and short-term performance by the MBE and RMSE, respectively. An important finding of this research is that using the GAI avoids faulty validation when using the traditional methodology, which can result in erroneous predictions of the performance of solar power conversion systems.
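The traditional indicators the abstract contrasts with the GAI can be written down directly. The sketch below computes the MBE, RMSE, and correlation coefficient from toy irradiance values; the GAI formula itself is not given in the abstract, so only its ingredients are shown.

```python
from math import sqrt

def mbe(measured, predicted):
    """Mean Bias Error: long-term over/under-estimation tendency."""
    return sum(p - m for m, p in zip(measured, predicted)) / len(measured)

def rmse(measured, predicted):
    """Root Mean Square Error: short-term scatter of the model."""
    return sqrt(sum((p - m) ** 2 for m, p in zip(measured, predicted)) / len(measured))

def corr(measured, predicted):
    """Pearson correlation coefficient: linearity between measured and predicted."""
    n = len(measured)
    mx, my = sum(measured) / n, sum(predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(measured, predicted))
    sx = sqrt(sum((x - mx) ** 2 for x in measured))
    sy = sqrt(sum((y - my) ** 2 for y in predicted))
    return cov / (sx * sy)

measured = [420.0, 510.0, 640.0, 705.0]   # W/m^2, illustrative values
predicted = [400.0, 530.0, 620.0, 730.0]
```

A composite indicator such as the GAI would then combine these per-model statistics into a single score for ranking the 22 models.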

Keywords: solar radiation model, parametric model, performance analysis, Global Accuracy Indicator (GAI)

Procedia PDF Downloads 341
14350 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands

Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé

Abstract:

The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has been recently proved in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (a software based on a finite element discretization). As van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation is subject to considerable experimental and numerical studies. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α, n and the saturated conductivity of the filter on the piezometric heads, during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. A particular attention should also be brought to boundary condition modeling (surface ponding or evaporation) to be able to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of the storm events, we thus propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among the methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. 
To that end, a variational data assimilation technique is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data were collected over several months for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban water stream Ostwaldergraben. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed.

Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis

Procedia PDF Downloads 155
14349 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples, since data are rarely linearly separable. SVMs map the data into a higher-dimensional space in which they become linearly separable, while the kernel trick allows all the computations to be performed in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation.
The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in a residential part of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search within a restricted interval of (C, γ) that can be explored further, but it does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
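The parameter-selection step follows the standard grid-search pattern: score every (C, γ) pair by cross-validated accuracy and keep the best. In the sketch below, a hypothetical `cv_accuracy` surface stands in for the full RBF-SVM with 5-fold cross-validation, since the point is the search loop itself.

```python
from math import log10

def cv_accuracy(C, gamma):
    """Stand-in for the 5-fold cross-validated accuracy of an RBF-SVM.
    A real run would train and evaluate the classifier; this synthetic
    surface peaks at C=10, gamma=0.1 purely for illustration."""
    return 1.0 - 0.1 * (log10(C) - 1.0) ** 2 - 0.1 * (log10(gamma) + 1.0) ** 2

def grid_search(C_grid, gamma_grid):
    """Score every (C, gamma) pair and keep the best one."""
    score, C, gamma = max((cv_accuracy(C, g), C, g)
                          for C in C_grid for g in gamma_grid)
    return {"accuracy": score, "C": C, "gamma": gamma}

best = grid_search(C_grid=[0.1, 1, 10, 100], gamma_grid=[0.001, 0.01, 0.1, 1])
```

With a real classifier, `cv_accuracy` would be replaced by a cross-validation routine (e.g. scikit-learn's `GridSearchCV` wraps exactly this loop), typically over logarithmically spaced grids of C and γ.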

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 144
14348 Magnetoviscous Effects on Axi-Symmetric Ferrofluid Flow over a Porous Rotating Disk with Suction/Injection

Authors: Vikas Kumar

Abstract:

The present study investigates the magneto-viscous effects on incompressible ferrofluid flow over a porous rotating disk with suction or injection on the surface of the disk, subjected to a magnetic field. The flow under consideration is axi-symmetric, steady ferrofluid flow of an electrically non-conducting fluid. Karman's transformation is used to convert the governing boundary layer equations into a system of non-linear coupled differential equations. The solution of this system is obtained using a power series approximation. The flow characteristics, i.e., radial, tangential, and axial velocities and boundary layer displacement thickness, are calculated for various values of MFD (magnetic field dependent) viscosity and for different values of the suction/injection parameter. Skin friction coefficients are also calculated on the surface of the disk. The obtained results are presented numerically and graphically in the paper.

Keywords: axi-symmetric, ferrofluid, magnetic field, porous rotating disk

Procedia PDF Downloads 386
14347 Assessing Relationships between Glandularity and Gray Level by Using Breast Phantoms

Authors: Yun-Xuan Tang, Pei-Yuan Liu, Kun-Mu Lu, Min-Tsung Tseng, Liang-Kuang Chen, Yuh-Feng Tsai, Ching-Wen Lee, Jay Wu

Abstract:

Breast cancer is the most prevalent malignant tumor in females. An increase in glandular density increases the risk of breast cancer. BI-RADS is a frequently used density indicator in mammography; however, it significantly overestimates glandularity. It is therefore very important to assess glandularity accurately and quantitatively by mammography. In this study, 20%, 30%, and 50% glandularity phantoms were exposed using a mammography machine at 28, 30, and 31 kVp, and 30, 55, 80, and 105 mAs, respectively. Regions of interest (ROIs) were drawn to assess the gray level. The relationship between glandularity and gray level under various compression thicknesses, kVp, and mAs was established by multivariable linear regression. Phantom verification was performed with automatic exposure control (AEC). The regression equation was obtained with an R-squared value of 0.928. The average gray levels of the verification phantom were 8708, 8660, and 8434 for 0.952, 0.963, and 0.985 g/cm³, respectively. The percent differences of glandularity from the regression equation were 3.24%, 2.75%, and 13.7%. We conclude that the proposed method could be applied clinically in mammography to improve glandularity estimation and further increase the value of breast cancer screening.

Keywords: mammography, glandularity, gray value, BI-RADS

Procedia PDF Downloads 481
14346 Vertical Urban Design Guideline and Its Application to Measure Human Cognition and Emotions

Authors: Hee Sun (Sunny) Choi, Gerhard Bruyns, Wang Zhang, Sky Cheng, Saijal Sharma

Abstract:

This research addresses the need for a comprehensive framework that can guide the design and assessment of multi-level public spaces and public realms and their impact on the built environment. The study aims to understand and measure the neural mechanisms involved in this process. By doing so, it can lay the foundation for vertical and volumetric urbanism and ensure consistency and excellence in the field while also supporting scientific research methods for urban design with cognitive neuroscientists. To investigate these aspects, the paper focuses on the neighborhood scale in Hong Kong, specifically examining multi-level public spaces and quasi-public spaces within both commercial and residential complexes. The researchers use predictive Artificial Intelligence (AI) as a methodology to assess and comprehend the applicability of the urban design framework for vertical and volumetric urbanism. The findings aim to identify the factors that contribute to successful public spaces within a vertical living environment, thus introducing a new typology of public spaces.

Keywords: vertical urbanism, scientific research methods, spatial cognition, urban design guideline

Procedia PDF Downloads 67
14345 The Stock Price Effect of Apple Keynotes

Authors: Ethan Petersen

Abstract:

In this paper, we analyze the volatility of Apple's stock from January 3, 2005 to October 9, 2014, then focus on a range from 30 days prior to each product announcement until 30 days after. Product announcements are filtered; announcements whose 60-day range is devoid of other events are separated out. This filtration is chosen to isolate, and study, a potential cross-effect. Concerning Apple keynotes, there are two significant dates: the day invitations to the event are received and the day of the event itself. As such, the statistical analysis is conducted for both invite-centered and event-centered time frames. A comparison to the VIX is made to determine whether the trend simply follows the market or deviates from it. Regardless of the filtration, we find a clear deviation from the market. Comparing these data sets, there are significantly different trends: isolated events show a constantly decreasing, erratic trend in volatility, but an increasing, linear trend is observed for clustered events. According to the Efficient Market Hypothesis, we would expect a change when new information becomes publicly known, and the results of this study support this claim.

Keywords: efficient market hypothesis, event study, volatility, VIX

Procedia PDF Downloads 274
14344 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications

Authors: Marina Shargorodsky

Abstract:

Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and increased risks of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid IMT, is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using intima-media thickness, as well as on placental vascular circulation, inflammatory lesions, and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental findings were subdivided into lesions consistent with maternal vascular and fetal vascular malperfusion according to the criteria of the Society for Pediatric Pathology, as well as inflammatory responses of maternal and fetal origin.
Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7±0.1 vs. 0.6±0.1, p=0.028). Multiple linear regression analysis of IMT included variables based on their associations in univariate analyses with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol, and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly more frequent in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on the early stages of subclinical atherosclerosis, placental vascular circulation, and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation, and pregnancy outcomes, deserves further investigation.

Keywords: obesity, pregnancy, complications, weight gain

Procedia PDF Downloads 48
14343 Avian and Rodent Pest Infestations of Lowland Rice (Oryza sativa L.) and Evaluation of Attributable Losses in Savanna Transition Environment

Authors: Okwara O. S., Osunsina I. O. O., Pitan O. R., Afolabi C. G.

Abstract:

Rice (Oryza sativa L.) belongs to the family Poaceae and has become a most popular food. Globally, this crop faces the menace of vertebrate pests, of which birds and rodents are the most implicated. This study of avian and rodent infestations and the evaluation of attributable losses was carried out in 2020 and 2021 with the objectives of identifying the bird and rodent species associated with lowland rice and determining the infestation levels, damage intensity, and crop loss induced by these pests. The experiment was laid out in a split-plot arrangement fitted into a Randomized Complete Block Design (RCBD), with the main plots being protected and unprotected groups and the sub-plots being four rice varieties: Ofada, WITA-4, NERICA L-34, and Arica-3. Data were collected over a 16-week period and transformed using a square root transformation before Analysis of Variance (ANOVA) was performed at the 5% probability level. The results showed that the infestation levels of both birds and rodents across all the treatment means of the varieties were not significantly different (p > 0.05) in either season. The damage intensity by these pests in both years was also not significantly different (p > 0.05) among the means of the varieties, which reflects the diverse feeding nature of birds and rodents. The infestation level in the protected group was significantly lower (p < 0.05) than in the unprotected group. Consequently, estimated crop losses of 91.94% and 90.75% were recorded in 2020 and 2021, respectively, and the identified pest birds were Ploceus melanocephalus, Ploceus cuculatus, and Spermestes cucullatus. In conclusion, vertebrate pests cause damage to lowland rice that could result in a high percentage of crop loss if left uncontrolled.

Keywords: pests, infestations, evaluation, losses, rodents, avian

Procedia PDF Downloads 117
14342 Functional Neural Network for Decision Processing: A Racing Network of Programmable Neurons Where the Operating Model Is the Network Itself

Authors: Frederic Jumelle, Kelvin So, Didan Deng

Abstract:

In this paper, we introduce a model of artificial general intelligence (AGI), the functional neural network (FNN), for modeling human decision-making processes. The FNN is composed of multiple artificial mirror neurons (AMN) racing in the network. Each AMN has a similar structure, programmed independently by the users, and is composed of an intention wheel, a motor core, and a sensory core racing at a specific velocity. The mathematics of the node's formulation and the racing mechanism of multiple nodes in the network are discussed, and the group decision process with fuzzy logic is developed, along with the transformation of these conceptual methods into practical methods for simulation and operation. Finally, we describe some possible future research directions in the fields of finance, education, and medicine, including the opportunity to design an intelligent learning agent with applications in AGI. We believe that the FNN has promising potential to transform the way we compute decision-making and to lead to a new generation of AI chips for seamless human-machine interactions (HMI).

Keywords: neural computing, human machine interaction, artificial general intelligence, decision processing

Procedia PDF Downloads 120
14341 Green Crypto Mining: A Quantitative Analysis of the Profitability of Bitcoin Mining Using Excess Wind Energy

Authors: John Dorrell, Matthew Ambrosia, Abilash

Abstract:

This paper employs econometric analysis to quantify the potential profit wind farms can earn by allocating excess wind energy to power bitcoin mining machines. Cryptocurrency mining consumes a substantial amount of electricity worldwide, and wind farms lose a significant amount of the energy they produce because of the intermittent nature of the resource: supply does not always match consumer demand. By combining the weaknesses of these two technologies, we can improve efficiency and create a sustainable path to mine cryptocurrencies. This paper uses historical wind energy data from the ERCOT network in Texas and cryptocurrency data from 2000-2021 to create 4-year return-on-investment projections. Our research model incorporates the price of bitcoin, the price of the miner, the hash rate of the miner relative to the network hash rate, the block reward, the bitcoin transaction fees awarded to the miners, the mining pool fees, the cost of the electricity, and the percentage of time the miner will be running to demonstrate that wind farms generate enough excess energy to mine bitcoin profitably. Excess wind energy can be used as a financial battery, converting wasted electricity into economic energy. Our findings show that wind energy producers can earn a profit while taking away little, if any, electricity from the grid. According to our results, Bitcoin mining could give as much as a 1347% and 805% return on investment for starting dates of November 1, 2021, and November 1, 2022, respectively, using wind farm curtailment. This paper is helpful to policymakers and investors in determining efficient and sustainable ways to power our economic future. It proposes a practical solution to the problem of crypto mining energy consumption and creates a more sustainable energy future for Bitcoin.
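The revenue side of such a projection follows directly from the inputs the abstract lists: the miner's share of the network hash rate, the block reward and fees, the pool fee, electricity, and uptime. The sketch below uses illustrative placeholder numbers, not the paper's data.

```python
def daily_mining_revenue(miner_hashrate, network_hashrate, block_reward,
                         fees_per_block, pool_fee, power_kw,
                         elec_price_kwh, uptime):
    """Expected daily BTC earned (after pool fee) and daily electricity cost.

    A miner's expected share of each block equals its fraction of the
    network hash rate; Bitcoin averages about 144 blocks per day."""
    blocks_per_day = 144
    share = miner_hashrate / network_hashrate
    btc = blocks_per_day * share * (block_reward + fees_per_block) * uptime
    btc_after_pool = btc * (1 - pool_fee)
    elec_cost = power_kw * 24 * uptime * elec_price_kwh
    return btc_after_pool, elec_cost

btc_day, elec_day = daily_mining_revenue(
    miner_hashrate=100e12,    # 100 TH/s machine (placeholder)
    network_hashrate=200e18,  # 200 EH/s network (placeholder)
    block_reward=6.25,        # BTC per block in the period studied
    fees_per_block=0.1,       # BTC per block, placeholder
    pool_fee=0.02,
    power_kw=3.0,
    elec_price_kwh=0.0,       # curtailed wind assumed free at the margin
    uptime=0.6,               # runs only when excess wind is available
)
```

Multiplying the daily BTC yield by the bitcoin price and subtracting hardware cost over a 4-year horizon gives the return-on-investment projection the paper describes.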

Keywords: bitcoin, mining, economics, energy

Procedia PDF Downloads 25
14340 Integrating Dependent Material Planning Cycle into Building Information Management: A Building Information Management-Based Material Management Automation Framework

Authors: Faris Elghaish, Sepehr Abrishami, Mark Gaterell, Richard Wise

Abstract:

Collaboration and integration between all building information management (BIM) processes and tasks are necessary to ensure that all project objectives can be delivered. A literature review was used to explore state-of-the-art BIM technologies for managing construction materials, as well as the challenges the construction process has faced using traditional methods. This paper therefore aims to articulate a framework that integrates traditional material planning methods, such as ABC analysis (the Pareto principle) for analysing and categorising project materials, and independent material planning methods, such as Economic Order Quantity (EOQ) and Fixed Order Point (FOP), into BIM 4D and 5D capabilities, in order to articulate a dependent material planning cycle in BIM that relies on the constructability method. Moreover, we build a model connecting the material planning outputs with the BIM 4D and 5D data to ensure that all project information is accurately presented through integrated and complementary BIM reporting formats. Furthermore, the paper presents a method to integrate the risk management output with the material management process so that all critical materials are monitored and managed across all project stages. The paper includes browsers, proposed to be embedded in any 4D BIM platform, that predict the EOQ as well as the FOP and alert the user during the construction stage. This enables the planner to check the status of materials on site and to be alerted when a new order should be requested. Consequently, all project information is managed in a single context, and no information is missed at the early design stage. The planner will then be capable of building a more reliable 4D schedule by allocating the categorised materials with the required EOQ and checking the optimum locations for inventory and the temporary construction facilities.
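The two classical planning formulas the framework embeds are compact enough to state directly; the demand, cost, and lead-time figures below are illustrative.

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Economic Order Quantity: the order size that minimises the sum of
    ordering and holding costs, EOQ = sqrt(2 * D * S / H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost)

def fixed_order_point(daily_demand, lead_time_days, safety_stock=0):
    """Fixed Order Point (reorder point): the inventory level that should
    trigger a new order, covering demand over the supplier lead time."""
    return daily_demand * lead_time_days + safety_stock

# Illustrative material: 1200 units/year demand, $50 per order, $6/unit/year holding.
q = eoq(annual_demand=1200, order_cost=50, holding_cost=6)        # units per order
rop = fixed_order_point(daily_demand=40, lead_time_days=5, safety_stock=60)
```

In the proposed framework, these values would be computed per ABC-categorised material and surfaced through the 4D BIM browsers to trigger the construction-stage alerts.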

Keywords: building information management, BIM, economic order quantity, EOQ, fixed order point, FOP, BIM 4D, BIM 5D

Procedia PDF Downloads 166
14339 Video-On-Demand QoE Evaluation across Different Age Groups and Its Significance for Network Capacity

Authors: Mujtaba Roshan, John A. Schormans

Abstract:

Quality of Experience (QoE) drives churn in the broadband networks industry, and good QoE plays a large part in the retention of customers. QoE is known to be affected by the Quality of Service (QoS) factors introduced by the network: packet loss probability (PLP), delay and delay jitter. Earlier results have shown that the relationship between these QoS factors and QoE is non-linear and may vary from application to application. We use the network emulator Netem as the basis for experimentation and evaluate how QoE varies as we change the emulated QoS metrics. Focusing on Video-on-Demand, we discovered that the reported QoE may differ widely for users of different age groups, and that the most demanding age group (the youngest) can require a PLP an order of magnitude lower than that required by the most widely studied age group to achieve the same QoE. We then used a bottleneck TCP model to evaluate the capacity cost of achieving an order-of-magnitude decrease in PLP, and found that (almost always) a 3-fold increase in link capacity was required.
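The reported roughly 3-fold capacity cost of a 10-fold PLP reduction is broadly consistent with the square-root loss dependence of steady-state TCP throughput in the well-known Mathis model. The following illustration uses that model, not the authors' own bottleneck model, and the MSS and RTT values are assumptions:

```python
import math

def mathis_throughput(mss_bytes, rtt_s, plp):
    """Mathis et al. steady-state TCP throughput estimate: MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(plp))

# For a fixed offered load, the throughput a flow can sustain (and hence the
# capacity headroom needed) scales as 1/sqrt(p): a 10x lower PLP corresponds
# to a sqrt(10) ~ 3.16x factor, close to the 3-fold increase reported.
factor = mathis_throughput(1460, 0.05, 0.001) / mathis_throughput(1460, 0.05, 0.01)
print(round(factor, 2))  # 3.16
```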

Keywords: network capacity, packet loss probability, quality of experience, quality of service

Procedia PDF Downloads 268
14338 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show, on real and simulated data sets, that the values selected by the suggested methods often lead to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
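The candidate-averaging idea can be sketched as follows, with ridge regression standing in for the penalised high-dimensional model and inverse-CV-error weights as one plausible weighting scheme (the paper's actual weighting may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.5, 1.0]   # only 3 informative variables
y = X @ beta + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(lam, k=5):
    """Mean k-fold cross-validated prediction error for one lambda."""
    folds = np.array_split(np.arange(n), k)
    errs = []
    for f in folds:
        mask = np.ones(n, bool); mask[f] = False
        b = ridge_fit(X[mask], y[mask], lam)
        errs.append(np.mean((y[f] - X[f] @ b) ** 2))
    return np.mean(errs)

lams = np.logspace(-2, 2, 20)
scores = np.array([cv_error(l) for l in lams])

# Instead of the single minimiser, average several good candidates,
# weighting each by the inverse of its CV error.
top = np.argsort(scores)[:5]
w = 1.0 / scores[top]
lam_avg = np.sum(w * lams[top]) / np.sum(w)
```

Because `lam_avg` pools several nearby candidates, small perturbations of the folds move it far less than they move the single arg-min.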

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 411
14337 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic (STPV) systems have emerged as a promising route around the Shockley-Queisser limit, a significant impediment to the direct conversion of solar radiation into electricity using conventional solar cells. An STPV system comprises an optical concentrator, a selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to overall system performance. In recent years, machine learning techniques have demonstrated significant advantages across scientific disciplines. This paper proposes a novel nanostructure composed of four layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike the conventional approaches widely adopted by researchers, this study employs a machine-learning-based approach for the design and optimization of the selective emitter: a random forest algorithm (RFA) is employed for the design of the emitter, while the optimization is executed using genetic algorithms. Machine learning algorithms such as the random forest can analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods, allowing a more comprehensive exploration of the design space and potentially leading to highly efficient emitter configurations. The genetic algorithms used in the optimization step mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations with performance superior to those found by traditional methods. In conclusion, integrating machine learning into the design and optimization of a selective emitter addresses the limitations of traditional methods and holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
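The surrogate-plus-optimizer workflow described above can be sketched as follows; the figure-of-merit function, layer-thickness bounds and GA settings are illustrative stand-ins, not the authors' electromagnetic model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy stand-in for an emitter figure of merit as a function of the four
# layer thicknesses (nm); in practice this would come from EM simulation.
def spectral_selectivity(t):
    return np.exp(-np.sum((t - np.array([80, 30, 120, 30])) ** 2, axis=-1) / 5000.0)

# 1) Train a random-forest surrogate on pre-computed samples.
T = rng.uniform(10, 200, size=(500, 4))
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(T, spectral_selectivity(T))

# 2) Genetic algorithm over the cheap surrogate: selection, crossover, mutation.
pop = rng.uniform(10, 200, size=(40, 4))
for _ in range(30):
    fit = rf.predict(pop)
    parents = pop[np.argsort(fit)[-20:]]                    # truncation selection
    mates = parents[rng.permutation(20)]
    children = 0.5 * (parents + mates)                      # arithmetic crossover
    children += rng.normal(scale=5.0, size=children.shape)  # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), 10, 200)  # respect thickness bounds

best = pop[np.argmax(rf.predict(pop))]   # candidate SiC/W/SiO2/W thickness stack
```

The surrogate makes each GA evaluation nearly free, which is what allows the broad design-space search the abstract describes.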

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 55
14336 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of a logistic regression under the MapReduce framework with RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. Three learning algorithms exist for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods; the results showed that the Newton-Raphson method was the most robust across all data tested.
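As a single-machine illustration of why the Newton-Raphson method needs no learning rate (a plain numpy sketch, not the RHadoop MapReduce implementation):

```python
import numpy as np

def logistic_newton(X, y, iters=25, tol=1e-8):
    """Fit logistic regression by the Newton-Raphson method (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ beta, -30, 30)        # linear predictor, clipped for stability
        mu = 1.0 / (1.0 + np.exp(-z))         # predicted probabilities
        W = mu * (1.0 - mu)                   # diagonal Hessian weights
        grad = X.T @ (y - mu)                 # score vector
        hess = X.T @ (X * W[:, None])         # observed information matrix
        step = np.linalg.solve(hess, grad)    # full Newton step: no learning rate
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Toy data: intercept plus two covariates with known coefficients
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_beta = np.array([0.5, 2.0, -1.0])
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = logistic_newton(X, y)
```

In the MapReduce setting, the per-record contributions to `grad` and `hess` are what the map step computes and the reduce step sums before each Newton update.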

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 270
14335 Combining Ability for Maize Grain Yield and Yield Components for Resistance to Striga hermonthica (Del.) Benth in Southern Guinea Savannah of Nigeria

Authors: Terkimbi Vange, Obed Abimiku, Lateef Lekan Bello, Lucky Omoigui

Abstract:

In 2014 and 2015, eight maize inbred lines resistant to Striga hermonthica (Del.) Benth were crossed in an 8 x 8 half diallel (Griffing Method II, Model I). The eight parent inbred lines were planted in a Randomized Complete Block Design (RCBD) with three replications in two Striga-infested environments (Lafia and Makurdi) during the late cropping season. The objectives were to determine the combining ability of the Striga-resistant maize inbred lines and to identify suitable inbreds for hybrid development. The lines were used to estimate general combining ability (GCA) and specific combining ability (SCA) effects for Striga-related parameters such as Striga shoot count, Striga damage rating (SDR), plant height, grain yield and other agronomic traits. The combined ANOVA revealed that mean squares were highly significant for all traits except Striga damage rating (SDR1) at 8 WAS and Striga emergence count (STECOI) at 8 WAS. Mean squares for SCA were significantly low for all traits. TZSTR190 was the highest-yielding parent, and TZSTR166xTZSTR190 was the highest-yielding cross. Parents TZSTR166, TZEI188, TZSTR190 and TZSTR193 showed significant (p < 0.05) positive GCA effects for grain yield, while the rest had negative GCA effects; TZSTR166, TZEI188, TZSTR190 and TZSTR193 could therefore be used for initiating hybrid development. TZSTR166xTZSTR190 was the best specific combination, followed by TZEI188xTZSTR193, TZEI80xTZSTR193 and TZSTR190xTZSTR193; TZSTR166xTZSTR190 and TZSTR190xTZSTR193 had the highest SCA effects. TZEI80 and TZSTR190 also manifested high positive SCA effects with TZSTR166, indicating that these two inbreds combined better with TZSTR166.

Keywords: combining ability, Striga hermonthica, resistance, grain yield

Procedia PDF Downloads 233
14334 Polymer Impregnated Sulfonated Carbon Composite as a Solid Acid Catalyst for the Dehydration of Xylose to Furfural

Authors: Praveen K. Khatri, Neha Karanwal, Savita Kaul, Suman L. Jain

Abstract:

Conversion of biomass through green chemical routes is of great industrial importance, as biomass is the most widely available, inexpensive renewable resource that can be used as a raw material for the production of biofuel and value-added organic products. In this regard, the acid-catalyzed dehydration of the biomass-derived pentose sugar D-xylose to furfural is a process of tremendous research interest owing to the wide industrial applications of furfural. Furfural is an excellent organic solvent for the refinement of lubricants and the separation of butadiene from butene mixtures in synthetic rubber fabrication. It also serves as a promising solvent for many organic materials, such as resins and polymers, and as a building block for the synthesis of various valuable chemicals such as furfuryl alcohol, furan, pharmaceuticals, agrochemicals and THF. Herein, a sulfonated polymer-impregnated carbon composite solid acid catalyst (P-C-SO3H) was prepared by the pyrolysis of a polymer matrix impregnated with glucose, followed by sulfonation, and was used for the dehydration of xylose to furfural. The developed catalyst exhibited excellent activity and provided almost quantitative conversion of xylose with selective synthesis of furfural. The higher catalytic activity of P-C-SO3H may be due to the more even distribution of polycyclic aromatic hydrocarbons, generated from the incomplete carbonization of glucose, along the polymer matrix network, leading to more sites available for sulfonation and hence a greater sulfonic acid density in P-C-SO3H compared to the sulfonated carbon catalyst (C-SO3H). In conclusion, we have demonstrated a sulfonated polymer-impregnated carbon composite (P-C-SO3H) as an efficient and selective solid acid catalyst for the dehydration of xylose to furfural. After completion of the reaction, the catalyst was easily recovered and reused for several runs without noticeable loss in activity or selectivity.

Keywords: solid acid, biomass conversion, xylose dehydration, heterogeneous catalyst

Procedia PDF Downloads 402
14333 Risk Management in Islamic Banks: A Case Study of the Faisal Islamic Bank of Egypt

Authors: Mohamed Saad Ahmed Hussien

Abstract:

This paper discusses risk management in Islamic banks. It aims to determine how the practices and methods of risk management in these banks differ from those in conventional banks, and presents a case study of the biggest Islamic bank in Egypt (Faisal Islamic Bank of Egypt) to identify the most important financial risks faced and how they are managed. It was found that Islamic banks face two types of risk: the first is similar to the risks in conventional banks; the second comprises additional risks faced only by Islamic banks as a result of some Islamic modes of financing. With regard to risk management, Islamic banks, like conventional banks, apply the regulatory rules issued by central banks and the Basel Committee; they also apply the instructions and procedures issued by the Islamic Financial Services Board (IFSB). Islamic banks are likewise similar to conventional banks in the practices and methods they use to manage risks. Some factors may affect risk management in Islamic banks, such as the size of the bank and the efficiency of its administration and staff.

Keywords: conventional banks, Faisal Islamic Bank of Egypt, Islamic banks, risk management

Procedia PDF Downloads 452
14332 Research on the Amplitude-Frequency Characteristics of Nonlinear Oscillations of the Interface of a Two-Layered Liquid

Authors: Win Ko Ko, A. N. Temnov

Abstract:

The problem of nonlinear oscillations of a two-layer liquid completely filling a limited volume is considered. Using two basic asymmetric harmonics excited in two mutually perpendicular planes, ordinary differential equations for the nonlinear oscillations of the interface of the two-layer liquid are investigated. The hydrodynamic coefficients of the linear and nonlinear problems were determined from integral relations. As a result, the instability regions of forced oscillations of a two-layer liquid in a cylindrical tank, occurring in the plane of action of the disturbing force, are constructed, as well as the dynamic instability regions of the parametric resonance for different density ratios of the upper and lower liquids, as functions of the excitation amplitudes and frequencies. Steady-state regimes of fluid motion were found in the regions of dynamic instability of the initial oscillation form. The Bubnov-Galerkin method is used to construct the instability regions from an approximate solution of the nonlinear differential equations.

Keywords: nonlinear oscillations, two-layered liquid, instability region, hydrodynamic coefficients, resonance frequency

Procedia PDF Downloads 208
14331 Temperature-Dependent Structural Characterization of Type-II Dirac Semimetal NiTe₂ From Bulk to Exfoliated Thin Flakes Using Raman Spectroscopy

Authors: Minna Theres James, Nirmal K Sebastian, Shoubhik Mandal, Pramita Mishra, R Ganesan, P S Anil Kumar

Abstract:

We report the temperature-dependent evolution of the Raman spectra of the type-II Dirac semimetal (DSM) NiTe2 (001), in the form of a bulk single crystal and a nanoflake (200 nm thick), for the first time. A physical model that can quantitatively explain the evolution of the out-of-plane A1g and in-plane E1g Raman modes is used. The non-linear variation of the peak positions of the Raman modes with temperature is explained by anharmonic three-phonon and four-phonon processes together with thermal expansion of the lattice. We also observe a prominent effect of electron-phonon coupling in the variation of the FWHM of the peaks with temperature, indicating the metallicity of the samples. The E1g Raman mode, corresponding to an in-plane vibration, disappears on decreasing the thickness from bulk to nanoflake.
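Anharmonic three- and four-phonon contributions to a Raman peak position are commonly modelled with the Balkanski form; a sketch fitting synthetic data (illustrative numbers, not NiTe2 measurements) might look like:

```python
import numpy as np
from scipy.optimize import curve_fit

HC_OVER_KB = 1.4388  # cm*K: converts a wavenumber (cm^-1) to a temperature scale

def n_bose(wn, T):
    """Bose-Einstein occupation for a phonon of wavenumber wn at temperature T."""
    return 1.0 / np.expm1(HC_OVER_KB * wn / T)

def balkanski(T, w0, A, B):
    """Peak position vs T: three-phonon (A) and four-phonon (B) decay channels."""
    n2 = n_bose(w0 / 2.0, T)
    n3 = n_bose(w0 / 3.0, T)
    return w0 + A * (1 + 2 * n2) + B * (1 + 3 * n3 + 3 * n3 ** 2)

# Synthetic A1g-like data with small noise
T = np.linspace(80, 400, 25)
rng = np.random.default_rng(0)
data = balkanski(T, 140.0, -1.2, -0.05) + rng.normal(scale=0.05, size=T.size)
popt, _ = curve_fit(balkanski, T, data, p0=[140.0, -1.0, -0.01])
```

A thermal-expansion term, which the paper also includes, would be added to `balkanski` in the same fashion; it is omitted here for brevity.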

Keywords: Raman spectroscopy, type-II Dirac semimetal, nickel telluride, phonon-phonon coupling, electron-phonon coupling, transition metal dichalcogenide

Procedia PDF Downloads 106
14330 Scoping Review of Biological Age Measurement Composed of Biomarkers

Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas

Abstract:

Background: With the increase in life expectancy, aging has become a frequent subject of research, and multiple strategies have been proposed to quantify the advance of years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, into a measure of time, structural or functional variation captured by biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of biological age measurement composed of biomarkers in the general population and to summarize recent evidence, identifying gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley, through a search of five bibliographic databases to February 2021. Original articles were included, with no time or language limit, that described a biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were evaluated for eligibility and 65 were included with information on the measurement of biological age composed of biomarkers. The articles, dating from 1974 onwards and representing 15 nationalities, were mostly observational studies in which clinical or paraclinical biomarkers were used, and 11 different methods for calculating the composite biological age were reported. The outcomes reported were the relationship with the measured biomarkers themselves, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of a biological age composed of biomarkers has evolved since the 1970s, and multiple quantification methods have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should consider the population characteristics and the choice of biomarkers against the proposed outcomes, to improve the understanding of aging variables and direct effective strategies for a proper approach.

Keywords: biological age, biological aging, aging, senescence, biomarker

Procedia PDF Downloads 180
14329 Regulation, Evaluation and Incentives: An Analysis of Management Characteristics of Nonprofit Organizations in China

Authors: Wuqi Yang, Sufeng Li, Linda Zhai, Zhizhong Yuan, Shengli Wang

Abstract:

How to assess and evaluate a not-for-profit (NFP) organisation's performance should be of concern to all stakeholders because, amongst other things, an NFP whose performance is not correctly evaluated may be unable to continue meeting its service objectives. Given the growing importance of this sector in China, more and more existing and potential donors, governments and others are taking an increased interest in the financial conditions and performance of NFPs. However, when these groups look for methods to assess the performance of NFPs, they find that relatively little research has been conducted into such methods in China; furthermore, there does not appear to have been any research to date into the performance evaluation of Chinese NFPs. The focus of this paper is to investigate how the Chinese government manages and evaluates the performance of NFP organisations in China. By examining and evaluating NFPs in China from different aspects, such as business development, mission fulfillment, financial position and other status, this paper identifies some institutional constraints currently faced by NFPs in China. At the end of the paper, a new regulatory framework is proposed for regulators' consideration. The research methods combine a literature review; the use of the Balanced Scorecard to assess NFPs in China; and a case study analysing a charity foundation's performance in Hebei Province, leading to proposed solutions to the current issues and challenges faced by NFPs. These solutions include formulating laws and regulations on NFPs, simplifying management procedures, introducing tax incentives, and providing financial support and other incentives for the development of non-profit organizations in China. This study provides a first step towards a greater understanding of NFP performance evaluation in China. It is expected that the findings and solutions from this study will be useful to anyone involved with the Chinese NFP sector, particularly CEOs, managers, bankers, independent auditors and government agencies.

Keywords: Chinese non-profit organizations, evaluation, management, supervision

Procedia PDF Downloads 171
14328 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for the identification of activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set; furthermore, a larger number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined in the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better in terms of classification accuracy.
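The core idea of compressing many extracted features into a small code can be sketched with a single-hidden-layer autoencoder. Note the hedges: HO-DAE uses four hidden layers and a meta-heuristic weight search, both simplified away here, and the data below are a synthetic stand-in for EEG features:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Stand-in for a high-dimensional EEG feature matrix: 300 epochs x 40 features
# driven by only 3 latent sources (illustrative, not real EEG).
Z = rng.normal(size=(300, 3))
X = Z @ rng.normal(size=(3, 40)) + 0.1 * rng.normal(size=(300, 40))

# Train the network to reproduce its own input; the narrow hidden layer
# then holds the reduced feature set (the "codes").
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu",
                  max_iter=2000, random_state=0).fit(X, X)

def encode(X):
    """Forward pass through the encoder half only (ReLU hidden layer)."""
    return np.maximum(0.0, X @ ae.coefs_[0] + ae.intercepts_[0])

codes = encode(X)  # 40 original features compressed to 8 codes per epoch
```

The 8-dimensional `codes` would then be the input to the downstream activity classifier in place of the raw 40-dimensional feature vectors.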

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 110
14327 Carbon-Based Electrochemical Detection of Pharmaceuticals from Water

Authors: M. Ardelean, F. Manea, A. Pop, J. Schoonman

Abstract:

The presence of pharmaceuticals in the environment, and especially in water, has gained increasing attention. They belong to the emerging class of pollutants, and for most of them legal limits have not been set, either because their impact on human health and the ecosystem has not been determined or because advanced analytical methods for their quantification are not available. In this context, the development of advanced analytical methods for the quantification of pharmaceuticals in water is required. Electrochemical methods are known to exhibit great potential for high-performance analysis, but their performance is directly related to the electrode material and the operating technique. In this study, two types of carbon-based electrode materials, i.e., boron-doped diamond (BDD) and carbon nanofiber (CNF)-epoxy composite electrodes, were investigated through voltammetric techniques for the detection of naproxen in water. The comparative electrochemical behavior of naproxen (NPX) on both BDD and CNF electrodes was studied by cyclic voltammetry, and a well-defined peak corresponding to NPX oxidation was found for each electrode. NPX oxidation occurred on the BDD electrode at a potential of about +1.4 V/SCE (saturated calomel electrode) and at about +1.2 V/SCE on the CNF electrode. The sensitivities for NPX detection were similar for both carbon-based electrodes; thus, the CNF electrode exhibited superiority with respect to the detection potential. Differential-pulsed voltammetry (DPV) and square-wave voltammetry (SWV) techniques were exploited to improve the electroanalytical performance for NPX detection, and the best result, a sensitivity of 9.959 µA·µM-1, was achieved using DPV. In addition, the simultaneous detection of NPX and fluoxetine, a very common antidepressant drug also present in water, was studied using the CNF electrode, and very good results were obtained. The detection potential values, which allowed a good separation of the detection signals, together with the good sensitivities, were appropriate for the simultaneous detection of both tested pharmaceuticals. These results recommend the CNF electrode as a valuable tool for the individual and simultaneous detection of pharmaceuticals in water.

Keywords: boron-doped diamond electrode, carbon nanofiber-epoxy composite electrode, emerging pollutants, pharmaceuticals

Procedia PDF Downloads 277