Search results for: computational accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5458

388 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes the regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for a more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it covers consumer units, generators, and distribution companies. The second, IIOC, applies when distributors over-contract their demand. The establishment of the inefficiency installments IIE and IIOC thus intends to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. The indicators used were the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI). The ECI analysis demonstrated a decrease in contracting efficiency, a trend that had been occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI decreased notably, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracted values are not adequate to maintain the service provided to their users. The IIOC is also relevant, in that it shows distributors that their contracted values are overestimated.
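
As a rough illustration of how such indicators can be computed from before-and-after contracting data, the following Python sketch flags IIE-like and IIOC-like situations and aggregates an efficiency share per year. The column names, the tolerance band, and the indicator definition are hypothetical placeholders, not the quantities defined in NR 666.

```python
# Illustrative sketch: flagging inefficient transmission-usage contracts.
# Column names, the 5% tolerance band, and the indicator definition are
# hypothetical placeholders, not the exact quantities defined in NR 666.
import pandas as pd

data = pd.DataFrame({
    "agent": ["A", "B", "C", "D"],
    "year": [2014, 2014, 2017, 2017],
    "contracted_mw": [100.0, 80.0, 100.0, 80.0],
    "verified_mw": [112.0, 60.0, 103.0, 78.0],
})

TOL = 0.05  # assumed regulatory tolerance band

# IIE-like condition: verified demand exceeds the contracted amount plus tolerance
data["excess"] = data["verified_mw"] > data["contracted_mw"] * (1 + TOL)
# IIOC-like condition: contracted amount exceeds verified demand plus tolerance
data["over_contract"] = data["contracted_mw"] > data["verified_mw"] * (1 + TOL)

# Share of efficient contracts per year, an ECI-like indicator
eci = 1 - data.groupby("year")[["excess", "over_contract"]].mean().sum(axis=1)
print(eci)
```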

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 121
387 A Prospective Study of a Clinically Significant Anatomical Change in Head and Neck Intensity-Modulated Radiation Therapy Using Transit Electronic Portal Imaging Device Images

Authors: Wilai Masanga, Chirapha Tannanonta, Sangutid Thongsawad, Sasikarn Chamchod, Todsaporn Fuangrod

Abstract:

The major challenges of radiotherapy for head and neck (HN) cancers include the patient’s anatomical changes and tumour shrinkage. These changes can significantly affect the planned dose distribution and cause deterioration of the treatment plan. Comparing measured transit EPID images to predicted EPID images using gamma analysis has been clinically implemented to verify dose accuracy as part of an adaptive radiotherapy protocol. However, a global gamma analysis is not sensitive to changes in some critical organs, as the entire treatment field is compared. The objective of this feasibility study is to evaluate the dosimetric response to patient anatomical changes during the treatment course in HN IMRT (head and neck intensity-modulated radiation therapy) using a novel comparison method, organ-of-interest gamma analysis, which is more sensitive to changes in specific organs. Five HN IMRT patients who were replanned because of tumour shrinkage and weight loss that critically affected parotid size were randomly selected, and their transit dosimetry was evaluated. A comprehensive physics-based model was used to generate a series of predicted transit EPID images for each gantry angle from the original computed tomography (CT) and replan CT datasets. The patient structures, including the left and right parotids, spinal cord, and planning target volume (PTV56), were projected to the EPID level. The agreement between the transit images generated from the original CT and the replanned CT was quantified using gamma analysis with 3%/3 mm criteria, and the gamma pass-rate was calculated only within each projected structure. The gamma pass-rates in the right parotid and PTV56 between the predicted transit images of the original CT and the replan CT were 42.8% (±17.2%) and 54.7% (±21.5%), respectively; the pass-rates for the other projected organs were greater than 80%. Additionally, the results of the organ-of-interest gamma analysis were compared with 3-dimensional cone-beam computed tomography (3D-CBCT) and the radiation oncologists’ rationale for replanning. This showed that registration of 3D-CBCT to the original CT alone does not reveal the dosimetric impact of anatomical changes, whereas transit EPID images with organ-of-interest gamma analysis can provide additional information for assessing treatment plan suitability.
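
The organ-of-interest step reduces to restricting the pass-rate statistic to each projected structure. A minimal Python sketch, assuming the per-pixel gamma map (3%/3 mm) and the projected structure masks have already been computed upstream:

```python
# Minimal sketch of the organ-of-interest pass-rate step: given a 2-D gamma
# map (already computed with 3%/3 mm criteria) and a boolean mask for each
# projected structure, report the pass-rate inside each structure only.
# The gamma computation and mask generation are assumed done upstream.
import numpy as np

def structure_pass_rate(gamma_map: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of pixels inside the mask with gamma <= 1."""
    inside = gamma_map[mask]
    return float((inside <= 1.0).mean()) if inside.size else float("nan")

# Hypothetical example with random data in place of predicted transit images
rng = np.random.default_rng(0)
gamma_map = rng.uniform(0.0, 2.0, size=(256, 256))
masks = {"right_parotid": np.zeros((256, 256), bool),
         "PTV56": np.zeros((256, 256), bool)}
masks["right_parotid"][60:100, 40:90] = True
masks["PTV56"][110:200, 80:200] = True

for name, m in masks.items():
    print(name, f"{100 * structure_pass_rate(gamma_map, m):.1f}%")
```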

Keywords: re-plan, anatomical change, transit electronic portal imaging device, EPID, head and neck

Procedia PDF Downloads 216
386 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids found in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro- or nanosecond scales. As such, the development of future models that attempt to explain faster-timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast-timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and cholesterol. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
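
A minimal sketch of the described binary classifier, with random numbers standing in for the ten DFT-derived geometric features (the real feature set is not reproduced here):

```python
# Binary lipid-vs-cholesterol classification sketch: ten geometric features
# per molecule, labels 0 (phospholipid donor) and 1 (cholesterol acceptor).
# The random data below is a placeholder for the DFT-derived feature set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))                  # 10 optimized geometric features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
ann.fit(X_tr, y_tr)
print("test accuracy:", ann.score(X_te, y_te))
```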

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 72
385 Space Tourism Pricing Model Revolution from Time-Independent Model to Time-Space Model

Authors: Kang Lin Peng

Abstract:

Space tourism emerged in 2001 and became famous in 2021, following the development of space technology. The space tourism market is distorted by excess demand: space tourism is currently rare and extremely expensive, priced as a luxury good in a seller’s market in which consumers cannot bargain. Spaceship companies such as Virgin Galactic, Blue Origin, and SpaceX have charged space tourism prices from 200 thousand to 55 million US dollars, depending on the altitude reached. There should be a reasonable price set on a fair basis. This study derives a spacetime pricing model, which differs from the general pricing model on the earth’s surface. We apply general relativity theory to derive the mathematical formula for the space tourism pricing model, which subsumes the traditional time-independent model. In the future, the price of space travel will differ from current flight travel when space travel is measured in light-year units. The pricing of general commodities mainly considers the general equilibrium of supply and demand, and pricing models that consider risks and returns with time as a separate variable are acceptable when commodities are on the earth’s surface, called flat spacetime. Current economic theories, based on an independent time scale in flat spacetime, do not consider the curvature of spacetime. Current flight services, flying at heights of 6, 12, and 19 kilometers, charge with a pricing model that measures the time coordinate independently. However, space tourism flights reach heights of 100 to 550 kilometers, which enlarges the spacetime curvature: tourists escape from the near-zero curvature at the earth’s surface to the larger curvature of space. Different spacetime spans should therefore be considered in the pricing model of space travel to echo general relativity theory. Intuitively, this spacetime commodity needs to account for the change of spacetime curvature from the earth to space. We can assign a value to each unit of spacetime curvature, corresponding to the gradient change of the Ricci or energy-momentum tensor; the amount to spend then follows by integrating the spacetime from the earth to space. The concept is to add a price component p corresponding to general relativity theory. On the earth’s surface, the space travel pricing model degenerates into a time-independent model, which is the model of traditional commodity pricing. The contribution is that deriving the space tourism pricing model is a breakthrough in philosophical and practical issues for space travel. The results extend the traditional time-independent flat spacetime model. A pricing model embedded in spacetime, as in general relativity theory, can better reflect the rationality and accuracy of space travel on the universal scale. Moving from an independent-time scale to a spacetime scale will bring a brand-new pricing concept for space traveling commodities, and fair and efficient spacetime economics will also benefit human travel when we can travel in light-year units in the future.
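
One illustrative way to formalize this idea in our own reading (the coupling constant \kappa and the use of a scalar curvature measure R along the trajectory are assumptions for exposition, not the authors' exact derivation) is:

```latex
% Illustrative sketch only: \kappa and R(s) are assumed quantities.
P(\gamma) = P_0(t) + \kappa \int_{\gamma} R(s)\, \mathrm{d}s,
\qquad R \equiv 0 \;\Rightarrow\; P(\gamma) = P_0(t).
```

Near the earth's surface, where R is approximately zero, the price thus degenerates to the traditional time-only component P_0(t), consistent with the degeneration to the time-independent model stated above.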

Keywords: space tourism, spacetime pricing model, general relativity theory, spacetime curvature

Procedia PDF Downloads 128
384 A Study of Kinematical Parameters in Instep Kicking in Soccer

Authors: Abdolrasoul Daneshjoo

Abstract:

Introduction: Soccer is a game that draws great attention in many countries, especially Brazil. Among the various skills of soccer players, kicking plays a crucial role in the success and standing of a team. Points are gained by sending the ball over the goal line, achieved by shooting during attacks or penalty kicks. Accordingly, identifying the effective factors in instep kicking at different distances, whether shooting with maximum force and high accuracy, passing, or taking a penalty kick, may assist coaches and players in raising the qualitative level of the skill. Purpose: The aim of the present study was to examine several kinematical parameters of instep kicking from distances of 3 and 5 meters among male and female elite soccer players. Methods: Twenty-four subjects with a dominant right lower limb (12 males and 12 females) from among Tehran elite soccer players participated in this study, with ages of 22.5 ± 1.5 and 22.08 ± 1.31 years, heights of 179.5 ± 5.81 and 164.3 ± 4.09 cm, weights of 69.66 ± 4.09 and 53.16 ± 3.51 kg, BMIs of 21.06 ± 0.731 and 19.67 ± 0.709, and playing histories of 4 ± 0.73 and 3.08 ± 0.66 years, respectively (mean ± standard deviation). They had at least two years of continuous playing experience in the Tehran soccer league. To record the kicks, a Kinemetrix motion analysis system with three cameras sampling at 500 Hz was used. Five reflective markers were placed laterally on the kicking leg over anatomical points (the iliac crest, greater trochanter, lateral epicondyle of the femur, lateral malleolus, and lateral aspect of the distal head of the fifth metatarsus). The instep kick was filmed with a one-step approach at an angle of 30 to 45 degrees from a stationary ball; three kicks were filmed, and one was selected for further analysis. Using Kinemetrix 3D motion analysis software, the positions of the markers were analyzed. Descriptive statistics were used to report means and standard deviations, while analysis of variance and independent t-tests (P < 0.05) were used to compare the kinematic parameters between the two genders. Results and Discussion: Among the evaluated parameters, knee acceleration, thigh angular velocity, and knee angle showed significant relationships with the outcome of the kick. When comparing performance at 5 m between the two genders, significant differences were observed in the internal-external displacement of the toe, ankle, and hip, the velocity of the toe and ankle, the acceleration of the toe, and the angular velocity of the pelvis and thigh before contact time. Significant differences were also found in the internal-external displacement of the toe, ankle, knee, hip, and iliac crest, the velocity of the toe and ankle, the acceleration of the ankle, and the angular velocity of the pelvis and knee.

Keywords: biomechanics, kinematics, soccer, instep kick, male, female

Procedia PDF Downloads 415
383 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from open data bases belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on open data is accordingly increasing. However, each government has its own procedures for publishing its data, and this causes a variety of data set formats, because there are no international standards specifying them. Due to this variety of formats, we must build a data integration process able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its open data bases on environmental indicators in real time. Likewise, other governments have published open data sets related to the environment (such as Andalucia or Bilbao), but all of those data sets have different formats. Our solution is able to integrate all of them; furthermore, it allows the user to run and visualize analyses over the real-time data. Once the integration task is done, all the data from any government share the same format, and the analysis process can be initiated in a computationally better way. The tool presented in this work thus has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and R libraries such as Shiny exist for building graphic interfaces. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain, so that they can build their own applications.
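
A minimal sketch of the per-source adapter idea in Python (the production tool uses Java/Oracle and R with Hadoop procedures; the file names, column names, and units here are hypothetical):

```python
# Per-source adapter sketch: each government publishes a different format, so
# one small mapping per source normalizes everything to a common schema before
# analysis. File names, column names, and units are hypothetical.
import pandas as pd

ADAPTERS = {
    "madrid": {"fecha": "timestamp", "temperatura": "temperature_c", "no2": "no2_ugm3"},
    "bilbao": {"date": "timestamp", "temp": "temperature_c", "NO2": "no2_ugm3"},
}

def load_normalized(source: str, path: str) -> pd.DataFrame:
    """Read one government's CSV and map it onto the common schema."""
    df = pd.read_csv(path).rename(columns=ADAPTERS[source])
    df["source"] = source
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    return df[["source", "timestamp", "temperature_c", "no2_ugm3"]]

# Hypothetical usage once the source files exist:
# integrated = pd.concat([load_normalized("madrid", "madrid.csv"),
#                         load_normalized("bilbao", "bilbao.csv")])
```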

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 315
382 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models, reducing the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over hypocentral distances of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them. The accuracy of the models in predicting intensity measures, the generalization capability of the models to future data, and the usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available.
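
A minimal sketch of the Random Forest variant in Python, with synthetic records standing in for the 4,528-ground-motion database and the random-effects treatment omitted for brevity:

```python
# Random Forest sketch: predict log-PGA from magnitude, hypocentral distance,
# and a site-condition proxy (e.g., Vs30). The synthetic data and functional
# form below are placeholders for the Oklahoma/Kansas/Texas database; the
# event and site random-effects terms described above are omitted.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4528
M = rng.uniform(3.0, 5.8, n)           # magnitude
R = rng.uniform(4.0, 500.0, n)         # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)    # site condition proxy, m/s
log_pga = 1.2 * M - 1.8 * np.log(R) - 0.3 * np.log(vs30) + rng.normal(0, 0.3, n)

X = np.column_stack([M, R, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_pga, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out records:", rf.score(X_te, y_te))
```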

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 122
381 Preliminary Analysis on the Distribution of Elements in Cannabis

Authors: E. Zafeiraki, P. Nisianakis, K. Machera

Abstract:

The cannabis plant contains 113 cannabinoids and is commonly known for its psychoactive substance tetrahydrocannabinol or as a source of narcotic substances. In recent years, cannabis cultivation has also increased due to its wide use for medical and industrial purposes, as well as for para-pharmaceuticals, cosmetics, and food commodities. Depending on the final product, different parts of the plant are utilized, with the leaves and buds (seeds) being the most frequently used. Cannabis can accumulate various contaminants, including heavy metals, from both the soil and the water in which the plant grows. More specifically, metals may occur naturally in the soil and water, or they can enter the environment through the fertilizers, pesticides, and fungicides commonly applied to crops. The high probability of metal accumulation in cannabis, combined with the latter's growing use, raises concerns about potential health effects in humans and consequently leads to the need for safety measures for cannabis products, such as guidelines regulating contaminants, including metals, and especially those of high toxicity. Acknowledging the above, the aim of the current study was first to investigate metal contamination in cannabis samples collected from Greece, and second to examine potential differences in metal accumulation among the different parts of the plant. To the best of our knowledge, this is the first study presenting information on elements in cannabis cultivated in Greece and on their distribution pattern in the plant body. To this end, the leaves and the seeds of all the samples were initially separated, dried, and then digested with nitric acid (HNO₃) and hydrochloric acid (HCl). For the analysis of these samples, an inductively coupled plasma-mass spectrometry (ICP-MS) method able to quantify 28 elements was developed. Internal standards were added at a constant rate and concentration to all calibration standards and unknown samples, while two certified reference materials were analyzed in every batch to ensure the accuracy of the measurements. The repeatability of the method and the background contamination were controlled by analyzing quality control (QC) standards and blank samples in every sequence, respectively. According to the results, essential metals such as Ca, Zn, and Mg were detected at high levels. On the contrary, the concentrations of high-toxicity metals such as As (average: 0.10 ppm), Pb (average: 0.36 ppm), Cd (average: 0.04 ppm), and Hg (average: 0.012 ppm) were very low in all the samples, indicating that no harmful effects on human health are expected from the analyzed samples. Moreover, the pattern of metal contamination is very similar in all the analyzed samples, which could be attributed to the common origin of the analyzed cannabis, i.e., common soil composition, use of fertilizers, pesticides, etc. Finally, as far as the distribution between the different parts of the plant is concerned, leaves presented higher concentrations than seeds for all metals examined.

Keywords: cannabis, heavy metals, ICP-MS, leaves and seeds, elements

Procedia PDF Downloads 99
380 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it therefore plays an important role in numerous specifications such as durability, comfort, and crash. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project; therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it relates to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent a black-box system. Second, within several iterations, an optimum set is proposed and validated, which forms a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings; another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests confirm the good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
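
A one-dimensional sketch of the surrogate idea, fitting a Chebyshev series to a black-box response over an uncertain-but-bounded parameter and reading the response interval off the cheap surrogate (the quarter-car responses are replaced by a toy function):

```python
# Chebyshev surrogate sketch: sample the expensive response at Chebyshev-Gauss
# nodes mapped onto the bounded parameter interval, fit a Chebyshev series,
# then sweep the cheap surrogate to estimate the response's uncertainty interval.
import numpy as np
from numpy.polynomial import chebyshev as C

def black_box(x):                      # placeholder for the expensive simulation
    return np.sin(3 * x) + 0.3 * x**2

lo, hi = 0.8, 1.2                      # uncertain-but-bounded parameter interval
k = np.arange(16)
nodes = np.cos((2 * k + 1) * np.pi / 32)          # Chebyshev-Gauss nodes in [-1, 1]
x_samples = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
coeffs = C.chebfit(nodes, black_box(x_samples), deg=8)

xs = np.linspace(-1, 1, 2001)                     # dense sweep on the surrogate
ys = C.chebval(xs, coeffs)
print("estimated response interval:", ys.min(), ys.max())
```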

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 152
379 Application of the Sufficiency Economy Philosophy to an Integrated Instructional Model for In-Service Teachers of Schools under the Project Initiated by H.R.H. Princess Maha Chakri Sirindhorn, Nakhonnayok Educational Service Area Office

Authors: Kathaleeya Chanda

Abstract:

The schools under the Project Initiated by H.R.H. Princess Maha Chakri Sirindhorn in the Nakhonnayok Educational Service Area Office are small schools situated in remote and undeveloped areas. School-age youth there thus have few or no opportunities to study at the higher education level, which can lead to many social and economic problems. This study aims to address these educational issues through the development of teachers, so that teachers can improve the teaching and learning system, with the ultimate goals of increasing students' academic achievement, increasing educational opportunities for youth in the area, and helping students learn happily. A total of 154 in-service teachers from 22 schools in 4 different districts of Nakhonnayok participated in the teacher training. Most teachers were satisfied with the training content and the trainer. Thereafter, the teachers were given a test to assess their skills and knowledge after the training, and most earned a score higher than 75%. Accordingly, it can be concluded that after attending the training, the teachers had a clear understanding of the contents. After the training session, each teacher had to write a lesson plan integrated with or adapted to the Sufficiency Economy Philosophy, adopting either intradisciplinary or interdisciplinary integration according to the actual teaching conditions in their school. Two weeks after the training session, the researchers visited the schools to discuss with the teachers and follow up on the assigned integrated lesson plans. Progress on the plans fell into three groups: 1) teachers who had completed the integrated lesson plan but were concerned about its accuracy and consistency; 2) teachers who had almost completed the plan or made great progress but were still concerned or confused about some aspects and had not filled in all the details; and 3) teachers who had made little progress, were uncertain and confused about many aspects, and may have been overloaded with tasks from their school. However, the follow-up procedure led to the teachers' commitment to complete their lesson plans. Regarding student learning assessment, in experimental teaching most of the students earned a score higher than 50%, a rate higher than that from regular teaching. In addition, the teachers assessed that the students were happy, enjoyed learning, and cooperated well in teaching activities. Student interviews about the new lesson plans showed that they were happy with them, willing to learn, and able to apply the knowledge in daily life. Integrated lesson plans can thus increase educational opportunities for youth in the area.

Keywords: sufficiency economy philosophy, integrated education syllabus

Procedia PDF Downloads 186
378 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging

Authors: Jiangbo Li, Wenqian Huang

Abstract:

Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process, and automatic, nondestructive detection of infected fruits remains a challenge for the citrus industry. At present, visual inspection of rotten citrus fruits is commonly performed by workers through ultraviolet-induced fluorescence or manual sorting in citrus packinghouses to remove fruits with fungal infection. However, the former entails problems because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. This study focuses on this problem, using oranges as the research object, and proposes an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. In this work, three normalization approaches were applied prior to analysis to reduce the effect of sample curvature on spectral profiles, and mean normalization was found to be the most effective pretreatment for decreasing curvature-induced spectral variability. Then, principal component analysis (PCA) was applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and observe the ability of Vis-NIR hyperspectra to discriminate the two classes. Normal and decayed spectra were observed to be separable along the first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as the characteristic wavelengths by analyzing the loadings of PC1. A multispectral combination image was generated from the five selected characteristic wavelength images. Based on this combination image, intensity slicing pseudocolor image processing was used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for detecting decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For the 238 independent test samples, comprising fruits infected by Penicillium digitatum and normal fruits, the success rates were 100% and 97.5%, respectively; the proposed algorithm also identified oranges infected by Penicillium italicum with 100% accuracy. These results indicate that the proposed multispectral algorithm is effective and has the potential to be applied in the citrus industry.
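
A minimal sketch of the normalization, PCA, and band-selection steps, with synthetic spectra standing in for the 400-1000 nm Vis-NIR data:

```python
# Band-selection sketch: PCA on mean-normalized average spectra from normal
# and decayed tissue, then pick characteristic wavelengths where the PC1
# loadings peak. Synthetic spectra replace the real hyperspectral data, and
# the downstream pseudocolor slicing step is not shown.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
wavelengths = np.linspace(400, 1000, 215)          # ~2.8 nm resolution
normal = rng.normal(1.0, 0.02, (40, 215)) + 0.2 * np.sin(wavelengths / 90)
decayed = normal * rng.normal(0.85, 0.03, (40, 1)) # darker, decayed-like spectra

spectra = np.vstack([normal, decayed])
spectra /= spectra.mean(axis=1, keepdims=True)     # mean normalization

pca = PCA(n_components=2).fit(spectra)
loadings = np.abs(pca.components_[0])
top5 = wavelengths[np.argsort(loadings)[-5:]]
print("candidate characteristic wavelengths (nm):", np.sort(top5).round(1))
```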

Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging

Procedia PDF Downloads 299
377 A 'Systematic Literature Review' of Specific Types of Inventory Faced by the Management of Firms

Authors: Rui Brito

Abstract:

This contribution presents a literature review of inventory management, a relevant topic for firms due to its heavy use of capital, with implications for profitability in an increasingly competitive and globalized world. Firms seek small inventories in order to reduce holding costs, namely opportunity cost, warehousing and handling costs, deterioration, and obsolescence, but larger inventories are required for several reasons, such as customer service, ordering cost, transportation cost, supplier payments made to reduce unit costs or to take advantage of an expected price increase, and equipment setup cost. Thus, management must address a trade-off between small and large inventories. This literature review concerns three types of inventory (spare parts, safety stock, and vendor-managed) whose management usually lies beyond the scope of logistics. The applied methodology consisted of an online search of databases of scientific documents in English, namely Elsevier, Springer, Emerald, Wiley, and Taylor & Francis, excluding books unless edited, using search engines such as Google Scholar and B-on. The search was based on three keywords/strings (themes) which had to appear verbatim in the article title, ensuring the themes were central to the studies. The whole search period was between 2009 and 2018, with the aim of collecting between twenty and forty relevant studies for each of the specified keywords/strings. Documents were sorted by relevance, and, to prevent the exclusion of more recent articles, which tend to have fewer citations partly because they have had less time to be cited, the search period was divided into two sub-periods (2009-2015 and 2016-2018). The number of surveyed articles per theme varied from 40 to 200, and their citation counts varied more widely, from 3 to 216. Selected articles from the three themes were analyzed; for each theme, the seven most-cited articles from the first sub-period and the three most-cited from the second sub-period were read in full to produce a synopsis of each article. Overall, the findings show that the majority of the articles propose models, mostly mathematical, although with different sub-types for each theme. Almost all articles suggest further studies, some by their own author(s), which widens the diversity of the previous research. The identified research gaps concern the use of surveys to learn which models are most used by firms, the reasons for not using the models with better performance and accuracy, and the satisfaction levels with the outcomes of inventory management and its effect on the improvement of the firm's overall performance. The review ends with the limitations and contributions of the study.

Keywords: inventory management, safety stock, spare parts inventory, vendor managed inventory

Procedia PDF Downloads 96
376 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and of the disease types cured by these plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types they cure before ingredient extraction in the laboratory, and manual plant identification is a time-consuming and labor-intensive process. To date, few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease types. The objective of the proposed study is therefore to classify aromatic medicinal plant parts and their disease types using computer vision technology. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used parts besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128×128×3. Five cutting-edge models were trained: a plain convolutional neural network, Inception V3, Residual Neural Network, MobileNet, and Visual Geometry Group, chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracies of 99.8% and 98.7%, respectively.
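
A minimal transfer-learning sketch matching the best-performing setup, a pre-trained Inception V3 backbone with a new head for 128×128×3 leaf images; the directory layout, class count, and training settings are assumptions, not the study's actual configuration:

```python
# Transfer-learning sketch: frozen Inception V3 backbone plus a new softmax
# head for leaf-image classification. Class count, paths, and epochs are
# hypothetical; true multi-label outputs would swap softmax for sigmoid.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # assumed number of plant-part/disease classes

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(128, 128, 3))
base.trainable = False  # freeze the pre-trained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical usage once a labeled image directory exists:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "leaf_images/", validation_split=0.2, subset="training", seed=1,
#     image_size=(128, 128))
# model.fit(train_ds, epochs=10)
```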

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 186
375 Prediction of Pile-Raft Responses Induced by Adjacent Braced Excavation in Layered Soil

Authors: Linlong Mu, Maosong Huang

Abstract:

For excavations in urban areas, the soil deformation induced by the excavation usually causes damage to the surrounding structures, so displacement control becomes a critical indicator of foundation design for protecting those structures. Evaluating the damage potential of the surrounding structures induced by excavations usually depends on the finite element method (FEM), because of the complexity of the excavation and the variety of the surrounding structures; moreover, evaluating the influence of an excavation on surrounding structures is a three-dimensional problem. It is now well recognized that the small-strain behaviour of the soil significantly influences the response of the excavation. Three-dimensional FEM accounting for the small-strain behaviour of the soil is, however, a very complex method that is hard for engineers to use, so it is important to obtain a simplified method for engineers to predict the influence of excavations on surrounding structures. Based on large-scale finite element calculations with a small-strain soil model, coupled with inverse analysis, an empirical method is proposed here to calculate the three-dimensional soil movement induced by braced excavation. The empirical method captures the small-strain behaviour of the soil and is suitable for layered soil. The free-field soil movement is then applied to the pile to calculate the responses of the pile in both the vertical and horizontal directions. The asymmetric solutions for problems in a layered elastic half-space are employed to solve the interactions between soil points. Both vertical and horizontal pile responses are solved through a finite difference method based on elastic theory. Interactions among the nodes along a single pile, pile-pile interactions, pile-soil-pile interactions, and soil-soil interactions are all counted to improve the accuracy of the method, and for passive piles, shadow effects are also calculated. Finally, the restrictions of the raft on the piles and the soil are summarized as: (1) the summation of the internal forces between the elements of the raft and the elements of the foundation, including piles and soil surface elements, is equal to 0; and (2) the deformations of the pile heads and of the soil surface elements are the same as those of the corresponding elements of the raft. Validations are carried out by comparing the results of the proposed method with results from model tests, FEM, and the existing literature, and the comparisons show very good agreement. The method proposed herein is therefore suitable for predicting the responses of a pile-raft foundation induced by braced excavation in layered soil in both the vertical and horizontal directions when the deformation is small. However, more data are needed to verify the method before it can be used in practice.
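
To make the finite-difference idea concrete, the following heavily simplified single-pile sketch treats the pile as an Euler-Bernoulli beam on Winkler springs pushed by a prescribed free-field movement, with free-head and free-tip boundary conditions; the pile-pile, pile-soil-pile, and raft interactions of the full method are deliberately omitted, and all parameter values are illustrative:

```python
# Simplified passive-pile sketch: EI*y'''' + k_h*(y - y_ff) = 0, discretized
# with a 5-point stencil and zero-moment/zero-shear ends. Illustrative only.
import numpy as np

n, L = 101, 20.0                        # nodes, pile length (m)
h = L / (n - 1)
z = np.linspace(0.0, L, n)
EI = 5.0e5                              # flexural rigidity (kN*m^2), assumed
k_h = 8.0e3                             # subgrade reaction (kN/m^2), assumed
y_ff = 0.03 * np.exp(-z / 6.0)          # free-field lateral movement (m), assumed

A = np.zeros((n, n))
b = np.zeros(n)
c = EI / h**4
for i in range(2, n - 2):               # governing equation at interior nodes
    A[i, i - 2:i + 3] = c * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
    A[i, i] += k_h
    b[i] = k_h * y_ff[i]
# free head and free tip: zero moment (y'' = 0) and zero shear (y''' = 0)
A[0, 0:3] = [1.0, -2.0, 1.0]
A[1, 0:4] = [-1.0, 3.0, -3.0, 1.0]
A[n - 2, n - 4:n] = [-1.0, 3.0, -3.0, 1.0]
A[n - 1, n - 3:n] = [1.0, -2.0, 1.0]

y = np.linalg.solve(A, b)               # pile lateral deflection profile
print("pile head deflection (mm):", 1000 * y[0])
```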

Keywords: excavation, pile-raft foundation, passive piles, deformation control, soil movement

Procedia PDF Downloads 231
374 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and therefore must be identified precisely. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing speeds. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 51
373 Intelligent Campus Monitoring: YOLOv8-Based High-Accuracy Activity Recognition

Authors: A. Degale Desta, Tamirat Kebamo

Abstract:

Background: Recent advances in computer vision and pattern recognition have significantly improved activity recognition through video analysis, particularly with the application of deep convolutional neural networks (CNNs). One-stage detectors now enable efficient video-based recognition by simultaneously predicting object categories and locations. Such advancements are highly relevant in educational settings, where CCTV surveillance could automatically monitor academic activities, enhancing security and classroom management. However, current datasets and recognition systems lack the specific focus on campus environments necessary for practical application in these settings. Objective: This study aims to address this gap by developing a dataset and testing an automated activity recognition system specifically tailored for educational campuses. The EthioCAD dataset was created to capture various classroom activities and teacher-student interactions, facilitating reliable recognition of academic activities using deep learning models. Method: EthioCAD, a novel video-based dataset, was created with a design science research approach to encompass teacher-student interactions across three domains and 18 distinct classroom activities. Using the Roboflow AI framework, the data were processed, with 4.224 KB of frames and 33.485 MB of images managed for frame extraction, labeling, and organization. The Ultralytics YOLOv8 model was then implemented within Google Colab to evaluate the dataset's effectiveness, achieving high mean average precision (mAP) scores. Results: The YOLOv8 model demonstrated robust activity recognition within campus-like settings, achieving an mAP50 of 90.2% and an mAP50-95 of 78.6%. These results highlight the potential of EthioCAD, combined with YOLOv8, to provide reliable detection and classification of classroom activities, supporting automated surveillance needs on educational campuses. Discussion: The high performance of YOLOv8 on the EthioCAD dataset suggests that automated activity recognition for surveillance is feasible within educational environments. This system addresses current limitations in campus-specific data and tools, offering a tailored solution for academic monitoring that could enhance the effectiveness of CCTV systems in these settings. Conclusion: The EthioCAD dataset, alongside the YOLOv8 model, provides a promising framework for automated campus activity recognition. This approach lays the groundwork for future advancements in CCTV-based educational surveillance systems, enabling more refined and reliable monitoring of classroom activities.
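
A minimal training and validation sketch with the Ultralytics YOLOv8 API used in the study; the dataset YAML path and hyperparameters are placeholders, not the actual EthioCAD configuration:

```python
# YOLOv8 sketch via the Ultralytics API: fine-tune a pre-trained checkpoint on
# a custom dataset described by a YAML file, then validate to obtain mAP
# scores. "ethiocad.yaml", epochs, and image size are hypothetical settings.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                # pre-trained checkpoint
model.train(data="ethiocad.yaml", epochs=50, imgsz=640)   # hypothetical config
metrics = model.val()                                     # evaluation on the val split
print(metrics.box.map50, metrics.box.map)                 # mAP50 and mAP50-95
```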

Keywords: deep CNN, EthioCAD, deep learning, YOLOv8, activity recognition

Procedia PDF Downloads 10
372 Discourse Analysis: Where Cognition Meets Communication

Authors: Iryna Biskub

Abstract:

The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications in the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed productive and efficient methods for investigating cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, as it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for investigating the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and of the parliamentary debates on it has demonstrated that cognitive and communicative levels can be mapped in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.

Keywords: cognition, communication, discourse, strategy

Procedia PDF Downloads 253
371 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be the physiological signal yielding the highest classification accuracy for emotion recognition. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms; they are capable of collecting, processing, and storing data on their own and of running complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states from the input data properties. The EEG signals were analyzed with the K-Nearest Neighbors (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal is received in real time and translated from the time to the frequency domain, the Fast Fourier Transform (FFT) technique is utilized to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the KNN technique; arousal and valence datasets are used to train the parameters of the KNN classifier. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device like the NVIDIA Jetson Nano, where EEG-based emotion identification can be employed in applications that can rapidly expand both research and industrial adoption.
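
A minimal sketch of the described pipeline (FFT band powers per epoch, band statistics as features, then KNN), with synthetic signals replacing the cEEGrid/OpenBCI stream and commonly used band edges assumed:

```python
# FFT-plus-KNN sketch: per-epoch band power statistics (mean and spread per
# band) feed a KNN classifier for arousal/valence classes. The synthetic
# epochs and random labels are placeholders; sampling rate and band edges are
# common conventions, assumed here.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250                                    # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_features(epoch: np.ndarray) -> list:
    freqs = np.fft.rfftfreq(epoch.size, 1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        band = psd[(freqs >= lo) & (freqs < hi)]
        feats += [band.mean(), band.std()]  # power density mean and spread
    return feats

rng = np.random.default_rng(7)
epochs = rng.normal(size=(120, 2 * FS))     # 120 two-second synthetic epochs
labels = rng.integers(0, 2, 120)            # placeholder arousal labels
X = np.array([band_features(e) for e in epochs])

knn = KNeighborsClassifier(n_neighbors=5).fit(X[:100], labels[:100])
print("held-out accuracy:", knn.score(X[100:], labels[100:]))
```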

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 105
370 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor

Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro

Abstract:

Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: PID position form (1 DOF) and PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method, which allows a compromise between the theoretically optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations on the Simulink platform, in which each of the system's hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that state space provides for modelling MIMO systems, such controllers are expected to be easy to tune for disturbance rejection, provided the designer is experienced. An in-depth multi-dimensional analysis of the preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
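
A minimal sketch of the first architecture compared, a discrete PID in position form (1 DOF), driven against a toy first-order motor model; the gains and sample time are illustrative, not the tuned values from the study:

```python
# Discrete PID, position form (1 DOF): the control signal is recomputed each
# sample from the accumulated error. Gains, sample time, and the toy plant
# are illustrative; the derivative kick on the reference step is not filtered.
class DiscretePID:
    def __init__(self, kp: float, ki: float, kd: float, ts: float):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.ts                   # rectangular integration
        derivative = (error - self.prev_error) / self.ts   # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical brushed-DC position loop stepped at 1 kHz
pid = DiscretePID(kp=4.0, ki=1.0, kd=1.2, ts=1e-3)
pos, vel = 0.0, 0.0
for _ in range(10000):
    u = pid.update(1.0, pos)           # track a unit position reference
    vel += (u - 0.5 * vel) * 1e-3      # toy motor dynamics (damped velocity)
    pos += vel * 1e-3
print("position after 10 s:", round(pos, 3))
```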

Keywords: control, DC motor, discrete PID, discrete state feedback

Procedia PDF Downloads 266
369 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The linearly constrained minimum-variance design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the direction of arrival (DOA) of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behaviour results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As for the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer that counters the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Projection matrices related to this matrix are then generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for adaptive beamforming can easily be found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
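The following Python sketch illustrates the basic GSC structure discussed above: a quiescent weight vector from the presumed steering vector, a blocking matrix orthogonal to it, and LMS-adapted weights on the blocked branch. The array size, signal angles, and step size are illustrative assumptions rather than the paper's configuration.

# Minimal sketch of a generalized sidelobe canceller for a uniform
# linear array; all scenario parameters are illustrative assumptions.
import numpy as np

N = 8                                         # sensors, half-wavelength spacing
def steering(theta_deg):
    n = np.arange(N)
    return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

a0 = steering(0.0)                            # presumed desired direction
wq = a0 / N                                   # quiescent weight vector
# Blocking matrix: columns span the null space of a0^H, so B^H a0 = 0.
B = np.linalg.svd(a0[None, :].conj())[2][1:].conj().T

rng = np.random.default_rng(1)
wa = np.zeros(N - 1, dtype=complex)
mu = 1e-3                                     # LMS step size
for _ in range(2000):
    s = np.exp(1j * 2 * np.pi * rng.random()) * a0                    # desired signal
    i = 0.5 * np.exp(1j * 2 * np.pi * rng.random()) * steering(40.0)  # interferer
    x = s + i + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    d = wq.conj() @ x                         # main (quiescent) branch output
    xb = B.conj().T @ x                       # blocked branch: desired signal removed
    e = d - wa.conj() @ xb                    # beamformer output
    wa += mu * xb * np.conj(e)                # LMS update of the adaptive weights
w = wq - B @ wa
print(np.abs(w.conj() @ steering(40.0)))      # response toward the interferer is small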

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 112
368 Identification of Potent and Selective SIRT7 Anti-Cancer Inhibitor via Structure-Based Virtual Screening and Molecular Dynamics Simulation

Authors: Md. Fazlul Karim, Ashik Sharfaraz, Aysha Ferdoushi

Abstract:

Background: Computational medicinal chemistry approaches are used for designing and identifying new drug-like molecules, predicting properties and pharmacological activities, and optimizing lead compounds in drug development. SIRT7, a nicotinamide adenine dinucleotide (NAD+)-dependent deacylase that regulates aging, is an emerging target for cancer therapy, with mounting evidence that SIRT7 downregulation plays important roles in reversing cancer phenotypes and suppressing tumor growth. Activation or altered expression of SIRT7 is associated with the progression and invasion of various cancers, including liver, breast, gastric, prostate, and non-small cell lung cancer. Objectives: The goal of this work was to identify potent and selective bioactive candidate inhibitors of SIRT7 by in silico screening of small-molecule compounds obtained from Nigella sativa (N. sativa). Methods: The SIRT7 structure was retrieved from the Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB), and its active site was identified using CASTp and metaPocket. Molecular docking simulation was performed with the PyRx 0.8 virtual screening software. Drug-likeness properties were tested using SwissADME and pkCSM. In silico toxicity was evaluated with Osiris Property Explorer. Bioactivity was predicted with Molinspiration software. Antitumor activity was screened by Prediction of Activity Spectra for Substances (PASS) using the Way2Drug web server. Molecular dynamics (MD) simulation was carried out with the Desmond v3.6 package. Results: A total of 159 bioactive compounds from N. sativa were screened against the SIRT7 enzyme. Five bioactive compounds, chrysin (CID:5281607), pinocembrin (CID:68071), nigellidine (CID:136828302), nigellicine (CID:11402337), and epicatechin (CID:72276), were identified as potent SIRT7 anti-cancer candidates after docking score evaluation and application of Lipinski's Rule of Five. Finally, MD simulation identified chrysin as the top SIRT7 anti-cancer candidate molecule. Conclusion: Chrysin, which shows a potential inhibitory effect against SIRT7, can act as a possible anti-cancer drug candidate. This inhibitor warrants further evaluation to check its pharmacokinetic and pharmacodynamic properties both in vitro and in vivo.
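As an illustration of the drug-likeness filter mentioned above, the following sketch applies Lipinski's Rule of Five with RDKit; the chrysin SMILES string is taken from PubChem, and the thresholds are the standard rule rather than a reproduction of the authors' exact workflow.

# Minimal sketch of the Lipinski Rule-of-Five filter applied after
# docking, using RDKit. Thresholds are the standard rule values.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return (Descriptors.MolWt(mol) <= 500          # molecular weight
            and Descriptors.MolLogP(mol) <= 5      # lipophilicity
            and Lipinski.NumHDonors(mol) <= 5      # H-bond donors
            and Lipinski.NumHAcceptors(mol) <= 10) # H-bond acceptors

# Chrysin (PubChem CID 5281607)
print(passes_rule_of_five("C1=CC=C(C=C1)C2=CC(=O)C3=C(C=C(C=C3O2)O)O"))  # True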

Keywords: SIRT7, antitumor, molecular docking, molecular dynamics simulation

Procedia PDF Downloads 79
367 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically derived from controlled experimental trials with solid particles entrained in single-phase or multi-phase flows. These equations are then employed to predict the erosion damage caused by the continuous impacts of solid particles entrained in a stream flow. It is also well known that the particle impact angle and velocity do not change drastically in gas-sand flows; hence, erosion can be predicted with good accuracy. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles' trajectories and consequently impact the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied together to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) behavior was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the help of the discrete phase model (DPM). To accomplish the erosion modeling, three erosion equations from the literature were introduced into ANSYS Fluent to predict the velocity surge at the screen wire slots and to estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Moreover, the particle tracks and path-lines are also demonstrated based on their residence time, velocity magnitude, and flow turbulence. On one hand, results from the erosion equations show similarities in screen erosion patterns, locations, and DPM concentrations. On the other hand, the equations estimate slightly different values for the maximum erosion rate of the wire-wrapped screen. This is because the erosion equations were developed under assumptions dictated by their respective experimental conditions.
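To make the role of the erosion equations concrete, the sketch below accumulates face erosion from DPM-style particle impacts using a generic form, ER = sum(m_dot * C(d) * f(angle) * v**b) / A_face; the constants and the piecewise-linear impact-angle function are illustrative assumptions, not one of the three literature equations used in the study.

# Minimal sketch of accumulating wall-face erosion from particle impacts.
# All constants and the angle-function table are illustrative assumptions.
import math

def impact_angle_function(angle_rad, table=((0.0, 0.0),
                                            (math.radians(20), 1.0),
                                            (math.radians(90), 0.4))):
    """Piecewise-linear interpolation of f(angle) from a lookup table."""
    a0, f0 = table[0]
    for a1, f1 in table[1:]:
        if angle_rad <= a1:
            return f0 + (f1 - f0) * (angle_rad - a0) / (a1 - a0)
        a0, f0 = a1, f1
    return table[-1][1]

def erosion_rate(impacts, face_area, diam_coeff=1.8e-9, vel_exponent=2.6):
    """impacts: iterable of (mass_flow [kg/s], angle [rad], speed [m/s]).
    Returns the erosion rate in kg/(m^2 s) for one wall face."""
    total = sum(m_dot * diam_coeff * impact_angle_function(angle)
                * speed ** vel_exponent
                for m_dot, angle, speed in impacts)
    return total / face_area

print(erosion_rate([(1e-6, math.radians(30), 12.0)], face_area=1e-5))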

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 163
366 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable housing and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the New York City dataset, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the Coefficient of Determination (R2) from -11.73 to 0.88 and reduced the MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and predicted homeless populations, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model homeless-related time series data. The model shows a close correlation between the actual and predicted homeless populations. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services, to better prepare for the future. Moreover, this prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
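As a sketch of the RNN stage of such a pipeline, the following Python example trains a simple recurrent regressor on a sliding window of monthly counts; the window length, layer sizes, and synthetic series are illustrative assumptions and do not reproduce the HP-RNN architecture.

# Minimal sketch of an RNN regressor for monthly homeless counts,
# using a sliding-window setup. All parameters are illustrative.
import numpy as np
import tensorflow as tf

WINDOW = 12                                   # use 12 months to predict the next
series = np.linspace(12830, 62679, 240)       # synthetic stand-in for NYC counts
series = (series - series.mean()) / series.std()

X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")   # tuned by MSE, as in the study
model.fit(X, y, epochs=20, verbose=0)
print(model.predict(X[-1:], verbose=0))       # next-month prediction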

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121
365 Measuring Oxygen Transfer Coefficients in Multiphase Bioprocesses: The Challenges and the Solution

Authors: Peter G. Hollis, Kim G. Clarke

Abstract:

The overall volumetric oxygen transfer coefficient (KLa) is ubiquitously quantified in bioprocesses by analysing the response of dissolved oxygen (DO) to a step change in the oxygen partial pressure of the sparge gas, using a DO probe. Typically, the response lag (τ) of the probe has been ignored in the calculation of KLa when τ is less than the reciprocal of KLa; failing that, a constant τ has invariably been assumed. These conventions have now been reassessed in the context of multiphase bioprocesses, such as hydrocarbon-based systems. Here, significant variation of τ in response to changes in process conditions has been documented. Experiments were conducted in a 5 L baffled stirred tank bioreactor (New Brunswick) in a simulated hydrocarbon-based bioprocess comprising a C14-20 alkane-aqueous dispersion with suspended non-viable Saccharomyces cerevisiae solids. DO was measured with a polarographic DO probe fitted with a Teflon membrane (Mettler Toledo). The DO concentration response to a step change in the sparge gas oxygen partial pressure was recorded, from which KLa was calculated using a first-order model (without incorporation of τ) and a second-order model (incorporating τ). τ was determined as the time taken to reach 63.2% of the saturation DO after the probe was transferred from a nitrogen-saturated vessel to an oxygen-saturated bioreactor, and is represented as the inverse of the probe constant (KP). The relative effects of the process parameters on KP were quantified using a central composite design with factor levels typical of hydrocarbon bioprocesses, namely 1-10 g/L yeast, 2-20 vol% alkane, and 450-1000 rpm. A response surface was fitted to the empirical data, while ANOVA was used to determine the significance of the effects with a 95% confidence interval. KP varied with changes in the system parameters, with the impact of solids loading statistically significant at the 95% confidence level. Increased solids loading reduced KP consistently, an effect which was magnified at high alkane concentrations, with a minimum KP of 0.024 s-1 observed at the highest solids loading of 10 g/L. This KP was 2.8-fold lower than the maximum of 0.0661 s-1 recorded at 1 g/L solids, demonstrating a substantial increase in τ from 15.1 s to 41.6 s as a result of differing process conditions. Importantly, excluding KP from the calculation of KLa was shown to under-predict KLa for all process conditions, with an error of up to 50% at the highest KLa values. Accurate quantification of KLa, and therefore KP, has a far-reaching impact on industrial bioprocesses, ensuring these systems are not transport-limited during scale-up and operation. This study has shown the incorporation of τ to be essential for KLa measurement accuracy in multiphase bioprocesses. Moreover, since τ has been conclusively shown to vary significantly with process conditions, it must be determined individually for each set of process conditions.
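The difference between the two models can be illustrated with a short curve-fitting sketch: the code below generates a synthetic probe response from the standard second-order step-response model and fits both models to it, showing how ignoring the probe constant under-predicts KLa. The KLa and KP values are illustrative, with KP taken as the lowest probe constant reported above.

# Minimal sketch of fitting KLa with and without the probe lag, using
# normalized first- and second-order step-response models.
import numpy as np
from scipy.optimize import curve_fit

KP = 0.024                                    # probe constant, 1/s (tau = 1/KP)

def first_order(t, kla):                      # probe lag ignored
    return 1.0 - np.exp(-kla * t)

def second_order(t, kla):                     # probe lag incorporated
    return 1.0 - (KP * np.exp(-kla * t) - kla * np.exp(-KP * t)) / (KP - kla)

t = np.linspace(0, 300, 200)
measured = second_order(t, 0.05)              # synthetic probe signal, KLa = 0.05

kla_1, _ = curve_fit(first_order, t, measured, p0=[0.06])
kla_2, _ = curve_fit(second_order, t, measured, p0=[0.06])
print(kla_1[0], kla_2[0])   # first-order fit under-predicts; second-order recovers 0.05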

Keywords: effect of process conditions, measuring oxygen transfer coefficients, multiphase bioprocesses, oxygen probe response lag

Procedia PDF Downloads 266
364 Simulation of Wet Scrubbers for Flue Gas Desulfurization

Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra

Abstract:

Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three main contributions are taken into account and resolved, using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy, and momentum, which affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer, and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, all of which have likewise been resolved using numerous sub-models. The continuous gas phase has been coupled with the liquid phases through source terms, using an approach in which the two software packages are coupled via a link structure. The complete CFD model has been verified against 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm was used to tune numerous model parameters to match the experimental results. The CFD model needed to be fast to evaluate in order to apply this optimization routine, as approximately 1000 simulations were required. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, with the data and source term exchange increasing the computational requirements by approximately 5%. This allows the benefits of both software programs to be exploited.
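As a sketch of the parameter-tuning step, the following Python example applies a simple derivative-free compass-style pattern search, standing in for the gradient-based pattern search used in the study; run_scrubber_model is a hypothetical placeholder for one coupled OpenFOAM/MATLAB evaluation against the 16 experimental tests, and the toy quadratic objective is purely illustrative.

# Minimal sketch of tuning model parameters by pattern search. The
# objective is a toy quadratic; a real run would return the error
# between simulated and measured scrubber performance.
import numpy as np

def run_scrubber_model(params):
    """Hypothetical stand-in for one CFD evaluation: returns the sum of
    squared errors against the experimental tests."""
    target = np.array([0.8, 1.5])
    return float(np.sum((params - target) ** 2))

def pattern_search(objective, x0, step=0.5, tol=1e-4):
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    evals = 1
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                f_trial = objective(trial)
                evals += 1
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step *= 0.5          # shrink the pattern when no poll point improves
    return x, evals

best, n_evals = pattern_search(run_scrubber_model, x0=[0.0, 0.0])
print(best, n_evals)             # converges near [0.8, 1.5]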

Keywords: desulfurization, discrete phase, scrubber, wall film

Procedia PDF Downloads 264
363 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to solve such problems. Problems with simple geometries could easily be solved by traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, by contrast, give very reliable results; per contra, they are computationally quite expensive, i.e. not well suited to solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account; hence, the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are then artificially redistributed among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis is a simple, accurate, and non-time-consuming tool for predicting a lower-bound solution for the collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity, with the collapse-triggering mechanism found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match is found with the exact collapse load values.
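The yield check at the heart of such a procedure can be sketched as follows, using the normal-moment (Nielsen-type) criterion for orthotropic slabs; the resistance values and the specific inequality form are illustrative assumptions rather than the paper's implementation.

# Minimal sketch of a yield check keeping a moment field inside a
# Nielsen-type criterion: (mpx - mx)(mpy - my) >= mxy**2 for sagging
# and (mnx + mx)(mny + my) >= mxy**2 for hogging resistances.
def nielsen_safe(mx, my, mxy, mpx, mpy, mnx, mny):
    """True if the moment triple (mx, my, mxy) does not violate the
    yield criterion. mpx/mpy: sagging resistances; mnx/mny: hogging."""
    sagging = (mpx - mx) * (mpy - my) >= mxy ** 2 and mpx >= mx and mpy >= my
    hogging = (mnx + mx) * (mny + my) >= mxy ** 2 and mx >= -mnx and my >= -mny
    return sagging and hogging

# Example element: moments from the redistributed field (kNm/m, assumed).
print(nielsen_safe(mx=18.0, my=12.0, mxy=6.0,
                   mpx=25.0, mpy=20.0, mnx=25.0, mny=20.0))  # True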

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 178
362 Tertiary Level Teachers' Beliefs about Codeswitching

Authors: Hoa Pham

Abstract:

Code switching, which can be described as the use of students' first language (L1) in second language classrooms, has long been a controversial topic in language teaching and second language acquisition. While it has been widely investigated across different contexts, little empirical research has been undertaken in Vietnam. The findings of this study contribute to our understanding of bilingual discourse and code switching practices in content and language integrated classrooms, with significant implications for language teaching and learning in general, and for language pedagogy at the tertiary level in Vietnam in particular. This study examines the accounts teachers articulated for their code switching practices in content-based Business English classes in Vietnam. Data were collected from five teachers through stimulated recall interviews facilitated by video data, to garner the teachers' cognitive reflections and allow them to vocalise the motivations behind their code switching behaviour in particular contexts. The literature suggests that when participants are provided with a large number of stimuli or cues, they can re-experience the original situation in their imagination with great accuracy. This technique also provides a valuable "insider" perspective on the phenomenon under investigation, which complements the researcher's "outsider" observation, and it creates a relaxed atmosphere during the interview process, which in turn promotes the collection of rich and diverse data. Participants can also be empowered by this technique, as they can raise their own concerns and discuss instances they find important or interesting. The data generated through this study were analysed using a constant comparative approach. The study found that the teachers supported the use of code switching in their pedagogical practices. In particular, as a pedagogical resource, the teachers saw code switching to the L1 as playing a key role in facilitating the students' comprehension of both content knowledge and the target language. They believed the use of the L1 accommodates the students' current language competence and content knowledge. They also expressed positive opinions about the role code switching plays in stimulating students' schematic language and content knowledge, encouraging retention of and interest in learning, and promoting a positive affective environment in the classroom. The teachers perceived that their use of code switching to the L1 helps them meet the students' language needs, prepares students for their study in subsequent courses, and addresses functional needs so that students can cope with English language use outside the classroom. Several factors shaped the teachers' perceptions of their code switching practices, including their accumulated teaching experience, their previous experience as language learners, their theoretical understanding of language teaching and learning, and their knowledge of the teaching context. Code switching was a typical phenomenon in the observed classes and was supported by the teachers in certain contexts. This study reinforces the call in the literature to recognise this practice as a useful instructional resource.

Keywords: codeswitching, language teaching, teacher beliefs, tertiary level

Procedia PDF Downloads 451
361 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning

Authors: James Gallagher, Phillip Benachour

Abstract:

As location-aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to reinforce learning approaches grows. Building on the growing acceptance of ubiquitous computing and the steady progress in both the accuracy and battery usage of pervasive devices, we present a case study of remote location viewing and of how such an application can be utilized to support mobile learning in situ within an existing scenario. Through the case study, we introduce an innovative new application, Mobipeek, based around a request/response protocol for viewing a remote location, and explore how it can apply both as part of a teacher-led activity and in informal learning situations. The system developed allows a user to select a point on a map and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image and an accompanying message, providing context to the response. This application can be used alongside a structured learning activity, such as the use of mobile phone cameras outdoors as part of an interactive lesson. One example of a learning activity would be to collect photos of plants, vegetation, and foliage in the wild as part of a geography or environmental science lesson. Another would be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded and then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help foster students' active participation while helping them understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with peers in a remote location. The activity can be part of a wider study between schools in different areas of the country, enabling sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location, the choice of which will depend on the activity and lesson. For example, architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience is manifested in the different contextual and cultural influences, as well as in the sharing of images from different locations. In addition to the implementation of Mobipeek, we analyse this application and a set of other possible solutions targeted at making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and the feasibility of widespread usage. We also propose elements of "gamification" in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations from both a technical and a mobile learning perspective.
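A minimal sketch of the request/response matching described above is given below, assuming a simple data model and a haversine distance check; the field names, coordinates, and thresholds are hypothetical illustrations, not Mobipeek's actual protocol.

# Minimal sketch: a peek request carries a target point plus time and
# distance constraints, and only users inside those bounds may respond.
import math
import time
from dataclasses import dataclass

@dataclass
class PeekRequest:
    lat: float
    lon: float
    message: str
    max_distance_m: float     # distance constraint
    expires_at: float         # time constraint (unix timestamp)

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def can_respond(req, user_lat, user_lon, now=None):
    now = time.time() if now is None else now
    return (now < req.expires_at and
            haversine_m(req.lat, req.lon, user_lat, user_lon) <= req.max_distance_m)

req = PeekRequest(54.010, -2.786, "Photo of the campus square?", 500.0,
                  time.time() + 3600)
print(can_respond(req, 54.011, -2.785))   # True: within 500 m and not expired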

Keywords: context aware, location aware, mobile learning, remote viewing

Procedia PDF Downloads 291
360 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a holistic evaluation of the entire scanned image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region-of-interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task through the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs of each inception unit to its input. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (consisting primarily of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that reduces spatial resolution and noise. Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The retention and compilation of convolutional features through the residual connections to the inception units, synergized with the input-denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
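As a sketch of the transfer-learning stage, the following Keras example builds a tile classifier from a pretrained Inception-ResNetV2 backbone with a new binary head; the input size, head layout, and frozen-backbone choice are illustrative assumptions rather than the paper's exact fine-tuning recipe.

# Minimal sketch of a transfer-learning tile classifier for tumor /
# normal WSI tiles. All architectural choices here are assumptions.
import tensorflow as tf

backbone = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
backbone.trainable = False                    # freeze for the first training stage

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(tile contains metastasis)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
# Per-tile probabilities would then be stitched into the WSI heatmap
# (the "probabilistic mapping of tumor localizations" stage above).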

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 130
359 Isolation of Clitorin and Manghaslin from Carica papaya L. Leaves by CPC and Its Quantitative Analysis by QNMR

Authors: Norazlan Mohmad Misnan, Maizatul Hasyima Omar, Mohd Isa Wasiman

Abstract:

Papaya (Carica papaya L., Caricaceae) is a tree mainly cultivated for its fruits in many tropical regions, including Australia, Brazil, China, Hawaii, and Malaysia. Besides the fruits, its leaves, seeds, and latex have also been used traditionally for treating diseases and have been reported to possess anti-cancer and anti-malarial properties. The leaves have been reported to contain various chemical compounds, such as alkaloids, flavonoids, and phenolics, with clitorin and manghaslin among the major flavonoids present. Thus, the aim of this study was to quantify the purity of the isolated compounds clitorin and manghaslin by quantitative Nuclear Magnetic Resonance (qNMR) analysis. Only fresh C. papaya leaves were used for the juice extraction procedure, and the juice was subsequently freeze-dried to obtain a dark green powdered extract prior to Centrifugal Partition Chromatography (CPC) separation. The CPC experiments were performed using a two-phase solvent system comprising ethyl acetate/butanol/water (1:4:5, v/v/v). The upper organic phase was used as the stationary phase, and the lower aqueous phase was employed as the mobile phase. Ten fractions were obtained after one hour of runtime. Fraction 6 and fraction 8 were identified as clitorin (m/z 739.21 [M-H]-) and manghaslin (m/z 755.21 [M-H]-), respectively, based on LCMS data and full NMR analysis (1H NMR, 13C NMR, HMBC, and HSQC). The 1H-qNMR measurements were carried out using a 400 MHz NMR spectrometer (JEOL ECS 400 MHz, Japan), with deuterated methanol as the solvent. Quantification was performed using the AQARI method (Accurate Quantitative NMR) with deuterated 1,4-bis(trimethylsilyl)benzene (BTMSB) as the internal reference substance. The AQARI protocol covers not only the NMR measurement but also the sample preparation, providing higher precision and accuracy than other qNMR methods. The 90° pulse length and the T1 relaxation times of the compounds and BTMSB were determined prior to quantification to give the best signal-to-noise ratio. Regions containing the two downfield signals from the aromatic part (6.00-6.89 ppm) and the 18H singlet arising from BTMSB (0.63-1.05 ppm) were selected for integration. The purities of clitorin and manghaslin were calculated to be 52.22% and 43.36%, respectively; further purification is needed to increase them. This finding demonstrates the use of qNMR for the quality control and standardization of various plant extracts, which can be applied to NMR fingerprinting of other plant-based products with good reproducibility, including cases where commercial standards are not readily available.
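The internal-standard purity calculation underlying this result can be sketched as follows; the integrals and weighed masses are illustrative numbers, not the study's raw data, while the molar masses are approximate values for clitorin and the deuterated BTMSB standard.

# Minimal sketch of the 1H-qNMR purity calculation with an internal
# standard: purity = (I_a/I_s) * (N_s/N_a) * (M_a/M_s) * (m_s/m_a) * P_s.
# Integrals and weighed masses below are illustrative assumptions.
def qnmr_purity(i_a, n_a, m_molar_a, m_weighed_a,
                i_s, n_s, m_molar_s, m_weighed_s, purity_s=1.0):
    """Mass-fraction purity of the analyte from integral ratios."""
    return ((i_a / i_s) * (n_s / n_a) * (m_molar_a / m_molar_s)
            * (m_weighed_s / m_weighed_a) * purity_s)

purity = qnmr_purity(
    i_a=1.95, n_a=2, m_molar_a=740.7, m_weighed_a=10.0,    # 2 aromatic 1H, clitorin
    i_s=18.0, n_s=18, m_molar_s=226.5, m_weighed_s=1.6,    # 18H singlet, BTMSB-d4
)
print(f"{purity:.1%}")   # on the order of the ~52% reported above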

Keywords: Carica papaya, clitorin, manghaslin, quantitative Nuclear Magnetic Resonance, Centrifugal Partition Chromatography

Procedia PDF Downloads 496