Search results for: optimal digital signal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10279

1129 Management Methods of Food Losses in Polish Processing Plants

Authors: Beata Bilska, Marzena Tomaszewska, Danuta Kolozyn-Krajewska

Abstract:

Food loss and food waste are a global problem of the modern economy. The research undertaken aimed to analyze how food is handled in catering establishments with respect to food waste and to identify the main ways of managing foods and dishes not served to consumers. A survey study was conducted from January to June 2019. The catering establishments participating in the study were selected purposively, and the study included only establishments located in the Mazowieckie Voivodeship (Poland). Forty-two completed questionnaires were collected. In some questions, answers were given on a 5-point scale from 1 to 5 (from "always"/"every day" to "never"); the survey also included closed questions with a suggested set of answer options. The respondents stated that in their workplaces, cold dishes and hot ready meals are discarded every day or almost every day (23.7% and 20.5% of answers, respectively). The procedure most frequently used for dishes not served to consumers on a given day is storage at a cool temperature until the following day. One-fifth of respondents admitted that consumers "always" or "usually" leave uneaten meals on their plates, and over 41% said consumers "sometimes" do so. It was additionally found that food not used in the foodservice sector is most often thrown into a public rubbish container. Most often thrown into the public container (with communal trash) were expired products (80.0%), plate waste (80.0%), and inedible products such as fruit and vegetable peels and eggshells (77.5%). The item most frequently thrown into a container dedicated only to food waste was used deep-frying oil (62.5%). Ten percent of respondents indicated that inedible products in their workplaces are allocated to animal feed. Food waste in the foodservice sector remains an insufficiently studied issue, as owners of these establishments are often unwilling to disclose data on the subject. Incorrect practices for managing foods not served to consumers were observed.
There is a need to develop educational activities for employees and management concerning food waste management in the foodservice sector.

Keywords: food waste, inedible products, plate waste, used deep-frying oil

Procedia PDF Downloads 127
1128 Economic Policy to Promote Small and Medium-Sized Enterprises in Georgia in the Post-Pandemic Period

Authors: Gulnaz Erkomaishvili

Abstract:

Introduction: The paper assesses the impact of the COVID-19 pandemic on the activities of small and medium-sized enterprises in Georgia, identifies their problems, and analyzes the state's economic policy measures. During the pandemic, entrepreneurs named the imposition of restrictions, limited access to financial resources, a shortage of qualified personnel, high tax rates, and unhealthy competition in the market, among others, as the main challenges. The Georgian government had to take special measures to mitigate the crisis caused by the pandemic; in 2020, for example, it mobilized more than 1.6 billion GEL for various measures to support entrepreneurs. Based on the research, a small and medium-sized entrepreneurship development strategy is presented, corresponding conclusions are drawn, and recommendations are developed. Objectives: The objects of the research are small and medium-sized enterprises and the economic-political decisions aimed at their promotion. Methodology: This paper uses general and specific methods, in particular analysis, synthesis, induction, deduction, scientific abstraction, comparative and statistical methods, as well as expert evaluation. In-depth interviews with experts were conducted to determine quantitative and qualitative indicators, and publications of the National Statistics Office of Georgia were used to determine the regularity between analytical and statistical estimations. Theoretical and applied research by international organizations and academic economists was also used. Contributions: The COVID-19 pandemic has had a significant impact on small and medium-sized enterprises, for whom the lockdown was a major challenge. Total sales volume decreased, while the innovative capabilities of enterprises and the volume of sales through remote channels increased.
As for the assessment of state support measures by small and medium-sized entrepreneurs: despite the existence of support programs, a large number of entrepreneurs still do not evaluate the measures taken by the state positively. Among the desirable state measures that would improve their activities, entrepreneurs who assessed the state's activity negatively or largely negatively named: tax incentives or exemption from certain taxes at the initial stage; periodic training in digital technologies and marketing to improve employees' qualifications; logical and adequate criteria for awarding grants and funding; help in finding investors; and less bureaucracy.

Keywords: small and medium enterprises, small and medium entrepreneurship, economic policy for small and medium entrepreneurship development, government regulations in Georgia, COVID-19 pandemic

Procedia PDF Downloads 155
1127 A Location-Based Search Approach According to Users’ Application Scenario

Authors: Shih-Ting Yang, Chih-Yun Lin, Ming-Yu Li, Jhong-Ting Syue, Wei-Ming Huang

Abstract:

The global positioning system (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Take the example of finding a parking lot with a parking app: a location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces. However, it cannot provide search results tailored to the user's requirement situation. For that reason, this paper develops a "Location-based Search Approach according to Users' Application Scenario", combining location-based search with demand determination to help users obtain information consistent with their requirements. The approach consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), the cosine theorem is used to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbor) classification is employed to classify users' browsing records. After that, in the Information Volume Level Determination Module (IVLDM), the number of users clicking the information at different locations is compared with the average number of users clicking the information at a specific location to evaluate the urgency of demand; a two-dimensional space is then used to estimate users' application situations. In the last step, the Location-based Search Module (LBSM) compares all search results with the average number of characters of the search results, categorizes the search results by Manhattan distance, and selects results according to the user's application scenario. Additionally, a Web-based system was developed to demonstrate the practical application of the methodology.
The application scenario-based estimate and the location-based search are used to evaluate the type and abundance of information expected by the public at a specific location, so that information demanders can obtain information consistent with their application situation at that location.
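The ICEM's kNN classification and the LBSM's Manhattan-distance grouping can be illustrated with a minimal, self-contained sketch (the feature vectors and category labels below are hypothetical, not the authors' data):

```python
from collections import Counter

def manhattan(a, b):
    # Manhattan (city-block) distance between two feature vectors
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(query, records, k=3):
    """Classify a browsing record by majority vote among its
    k nearest labelled records under Manhattan distance."""
    neighbours = sorted(records, key=lambda r: manhattan(query, r[0]))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical (clicks, dwell-time) features labelled by information category
records = [
    ((1, 2), "parking"), ((2, 1), "parking"), ((2, 2), "parking"),
    ((8, 9), "dining"), ((9, 8), "dining"),
]
print(knn_classify((1, 1), records))  # falls in the "parking" cluster
```

The same Manhattan metric can serve both for grouping search results and for classifying browsing records; only the feature encoding differs.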

Keywords: data mining, knowledge management, location-based service, user application scenario

Procedia PDF Downloads 125
1126 The Impact of Gender Difference on Crop Productivity: The Case of Decha Woreda, Ethiopia

Authors: Getinet Gezahegn Gebre

Abstract:

The study examined the impact of gender differences on crop productivity in Decha woreda of the southwest Kafa zone, located 140 km from Jimma town and 460 km southwest of Addis Ababa, between Bonga town and the Omo River. The specific objectives were to assess the extent to which the agricultural production system is gender oriented, to examine access to and control over productive resources, and to estimate men's and women's productivity in agriculture. Cross-sectional data collected from a total of 140 respondents were used, of whom 65 were female-headed and 75 were male-headed households. The data were analyzed using the Statistical Package for the Social Sciences (SPSS). Descriptive statistics such as frequencies, means, percentages, t-tests, and chi-square tests were used to summarize and compare the information between the two groups. Moreover, a Cobb-Douglas (CD) production function was used to estimate the productivity difference in agriculture between male- and female-headed households. Results showed that male-headed households (MHH) own more productive resources, such as land, livestock, labor, and other agricultural inputs, than female-headed households (FHH). The estimate of the CD production function shows that livestock, herbicide use, land size, and male labor were statistically significant for MHH, while livestock, land size, herbicide use, and female labor were significant for FHH. The crop productivity difference between MHH and FHH was about 68.83% in the study area. However, if FHH had the same access to inputs as MHH, the gross value of their output would be higher by 23.58%, suggesting that FHH would be more productive than MHH given equal access to inputs.
Based on the results obtained, the following policy implications can be drawn: giving FHH access to inputs that increase agricultural productivity, such as herbicides, livestock, and male labor; increasing the productivity of land; and introducing technologies that reduce the time and energy demands on women, especially for enset processing.
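The Cobb-Douglas estimation step amounts to an ordinary least-squares fit in log space, since ln Y = ln A + Σ b_i ln X_i. A minimal sketch on synthetic data follows (the input names and elasticities are illustrative assumptions, not the survey's estimates):

```python
import math, random

def ols(X, y):
    """Solve the normal equations (X'X)b = X'y by Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)] for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for col in range(p):                      # forward elimination with pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Cobb-Douglas: Y = A * land^0.6 * labour^0.3  ->  ln Y = ln A + 0.6 ln land + 0.3 ln labour
random.seed(1)
X, y = [], []
for _ in range(200):
    land, labour = random.uniform(1, 10), random.uniform(1, 10)
    out = 2.0 * land ** 0.6 * labour ** 0.3 * math.exp(random.gauss(0, 0.01))
    X.append([1.0, math.log(land), math.log(labour)])
    y.append(math.log(out))
const, b_land, b_labour = ols(X, y)
print(round(b_land, 2), round(b_labour, 2))  # elasticities near 0.6 and 0.3
```

The fitted coefficients b_land and b_labour are the output elasticities; comparing such fits for MHH and FHH subsamples is what yields the productivity gap quoted above.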

Keywords: gender difference, crop, productivity, efficiency

Procedia PDF Downloads 98
1125 Exploring Bidirectional Encoder Representations from Transformers' Capabilities to Detect English Preposition Errors

Authors: Dylan Elliott, Katya Pertsova

Abstract:

Preposition errors are among the most common errors made by L2 speakers, and improving error detection and correction methods remains an open issue in natural language processing (NLP). This research investigates whether the Bidirectional Encoder Representations from Transformers model (BERT) can correct preposition errors accurately enough to be useful in error correction software, and finds that BERT performs strongly when the scope of its corrections is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, in which only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions, based on sentence context, for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance, and were further analyzed to determine how often BERT agreed with the judgements of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall, but lowered precision due to an increase in false positives, i.e., falsely flagged errors. In most cases, however, these false positives were not errors in preposition usage but cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan a further crowd-sourcing study testing the grammaticality of BERT's suggested corrections against native speakers' judgements.
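The masking idea can be illustrated with a toy stand-in: each preposition slot is masked and candidates are ranked by how well they fit the surrounding context. Here a crude bigram count replaces BERT's masked-token probabilities (the corpus, preposition list, and function names are hypothetical; this is not the paper's pipeline):

```python
from collections import Counter

PREPOSITIONS = {"in", "on", "at", "of", "to"}

def train_bigrams(corpus):
    # Count (previous word, preposition) and (preposition, next word) pairs
    left, right = Counter(), Counter()
    for sent in corpus:
        w = sent.lower().split()
        for i, tok in enumerate(w):
            if tok in PREPOSITIONS:
                if i > 0:
                    left[(w[i - 1], tok)] += 1
                if i + 1 < len(w):
                    right[(tok, w[i + 1])] += 1
    return left, right

def suggest(words, i, left, right):
    """Mask the preposition at position i and return the candidate that
    best fits the immediate context (a crude stand-in for BERT's
    masked-token probabilities)."""
    def score(p):
        s = left[(words[i - 1], p)] if i > 0 else 0
        s += right[(p, words[i + 1])] if i + 1 < len(words) else 0
        return s
    return max(PREPOSITIONS, key=score)

corpus = ["she arrived at the station", "he arrived at noon",
          "the book is on the table", "we live in the city"]
left, right = train_bigrams(corpus)
print(suggest("they arrived in the station".split(), 2, left, right))
```

If the suggestion differs from the preposition actually written, the slot is flagged as a possible error; comparing such flags with known edits is how the recall/precision figures above are computed.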

Keywords: BERT, grammatical error correction, preposition error detection, prepositions

Procedia PDF Downloads 148
1124 Astronomy in the Education Area: A Narrative Review

Authors: Isabella Lima Leite de Freitas

Abstract:

The importance of astronomy for humanity is unquestionable. Besides being a robust science, capable of bringing new discoveries every day and rapidly deepening researchers' understanding of the universe, scientific research in this area can also help in various applications outside the domain of astronomy. The objective of this study was to review and conduct a descriptive analysis of published studies that present the importance of astronomy in education. A narrative review of the literature was performed, considering articles published in the last five years. As astronomy involves the study of physics, chemistry, biology, mathematics, and technology, one of the studies evaluated presented astronomy as the gateway to science, documenting the presence of astronomy in 52 school curricula in 37 countries, with celestial movement the dominant content area. Another intervention study, evaluating children aged 4-5 years, demonstrated that attributing personal characteristics to cosmic bodies, together with the use of comprehensive astronomy concepts, favored science learning in preschool-age children through practical follow-up activities and free drawing. Aiming to measure scientific literacy, another study, developed in Turkey, motivated that country's authorities to change the teaching materials and curriculum of secondary schools after "astronomy" emerged as one of the most attractive subjects for young people aged 15 to 24. There are also reports in the literature of pedagogical tools such as a human-scale representation of the Solar System, in which students walk along the orbits of the planets while studying the laws of dynamics. This tool supported the teaching of the relationship between distance, duration, and speed over the planets' orbital periods, in addition to improving the motivation and well-being of students aged 14-16.
An important impact of astronomy on education was demonstrated in a study that evaluated the participation of high school students in national astronomy olympiads and the International Astronomy Olympiad. The study concluded that these olympiads considerably influence students who later pursue a career in teaching or research, many of them in astronomy itself. In addition, the literature indicates that teaching astronomy in the digital age has made data more available both to researchers and to the general population. This can further increase the curiosity that astronomy has always instilled in people and promote the dissemination of knowledge on an expanded scale. Currently, astronomy is considered an important ally in strengthening the school curricula of children, adolescents, and young adults; it is used as a teaching tool and is extremely useful for scientific literacy, being increasingly employed in education.

Keywords: astronomy, education area, teaching, review

Procedia PDF Downloads 107
1123 Processing and Evaluation of Jute Fiber Reinforced Hybrid Composites

Authors: Mohammad W. Dewan, Jahangir Alam, Khurshida Sharmin

Abstract:

Synthetic fibers (carbon, glass, aramid, etc.) are generally used to make composite materials with good mechanical and thermal properties, but they are expensive and non-biodegradable. In Bangladesh, jute fibers are readily available, inexpensive, and offer good mechanical properties. The attractive properties of natural fibers (low cost, low density, eco-friendliness) have made them a promising reinforcement in hybrid composites that need not sacrifice mechanical performance. In this study, jute and E-glass fiber reinforced hybrid composites were fabricated by hand lay-up followed by compression molding, with a room-temperature-cured two-part epoxy resin as the matrix. Composite panels approximately 6-7 mm thick were fabricated from 17 layers of woven glass and jute fabric in different layering sequences: jute only, glass only, alternating glass and jute (g/j/g/j...), and 4 glass - 9 jute - 4 glass (4g-9j-4g). The fabricated panels were analyzed through fiber volume calculation, tensile testing, bending testing, and water absorption testing. Hybridization of jute and glass fiber results in better tensile, bending, and water absorption properties than jute-only composites, but inferior properties compared with glass-only composites. Among the layering sequences, 4g-9j-4g gave the best tensile, bending, and water absorption properties. The effects of chemical treatment of the woven jute fiber and of chopped glass microfiber infusion were also investigated. A hybrid composite with chemically treated jute fiber and 2 wt.% chopped glass microfiber showed about a 12% improvement in flexural strength compared with the untreated panel without microfiber. However, fiber chemical treatment and micro-filler had no significant effect on tensile strength.

Keywords: compression molding, chemical treatment, hybrid composites, mechanical properties

Procedia PDF Downloads 159
1122 The Impact of Sign Language on Generating and Maintaining a Mental Image

Authors: Yi-Shiuan Chiu

Abstract:

Deaf signers have been found to outperform hearing nonsigners on mental imagery tasks. The goal of this study was to investigate the ability of deaf signers of Taiwanese Sign Language (TSL) to generate, maintain, and manipulate mental images. In the visual image task, participants first memorized digits formed within a 4 × 5 grid. After a cue, a Chinese digit character shown at the top of a blank grid, participants had to form the corresponding digit image. When a probe appeared, a grid containing a red circle, participants had to decide as quickly as possible whether the probe would have been covered by the mental image of the digit. The interstimulus interval (ISI) between cue and probe was manipulated. In Experiment 1, 24 deaf signers and 24 hearing nonsigners performed image generation tasks (ISI: 200, 400 ms) and image maintenance tasks (ISI: 800, 2000 ms). The results showed that deaf signers had an enhanced ability to generate and maintain a mental image. To explore the process further, in Experiment 2, 30 deaf signers and 30 hearing nonsigners performed a visual search while maintaining a mental image: between the digit image cue and the red circle probe, they judged whether a target triangle's apex pointed right or left. When there was only one triangle, deaf signers and hearing nonsigners showed similar search performance, with search targets at mental image locations being facilitated. However, deaf signers maintained the mental image better and faster than nonsigners. In Experiment 3, the number of triangles was increased to four to raise the difficulty of the search; deaf participants performed more accurately in both the visual search and image maintenance tasks.
The results suggest that people may use eye movements as a mnemonic strategy to maintain a mental image, and that deaf signers are better able to resist the interference of eye movements when there are fewer distractors. In sum, these findings suggest that deaf signers have enhanced mental image processing.

Keywords: deaf signers, image maintenance, mental image, visual search

Procedia PDF Downloads 155
1121 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and fast processors in modern high-end servers, which can manage large in-memory transaction workloads. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. Moreover, with the recent growth of multi-tenant cloud applications and the associated security concerns, many organisations weigh the trade-offs and continue to rely on the fast, reliable transaction processing of disk-based database systems. For these organisations, the only way of increasing throughput is to improve the performance of disk-based concurrency control. This warrants a hybrid database system able to selectively apply enhanced disk-based data management within the context of in-memory systems, improving overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate that this prefetching can bring disk-based systems close to in-memory performance, paving the way for improved hybrid database systems. This paper proposes the novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware of equivalent power in terms of the number and speed of processors. The experimental results clearly substantiate that, used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems.
These promising results show that enhanced disk-based systems can improve hybrid data management within the broader context of in-memory systems.
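The intuition behind EMA-style prefetching, hiding disk fetches behind the CPU work of transactions already running, can be sketched with a toy cost model (the latency numbers and the overlap model are illustrative assumptions, not measurements from the paper):

```python
DISK_LATENCY = 10   # cost to fetch one page from disk (arbitrary units)
CPU_COST = 2        # cost to process one page once it is in memory

def run_serial(txns):
    # No prefetching: each transaction fetches its pages, then processes them
    return sum(len(t) * (DISK_LATENCY + CPU_COST) for t in txns)

def run_prefetch(txns):
    """Overlap the next transaction's fetches with the current one's
    processing: a fetch only adds waiting time when it cannot be
    hidden behind CPU work already queued."""
    total, cpu_backlog = 0, 0
    for t in txns:
        fetch = len(t) * DISK_LATENCY
        total += max(0, fetch - cpu_backlog)   # fetch time not hidden by CPU work
        cpu_backlog = len(t) * CPU_COST        # work that can hide the next fetch
    total += cpu_backlog                       # drain the last transaction's work
    return total

txns = [list(range(5)) for _ in range(100)]    # 100 transactions, 5 pages each
print(run_serial(txns), run_prefetch(txns))    # prefetching is much cheaper
```

In this toy model only the first transaction pays its full fetch cost; every later fetch is partially hidden, which is the effect that lets disk-based throughput approach in-memory levels when the access set is known in advance.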

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 420
1120 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil, and it is common to conduct time-lapse surveys of several types at a given site to improve imaging results. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally designed for a desktop personal computer and based on one-dimensional assumptions; hence they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrating software that enables near-real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of large-scale time-lapse surveys from multiple geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and modules of E4D-MP for inverting individual or combined datasets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 158
1119 Molecular Topology and TLC Retention Behaviour of s-Triazines: QSRR Study

Authors: Lidija R. Jevrić, Sanja O. Podunavac-Kuzmanović, Strahinja Z. Kovačević

Abstract:

Quantitative structure-retention relationship (QSRR) analysis was used to predict the chromatographic behavior of s-triazine derivatives from theoretical descriptors computed from their chemical structure. The reported investigation relates molecular topological descriptors to the chromatographic behavior of s-triazine derivatives obtained by reversed-phase (RP) thin-layer chromatography (TLC) on silica gel impregnated with paraffin oil, using ethanol-water mobile phases (φ = 0.5-0.8, v/v). The retention parameter (RM0) of the 14 investigated s-triazine derivatives was used as the dependent variable, while simple connectivity indices of different orders were used as independent variables. The best QSRR model for predicting the RM0 value was obtained with the simple third-order connectivity index (3χ) in a second-degree polynomial equation. The correlation coefficient (r = 0.915), Fisher's value (F = 28.34), and root mean square error (RMSE = 0.36) indicate that the model is statistically significant. To test the predictive power of the QSRR model, the leave-one-out cross-validation technique was applied. The internal cross-validation parameters (r2CV = 0.79, r2adj = 0.81, PRESS = 1.89) reflect the high predictive ability of the generated model and confirm that it can be used to predict the RM0 value. A multivariate classification technique, hierarchical cluster analysis (HCA), was applied to group the molecules according to their molecular connectivity indices; HCA is a descriptive statistical method frequently used for classification, an important area of data processing. The HCA performed on the simple molecular connectivity indices obtained from the 2D structures of the investigated s-triazine compounds resulted in two main clusters, in which the compounds were grouped according to the number of atoms in the molecule.
This agrees with the fact that these descriptors are calculated on the basis of the number of atoms in the molecule of the investigated s-triazine derivatives.
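The second-degree polynomial QSRR fit and the leave-one-out PRESS statistic can be sketched as follows (the RM0 and 3χ values below are synthetic, chosen to lie on an exact quadratic; they are not the paper's data):

```python
def polyfit2(xs, ys):
    """Least-squares fit of y = a + b*x + c*x^2 via the normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]                 # power sums
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    b = T[:]
    for col in range(3):                                            # Gaussian elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                                             # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]

def press_loo(xs, ys):
    """Leave-one-out PRESS: refit without each point and sum the squared
    prediction errors, the internal-validation statistic quoted above."""
    total = 0.0
    for i in range(len(xs)):
        a, b, c = polyfit2(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - (a + b * xs[i] + c * xs[i] ** 2)) ** 2
    return total

# Hypothetical RM0 values vs. third-order connectivity index (3χ)
chi3 = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
rm0 = [0.5 + 0.8 * x - 0.1 * x * x for x in chi3]   # exact quadratic, PRESS ≈ 0
a, b, c = polyfit2(chi3, rm0)
print(round(a, 3), round(b, 3), round(c, 3))
```

With real retention data the PRESS is nonzero, and r2CV is computed from it relative to the total sum of squares of the RM0 values.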

Keywords: s-triazines, QSRR, chemometrics, chromatography, molecular descriptors

Procedia PDF Downloads 394
1118 Multi-Objective Optimization of the Thermal-Hydraulic Behavior for a Sodium Fast Reactor with a Gas Power Conversion System and a Loss of Off-Site Power Simulation

Authors: Avent Grange, Frederic Bertrand, Jean-Baptiste Droin, Amandine Marrel, Jean-Henry Ferrasse, Olivier Boutin

Abstract:

CEA and its industrial partners are designing a gas Power Conversion System (PCS) based on a Brayton cycle for the ASTRID sodium-cooled fast reactor. Investigations of the control and regulation requirements for operating this PCS during operational, incidental, and accidental transients are necessary to adapt core heat removal. To this end, we developed a methodology to optimize the thermal-hydraulic behavior of the reactor during normal operation, incidents, and accidents. The methodology consists of a multi-objective optimization for a specific sequence, whose aim is to increase component lifetime by simultaneously reducing several thermal stresses while bringing the reactor into a stable state, subject to safety and operating constraints. Operational, incidental, and accidental sequences use specific regulations to control the thermal-hydraulic behavior of the reactor, each defined by a setpoint, a controller, and an actuator. In the multi-objective problem, the optimization parameters are the setpoints and controller settings of the regulations included in the sequence. The methodology thus allows designers to define an optimized control strategy of the plant for the studied sequence and hence to pilot the PCS at its best. The multi-objective optimization is performed by evolutionary algorithms coupled to surrogate models built on variables computed by the thermal-hydraulic system code CATHARE2. The methodology is applied to a loss-of-off-site-power sequence. Three variables are controlled: the sodium outlet temperature of the sodium-gas heat exchanger, the turbomachine rotational speed, and the water flow through the heat sink. These regulations are chosen to minimize thermal stresses on the gas-gas heat exchanger, the sodium-gas heat exchanger, and the vessel. The main results of this work are optimal setpoints for the three regulations.
Moreover, Proportional-Integral-Derivative (PID) controller settings are considered, and efficient actuators for the controls are chosen on the basis of sensitivity analysis results. Finally, the optimized regulation system and the reactor control procedure provided by the optimization process are verified through a direct CATHARE2 calculation.
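The core of the evolutionary multi-objective step, keeping the nondominated trade-off points, can be sketched on a toy two-objective problem (the objective functions below are illustrative; the actual study minimizes thermal stresses evaluated through CATHARE2 surrogates):

```python
import random

def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly better on one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the nondominated points: the trade-off surface an
    evolutionary optimizer converges toward."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy two-objective problem over one decision variable x:
# minimise f1 = x^2 and f2 = (x - 2)^2; the known Pareto set is x in [0, 2]
random.seed(0)
candidates = [random.uniform(-1, 3) for _ in range(200)]
objs = [(x * x, (x - 2) ** 2) for x in candidates]
front = pareto_front(objs)
print(len(front), "nondominated points out of", len(objs))
```

A full evolutionary algorithm (e.g. NSGA-II) iterates variation plus nondominated selection; in the study each "point" would be a vector of setpoints and controller settings, scored by the surrogate models.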

Keywords: gas power conversion system, loss of off-site power, multi-objective optimization, regulation, sodium fast reactor, surrogate model

Procedia PDF Downloads 309
1117 Designing Food Products with Seasoned Plant Components Intended for Obese and Diabetic Individuals

Authors: A. Gramza-Michałowska, J. Skręty, M. Antczak, J. Kobus-Cisowska, D. Kmiecik, J. Korczak, Bartosz Kulczyński

Abstract:

Background: Modern consumer highly appreciates the correlation between eating habits and health. Intensified research showed many proofs confirming that food besides its basic nutritional function, possess also significant prophylactic and therapeutic potential. Preventive potential of selected food is commonly used as improvement factor of patients life standard. World Health Organization indicates that diabetes (Diabetes mellitus) and obesity are two of the most common and dangerous diseases. Diet therapy is an element of diabetes education program and a part of healing process, allowing maintaining and remaining the optimal metabolic state of the system. It must be remembered that diabetes treatment should be individualized to each patient. One of highly recommended vegetable for diabetes is asparagus (Asparagus officinalis L.), low calories common plant, growing in European countries. Objective: To propose the technology of unsweetened muesli production with addition of new components, we investigated the effects of selected vegetable addition on antioxidative capacity and consumer’s acceptance of muesli as representative of breakfast product. Methods: Muesli was formulated from a composition of oat flakes, flaxseed, bran, carrots, broccoli and asparagus. Basic composition of muesli was evaluated as content of protein, lipids, fatty acid composition, ash, selected minerals and caloricity. Antioxidant capacity of muesli was evaluated with use radical scavenging methods (DPPH, ABTS), ORAC value and PCL - photochemiluminescence antiradical potential. Proposed muesli as new product was also characterized with sensory analysis, which included color, scent, taste, consistency and overall acceptance of a product. Results: Results showed that addition of freeze-dried asparagus into muesli allowed to lower the fat content and caloricity of a product according to the base product. 
No significant loss in antioxidant potential was observed, and the sensory value of the product was not negatively affected. Conclusion: The designed muesli would be an answer for obese people looking for a healthy snack during the daytime. The results showed that a product with added asparagus would be accepted by consumers and, because of its antidiabetic potential, could be an important factor in the prevention of diabetes and obesity. Financial support by the EU Project no. PO IG 01.01.00.00-061/09.

Keywords: muesli, vegetables, asparagus, antioxidant potential, lipids

Procedia PDF Downloads 318
1116 Brand Resonance Strategy for Long-Term Market Survival: Does Brand Resonance Matter for SMEs? An Investigation of SMEs' Digital Branding (Facebook, Twitter, Instagram and Blog) Activities and Strong Brand Development

Authors: Noor Hasmini Abd Ghani

Abstract:

Brand resonance is among the newer strategies receiving growing attention from larger companies concerned with long-term market survival. Brand resonance emphasizes two main characteristics, intensity and activity, which are able to generate a psychological bond and an enduring relationship between a brand and a consumer. This strong attachment relationship links brand resonance to the concept of the consumer-brand relationship (CBR), which confers a competitive advantage for long-term market survival. The brand resonance approach applies not only to larger companies but can be adapted to small and medium enterprises (SMEs) as well. SMEs are recognized as a vital pillar of the world economy in both developed and emerging countries, owing to their contributions to economic growth, such as employment opportunities, wealth creation, and poverty reduction. In particular, SMEs are pivotal to the well-being of the Malaysian economy and society: they provide jobs for 66% of the workforce and contribute 40% of GDP. Among their sectors, the SME service category covering food and beverage (F&B) is one of the high-potential industries in Malaysia. For these reasons, a strong SME brand, or brand equity, is vital for long-term market survival. However, appropriate strategies for developing SME brand equity are still lacking. The difficulties had never been so evident until COVID-19 swept across the globe in 2020. Since the pandemic began, more than 150,000 SMEs in Malaysia have shut down, leaving more than 1.2 million people jobless. As SMEs are a pillar of any economy in the world, and given the negative effect of COVID-19 on their growth, their protection has become more important than ever. 
Therefore, a strategy capable of developing strong SME brands is essential, and this is where the brand resonance strategy introduced in this study comes in. The study aims to investigate the impact of CBR as a predictor and mediator of the effect of social media marketing (SMM) activities on SME e-brand equity (strong brand) building. It employed a quantitative research design based on an electronic survey, with a valid response of 300 respondents. Interestingly, the results revealed the important role of CBR both as predictor and as mediator in the context of SME SMM and brand equity development. Further, the study provides several theoretical and practical implications that can benefit SMEs in enhancing their strategic marketing decisions.

Keywords: SME brand equity, SME social media marketing, SME consumer brand relationship, SME brand resonance

Procedia PDF Downloads 60
1115 An Evaluation of the Influence of Corn Cob Ash on the Strength Parameters of Lateritic Soils

Authors: O. A. Apampa, Y. A. Jimoh

Abstract:

The paper reports the investigation of corn cob ash (CCA) as a chemical stabilizing agent for laterite soils. Corn cob feedstock was obtained from Maya, a rural community in the derived savannah agro-ecological zone of South-Western Nigeria, and burnt to ashes of pozzolanic quality. Reddish-brown silty clayey sand characterized as AASHTO A-2-6(3) lateritic material was obtained from a borrow pit in Abeokuta and subjected to strength characterization tests according to BS 1377:2000. The soil was subsequently mixed with CCA in varying percentages of 0-7.5% at 1.5% intervals. The influence of CCA on the stabilized soil was determined for the Atterberg limits, compaction characteristics, CBR and unconfined compressive strength. The tests were repeated on a laterite soil-cement mixture in order to establish a basis for comparison. The results show a similarity in the compaction characteristics of soil-cement and soil-CCA. With increasing addition of binder from 1.5% to 7.5%, the maximum dry density progressively declined while the OMC steadily increased. For the CBR, the maximum positive impact was observed at 1.5% CCA addition, at a value of 85% compared with 65% for cement stabilization, but the CBR declined steadily thereafter with increasing addition of CCA, while that of soil-cement continued to increase with cement addition beyond 1.5%, though at a relatively slow rate. Similar behavior was observed in the UCS values: the soil-CCA mix increased from a control value of 0.4 MN/m2 to 1.0 MN/m2 at 1.5% CCA and declined thereafter, while soil-cement continued to increase with increasing cement addition, but at a slower rate. 
This paper demonstrates that CCA is effective for the chemical stabilization of a typical Nigerian AASHTO A-2-6 lateritic soil at a maximum stabilizer content of 1.5%. Given the economic and technical feasibility of processing corn cobs, their use is recommended as a further application for agricultural waste products and a contribution to environmental sustainability, in line with the ideals of the Millennium Development Goals.

Keywords: corn cob ash, pozzolan, cement, laterite, stabilizing agent, cation exchange capacity

Procedia PDF Downloads 300
1114 Corn Flakes Produced from Different Cultivars of Zea mays as a Functional Product

Authors: Milenko Košutić, Jelena Filipović, Zvonko Nježić

Abstract:

Extrusion is a thermal processing technology applied to improve the nutritional, hygienic, and physical-chemical characteristics of raw material. Overall, the extrusion process is an efficient method for the production of a wide range of food products: it combines heat, pressure, and shear to transform raw materials into finished goods with the desired textures, shapes, and nutritional profiles. The quality of extruded products depends markedly on feed material composition, barrel temperature profile, feed moisture content, screw speed, and other extrusion system parameters. Given consumer expectations for snack foods, a high expansion index and low bulk density are desired, in addition to a crunchy texture and uniform microstructure. This paper investigates the simultaneous effects of corn type (white, yellow, red, and black corn) and screw speed (350, 500, and 650 rpm) on the physical, technological, and functional properties of flake products. Black corn flour and a screw speed of 350 rpm positively influenced the physical and technological characteristics, mineral composition, and antioxidant properties of the flake products, with the best total score of 0.59. The combination of Tukey's HSD test and principal component analysis (PCA) enables a comprehensive analysis of the corn products, allowing significant differences among them to be identified. This research analyzes the influence of the different corn flours on the nutritive and sensory properties of the product (quality, texture, and color), as well as the acceptance of the new product by consumers in the territory of Novi Sad. The data indicate that corn flakes made from black corn flour at 350 rpm constitute a product with good physical-technological and functional properties owing to a higher level of antioxidant activity.
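The statistical workflow named in the abstract (Tukey's HSD across corn types, followed by PCA over the measured properties) can be sketched as below. The group means, sample sizes, and property matrix are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import tukey_hsd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Hypothetical antioxidant readings (e.g. DPPH %) per corn type, n = 6 each
white = rng.normal(40.0, 1.0, 6)
yellow = rng.normal(42.0, 1.0, 6)
black = rng.normal(55.0, 1.0, 6)

# Pairwise comparisons with family-wise error control;
# res.pvalue[i, j] is the adjusted p-value for group i vs group j
res = tukey_hsd(white, yellow, black)

# PCA over several measured properties to see how the samples separate
# (12 samples x 5 properties, placeholder values)
X = rng.normal(size=(12, 5))
scores = PCA(n_components=2).fit_transform(X)
```

With the placeholder means above, the white-vs-black comparison comes out highly significant, mirroring how the study distinguishes cultivars before projecting them onto principal components.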

Keywords: corn types, flakes product, nutritive quality, acceptability

Procedia PDF Downloads 58
1113 Enhancing Health Information Management with Smart Rings

Authors: Bhavishya Ramchandani

Abstract:

A smart ring is a small electronic device worn on the finger. It incorporates mobile technology and has features that make it simple to use. These gadgets, which resemble conventional rings and are usually made to fit the finger, are equipped with features including access management, gesture control, mobile payment processing, and activity tracking. Poor sleep patterns, irregular schedules, and bad eating habits are among the health problems many people face today. Diets lacking fruits, vegetables, legumes, nuts, and whole grains are common, and individuals in India also experience metabolic issues. In the medical field, smart rings can help patients with stomach illnesses and with the inability to consume meals tailored to their bodies' needs. The smart ring tracks bodily functions, including blood sugar and glucose levels, and presents the information instantly. Based on these data, the ring generates insights and a workable plan suited to the body. As part of our core approach, we conducted focus groups and individual interviews, discussing the difficulties participants have in maintaining the right diet and whether a smart ring would be beneficial to them. Everyone was enthusiastic about and supportive of the concept of using smart rings in healthcare, believing that these rings might help them maintain their health and a well-balanced diet plan. This response came from the primary data; working on the Emerging Technology Canvas Analysis of smart rings in healthcare also significantly improved our understanding of the technology's application in the medical field. It is believed that demand for smart health care will grow as people become more conscious of their health. 
It is expected that most individuals will adopt such rings within three to four years, as demand increases, with a significant impact on their daily lives.

Keywords: smart ring, healthcare, electronic wearable, emerging technology

Procedia PDF Downloads 64
1112 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) artifacts and electromyographic (EMG) artifacts (i.e., jaw clenching, teeth squeezing and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming when applied to low-density EEG, and have focused on offline processing or on handling a single type of artifact. A software-only real-time method for correcting multiple types of artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB through the lab streaming layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance and adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving neuronal activity information. The average computation time for EOG and EMG artifact correction on 80 s (80,000 data points) of 64-channel data is 300-700 ms, depending on the convergence of ICA and the type and intensity of the artifact. 
Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real-time.
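The pipeline described (variance-based EMG detection with an adaptive threshold, channel interpolation, then ICA for ocular components) can be sketched in Python as below. This is a simplified offline illustration, not the authors' MATLAB implementation: the MAD-based threshold, the mean-of-good-channels interpolation, and the kurtosis rule for picking EOG-like components are all assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def detect_bad_channels(eeg, thresh=5.0):
    """Flag channels whose variance exceeds the median by > thresh MADs
    (a simple adaptive threshold for EMG-contaminated channels)."""
    var = eeg.var(axis=1)
    med = np.median(var)
    mad = np.median(np.abs(var - med)) + 1e-12
    return np.where((var - med) / mad > thresh)[0]

def interpolate_channels(eeg, bad):
    """Crude interpolation: replace each bad channel with the mean of the
    good ones (a real montage would use spherical-spline interpolation)."""
    bad = set(int(b) for b in bad)
    good = [i for i in range(eeg.shape[0]) if i not in bad]
    fixed = eeg.copy()
    for b in bad:
        fixed[b] = eeg[good].mean(axis=0)
    return fixed

def remove_eog_components(eeg, kurtosis_thresh=5.0):
    """Run ICA and zero out heavy-tailed (high-kurtosis) sources, one
    common heuristic for blink and eye-movement components."""
    ica = FastICA(random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg.T)            # (samples, components)
    sources[:, kurtosis(sources, axis=0) > kurtosis_thresh] = 0.0
    return ica.inverse_transform(sources).T
```

In a real-time setting these steps would run on each buffered chunk delivered by the lab streaming layer, with the ICA unmixing matrix updated incrementally rather than refit from scratch.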

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 181
1111 Real Estate Trend Prediction with Artificial Intelligence Techniques

Authors: Sophia Liang Zhou

Abstract:

For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about the macroeconomic determinants of housing prices and largely focused on one or two areas using point prediction. This study aims to develop data-driven models that accurately predict future housing market trends in different markets. The work studied five metropolitan areas representing different market trends and compared three time-lag settings: no lag, a 6-month lag, and a 12-month lag. Linear regression (LR), random forest (RF), and artificial neural networks (ANN) were employed to model real estate prices using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, and personal income, in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, the FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly; thus, two methods for imputing missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF's inherent limitations. Both the ANN and LR methods generated predictive models with high accuracy (>95%). Personal income, GDP, population, and measures of debt consistently appeared as the most important factors. The results also showed that the technique used to impute missing values and the implementation of a time lag can significantly influence model performance and require further investigation. 
The best-performing models varied by area, but the backfilled 12-month-lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets and provides evidence to support future studies in identifying the optimal time lag and data imputation methods for establishing accurate predictive models.
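The modeling recipe above, a monthly target, mixed-frequency predictors imputed by backfill or interpolation, and a fixed lag between features and target, can be sketched as follows. The series are synthetic placeholders, not the study's Case-Shiller or Federal Reserve data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
dates = pd.date_range("2005-03-01", periods=60, freq="MS")

# Quarterly GDP leaves two of every three months missing (mixed-frequency data)
gdp = pd.Series(np.nan, index=dates)
gdp.iloc[::3] = np.linspace(100.0, 130.0, 20)
income = pd.Series(50.0 + 0.3 * np.arange(60) + rng.normal(0, 0.5, 60), index=dates)
hpi = 0.8 * income + 0.1 * gdp.interpolate() + rng.normal(0, 0.5, 60)  # synthetic price index

# The two imputation strategies compared in the study
X_backfill = pd.concat([gdp.bfill(), income], axis=1)
X_interp = pd.concat([gdp.interpolate(), income], axis=1)

def fit_with_lag(X, y, lag=12):
    """Fit y at month t+lag from features at month t; return in-sample R^2."""
    Xl = X.iloc[:-lag].to_numpy()
    yl = y.iloc[lag:].to_numpy()
    return LinearRegression().fit(Xl, yl).score(Xl, yl)

r2_backfill = fit_with_lag(X_backfill, hpi)
r2_interp = fit_with_lag(X_interp, hpi)
```

Swapping `LinearRegression` for `RandomForestRegressor` or an MLP reproduces the three model families the study compares; a proper evaluation would of course score on held-out months rather than in-sample.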

Keywords: linear regression, random forest, artificial neural network, real estate price prediction

Procedia PDF Downloads 103
1110 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to acquiring a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination using dual-layer spectral CT from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, the latter generated from both portal venous and arterial phase imaging, were evaluated. For every patient, in both the true and virtual non-contrast datasets, a region of interest (ROI) was defined in the aorta, liver, fluid (i.e. gallbladder, urinary bladder), kidney, muscle, fat and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and >90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and portal venous phases showed a good correlation in most tissue types; the aortic attenuation, however, was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets remains a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. 
This study shows the utility of virtual non-contrast scans as an alternative for true non-contrast studies during multiphase CT, with potential for dose reduction, without loss of diagnostic information.
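The paired-ROI comparison reported here, a mean difference, its spread, and the fraction of ROIs within a 15 HU tolerance, reduces to a few lines of arithmetic. The sketch below uses made-up HU values for illustration; the function name and the 15 HU default are assumptions:

```python
import numpy as np

def vnc_vs_tnc_stats(tnc_hu, vnc_hu, tolerance=15.0):
    """Paired comparison of ROI attenuation (HU) between TNC and VNC datasets."""
    diff = np.asarray(vnc_hu, dtype=float) - np.asarray(tnc_hu, dtype=float)
    return {
        "mean_diff": diff.mean(),                       # bias of VNC vs TNC
        "sd_diff": diff.std(ddof=1),                    # spread of the differences
        "within_tol": float(np.mean(np.abs(diff) <= tolerance)),
    }

# Made-up example: four ROIs (e.g. aorta, liver, muscle, fat)
stats = vnc_vs_tnc_stats([50, 60, 40, 10], [55, 58, 45, 30])
```

On the real data one such summary would be computed per tissue type, since the study reports bone behaving very differently from soft tissue.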

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 142
1109 Design of Traffic Counting Android Application with Database Management System and Its Comparative Analysis with Traditional Counting Methods

Authors: Muhammad Nouman, Fahad Tiwana, Muhammad Irfan, Mohsin Tiwana

Abstract:

Traffic congestion has been increasing significantly in major metropolitan areas as a result of increased motorization, urbanization, population growth and changes in urban density. Congestion compromises the efficiency of transport infrastructure and causes multiple traffic concerns, including but not limited to increased travel time, safety hazards, air pollution, and fuel consumption. Traffic management has become a serious challenge for federal and provincial governments, as well as for exasperated commuters. Effective, flexible, efficient and user-friendly traffic information/database management systems characterize traffic conditions by using traffic counts for storage, processing, and visualization. While emerging data collection technologies continue to proliferate, their accuracy can be verified through comparison of observed data with manual handheld counts. This paper presents the design of a tablet-based manual traffic counting application and a framework for the development of a traffic database management system for Pakistan. The database management system comprises three components: a traffic counting Android application, an online database, and visualization using Google Maps. An Oracle relational database was chosen to develop the data structure, and structured query language (SQL) was adopted to program the system architecture. A GIS application links the data from the database and projects it onto a dynamic map for visualizing traffic conditions. The traffic counting device and an example database application for a real-world problem provided a creative outlet to demonstrate the uses and advantages of a database management system in real time. Traffic counts collected via the handheld tablet/mobile application can also be used for transportation planning and forecasting.
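The paper's data structure is built on Oracle; purely as an illustration, a minimal two-table version of such a counting schema can be prototyped with SQLite. The table names, columns, and sample rows below are hypothetical, not taken from the paper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE station (
    station_id INTEGER PRIMARY KEY,
    road_name  TEXT NOT NULL,
    latitude   REAL,
    longitude  REAL                        -- coordinates for the map overlay
);
CREATE TABLE count_record (
    record_id     INTEGER PRIMARY KEY AUTOINCREMENT,
    station_id    INTEGER REFERENCES station(station_id),
    counted_at    TEXT NOT NULL,           -- ISO timestamp from the tablet
    vehicle_class TEXT NOT NULL,           -- car / bus / truck / motorcycle
    volume        INTEGER NOT NULL
);
""")
conn.execute("INSERT INTO station VALUES (1, 'N-5 Rawalpindi', 33.6, 73.0)")
rows = [(1, "2019-05-01T08:00", "car", 420),
        (1, "2019-05-01T08:00", "truck", 35)]
conn.executemany(
    "INSERT INTO count_record (station_id, counted_at, vehicle_class, volume) "
    "VALUES (?, ?, ?, ?)", rows)

# A GIS layer would join on station coordinates; here we just aggregate volume
total = conn.execute(
    "SELECT SUM(volume) FROM count_record WHERE station_id = 1").fetchone()[0]
```

The same two-table shape (stations with coordinates, timestamped classed counts) is what a Google Maps overlay would query to color road segments by volume.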

Keywords: manual count, emerging data sources, traffic information quality, traffic surveillance, traffic counting device, android, data visualization, traffic management

Procedia PDF Downloads 196
1108 Geophysical Mapping of Anomalies Associated with Sediments of Gwandu Formation Around Argungu and Its Environs NW, Nigeria

Authors: Adamu Abubakar, Abdulganiyu Yunusa, Likkason Othniel Kamfani, Abdulrahman Idris Augie

Abstract:

This study was carried out in view of potential exploration activities in the Gwandu Formation in the inland basin of northwest Nigeria. The present research aims to identify and characterize subsurface anomalies within the Gwandu Formation using electrical resistivity tomography (ERT) and magnetic surveys, providing valuable insights for mineral exploration. The study utilizes various data enhancement techniques, such as derivatives, upward continuation, and spectral analysis, alongside 2D modeling of electrical imaging profiles to analyze subsurface structures and anomalies. Data were collected through ERT and magnetic surveys, with subsequent processing including derivatives, spectral analysis, and 2D modeling. The results indicate significant subsurface structures such as faults, folds, and sedimentary layers. The study area's geoelectric and magnetic sections illustrate the depth and distribution of the sedimentary formations, enhancing understanding of the geological framework, and show that the Eocene sediments of the Gwandu Formation are overprinted by the study area's Tertiary strata. The NE-SW and E-W cross-profiles for the pseudo-geoelectric sections beneath the study area were generated using two-dimensional (2D) electrical resistivity imaging. 2D magnetic modeling, upward continuation, and derivative analysis were used to delineate the signatures of subsurface magnetic anomalies. The results also revealed that the sediment thickness ranges from ∼4.06 km to ∼23.31 km. The Moho interface, the boundary between the lower and upper crust, and the magnetic crust are located at depths of around ∼10.23 km. The vertical distance between the local models of the foundation rocks to the north and south of the Sokoto Group was approximately ∼6 to ∼8 km and ∼4.5 km, respectively.
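Upward continuation, one of the enhancement techniques mentioned, attenuates shallow (short-wavelength) anomalies by multiplying the field's 2D spectrum by exp(-h|k|). A minimal sketch on a synthetic grid, with hypothetical grid spacing and continuation height:

```python
import numpy as np

def upward_continue(grid, dx, height):
    """Upward-continue a gridded potential-field anomaly by `height`
    (same units as the grid spacing dx), via the FFT-domain operator
    exp(-height * |k|)."""
    ny, nx = grid.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)   # radian wavenumbers
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    spectrum = np.fft.fft2(grid) * np.exp(-height * k)
    return np.real(np.fft.ifft2(spectrum))
```

The DC level (regional field) passes through unchanged because exp(0) = 1, while short-wavelength content is damped exponentially with height, which is exactly why the technique separates deep from shallow sources.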

Keywords: high-resolution aeromagnetic data, electrical resistivity imaging, subsurface anomalies, 2D forward modeling

Procedia PDF Downloads 17
1107 Opportunities and Challenges in Midwifery Education: A Literature Review

Authors: Abeer M. Orabi

Abstract:

Midwives are seen as a key factor in returning birth care to a normal, woman-centered physiologic process. On the other hand, more needs to be done to increase every woman's access to professional midwifery care. Because of the nature of the midwifery specialty, mistakes arising from a lack of knowledge have the potential to affect a large share of the birthing population. The development, running, and management of midwifery educational programs should therefore follow international standards and come after a thorough community needs assessment. At the same time, the number of accredited midwifery educational programs needs to be increased so that more midwives are educated and qualified and access to skilled midwifery care grows. Indeed, the selection of promising midwives is important for the successful completion of an educational program, achievement of the program goals, and retention of graduates in the field. Further, the number of schooled midwives in midwifery education programs, their background, and their experience raise concerns in the higher education sector. Preceptors and clinical sites are major contributors to the midwifery education process, as educational programs rely on them to provide clinical practice opportunities. In this regard, the selection of clinical training sites should be based on criteria that ensure their readiness for the intended training experiences, after which communication, collaboration, and liaison between teaching faculty and field staff should be maintained. However, the shortage of clinical preceptors and the massive reduction in the number of practicing midwives, in addition to unmanageable workloads, act as significant barriers to midwifery education. 
Moreover, the medicalized approach inherent in the hospital setting makes it difficult to practice the midwifery model of care, with its watchful waiting, non-interference in normal processes, and judicious use of interventions. Furthermore, creating a motivating study environment is crucial for avoiding unnecessary withdrawal and supporting retention in any educational program. It is well understood that research is an essential component of any profession, providing a foundation and evidence for its practices, and midwifery is no exception: midwives have been playing an important role in generating their own research. However, the selection of novel, researchable, and sustainable topics that reflect community health needs is also a challenge. In conclusion, ongoing education and research are the lifeblood of the midwifery profession and of a highly competent and qualified workforce; however, many challenges and barriers hinder their improvement.

Keywords: barriers, challenges, midwifery education, educational programs

Procedia PDF Downloads 115
1106 Formation of Human Resources in the Light of Sustainable Development and the Achievement of Full Employment

Authors: Kaddour Fellague Mohammed

Abstract:

In recent years, the world has seen significant developments that have affected various aspects of life and influenced many types of institutions. Thus a new world was born: a world of globalization, dominated by the scientific revolution and tremendous technological developments, which contributed to the re-formation of human resources in contemporary organizations, created new regulatory patterns, and at the same time raised new values and ideas. Organizations have become more flexible and faster in responding to consumers and environmental conditions; they have overcome the constraints of time and place through communication, human interaction, and the use of advanced information technology, relying mainly on automated mechanisms in running their operations. Focused on performance and grounded in strategic thinking, they pursue high degrees of superiority and excellence in order to achieve their strategic goals. This new reality has created an increasing need for a new type of human resource: one that aims at renewal, aspires to be a strategic player in managing the organization and drafting its various strategies, thinks globally and acts locally so as to accommodate local variables in international markets, and is able to work across different cultures. 
Human resource management is among the most important management functions because it focuses on the human element, which is the most valuable resource of any organization and the most influential in productivity. The management and development of human resources is considered a cornerstone in the majority of organizations; it aims to strengthen organizational capacity and enable companies to attract and develop the competencies needed to keep up with current and future challenges. Human resources can contribute strongly to achieving the objectives and profit of the organization, and even to creating new jobs that alleviate unemployment and achieve full employment. In short, human resource management means the optimal use of the available and expected human element, since the efficiency, capabilities, experience, and enthusiasm of that element determine the organization's success in reaching its goals. Management scholars have therefore developed principles and foundations that help organizations derive the greatest benefit from each individual through human resource management. These foundations, beginning with planning and selection and continuing through training, incentives, and evaluation, are not separate from one another but integrated as a system, so as to achieve efficient human resource management and an organization functioning as a whole within the context of sustainable development.

Keywords: configuration, training, development, human resources, operating

Procedia PDF Downloads 432
1105 Effect of Steam Explosion of Crop Residues on Chemical Compositions and Efficient Energy Values

Authors: Xin Wu, Yongfeng Zhao, Qingxiang Meng

Abstract:

In China, only a small proportion of crop residues is used as feedstuff because of their poor palatability and low digestibility. Steam explosion is a physical and chemical feed processing technology with great potential to improve the palatability and digestibility of crop residues. To investigate the effect of steam explosion on chemical compositions and effective energy values, crop residues (rice straw, wheat straw and maize stover) were processed by steam explosion (steam temperature 120-230°C, steam pressure 2-26 kg/cm², 40 min). Steam-exploded crop residues were regarded as treatment groups and untreated ones as control groups; nutritive compositions were analyzed for both groups, and effective energy values were calculated with the INRA (1988, 2010) prediction models. Results indicated that the interaction between treatment and variety had a significant effect on the chemical composition of crop residues. Steam explosion significantly decreased neutral detergent fiber (NDF) (P < 0.01): compared with the untreated material, the NDF content of rice straw, wheat straw, and maize stover was lowered by 21.46%, 32.11%, and 28.34% respectively. Acid detergent lignin (ADL) increased significantly after steam explosion (P < 0.05), as did the contents of crude protein (CP), ether extract (EE) and ash (P < 0.05). Moreover, the predicted effective energy values of each steam-exploded residue were higher than those of the untreated one. The digestible energy (DE), metabolizable energy (ME), net energy for maintenance (NEm) and net energy for gain (NEg) of steam-exploded rice straw were 3.06, 2.48, 1.48 and 0.29 MJ/kg respectively, increases of 46.21%, 46.25%, 49.56% and 110.92% over the untreated straw (P < 0.05). Correspondingly, the energy values of steam-exploded wheat straw were 2.18, 1.76, 1.03 and 0.15 MJ/kg, which were 261.78%, 261.29%, 274.59% and 1014.69% greater than those of untreated wheat straw (P < 0.05). 
The corresponding energy values of steam-exploded maize stover were 5.28, 4.30, 2.67 and 0.82 MJ/kg, rises of 109.58%, 107.71%, 122.57% and 332.64% over the raw material (P < 0.05). In conclusion, steam explosion treatment can significantly decrease the NDF content and increase the ADL, CP, EE and ash contents and the effective energy values of crop residues. The effect of steam explosion was much more pronounced for wheat straw than for the other two residues under the same conditions.

Keywords: chemical compositions, crop residues, efficient energy values, steam explosion

Procedia PDF Downloads 250
1104 Effects of Sintering Temperature on Microstructure and Mechanical Properties of Nanostructured Ni-17Cr Alloy

Authors: B. J. Babalola, M. B. Shongwe

Abstract:

The spark plasma sintering technique is a novel processing method that limits grain growth and produces a highly dense variety of materials: alloys, superalloys, and carbides, to mention a few. However, initial particle size and the spark plasma sintering parameters are factors that influence the grain growth and mechanical properties of sintered materials. Ni-Cr alloys are regarded as among the most promising alloys for aerospace turbine blades, owing to the fact that they meet the basic requirements of desirable mechanical strength at high temperatures and good resistance to oxidation. The conventional method of producing this alloy often results in excessive grain growth and porosity levels that are detrimental to its mechanical properties. The effect of sintering temperature on the microstructure and mechanical properties of a nanostructured Ni-17Cr alloy was evaluated. Nickel and chromium powders were milled independently by high-energy ball milling for 30 hours at a milling speed of 400 rev/min and a ball-to-powder ratio (BPR) of 10:1. The milled powders were mixed in a composition of 83 wt% nickel and 17 wt% chromium, which was sintered at temperatures varied from 800°C through 900°C, 1000°C and 1100°C to 1200°C. Structural characteristics such as porosity, grain size, fracture surface and hardness were analyzed by scanning electron microscopy, X-ray diffraction, Archimedes densitometry and micro-hardness testing. The results indicated an increase in the densification and hardness of the alloy as the temperature increased; the residual porosity of the alloy decreased with sintering temperature while, in contrast, the grain size was enhanced. The study of the mechanical properties, including hardness and densification, shows that optimum properties were obtained at a sintering temperature of 1100°C. The advantages of the high sinterability of the Ni-17Cr alloy from milled powders, together with microstructural details, are discussed.

Keywords: densification, grain growth, milling, nanostructured materials, sintering temperature

Procedia PDF Downloads 402
1103 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique, enabling non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of the instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage, over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated into a thermally stabilized incubator. An embedded computer processes spectral artifacts and stores an arbitrary set of calibration data, allowing properly calibrated measurements on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed so that the instrument can be calibrated in the field, including on containers that are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation: the first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide, while the second shows the dissolution transient of a non-saturated liquid medium in the presence of a carbon-dioxide-rich headspace atmosphere.
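At its core, a single-line absorption measurement of this kind relates the transmitted intensity to the gas concentration through the Beer-Lambert law, I = I0·exp(−σNL). The sketch below is an illustrative direct-absorption calculation, not the instrument's actual WMS processing chain; the cross-section, path length, and intensity ratio are placeholder values.

```python
import math

# Illustrative Beer-Lambert inversion for a headspace CO2 measurement.
# sigma (cm^2/molecule), path_cm, and the I/I0 ratio are placeholder values,
# not the line parameters or geometry of the actual instrument.

def co2_mole_fraction(I, I0, sigma, path_cm, pressure_atm=1.0, temp_k=296.0):
    """CO2 mole fraction from the transmitted/incident intensity ratio."""
    k_b = 1.380649e-16  # Boltzmann constant, erg/K (CGS -> densities in cm^-3)
    absorbance = -math.log(I / I0)            # equals sigma * N_co2 * L
    n_co2 = absorbance / (sigma * path_cm)    # CO2 number density, cm^-3
    n_total = pressure_atm * 1.01325e6 / (k_b * temp_k)  # ideal gas, cm^-3
    return n_co2 / n_total

x_co2 = co2_mole_fraction(I=0.97, I0=1.0, sigma=1e-21, path_cm=5.0)
```

In the real device, WMS demodulation replaces the direct I/I0 ratio with harmonic signal amplitudes, which is why the per-container calibration protocol described above is needed.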

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 325
1102 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions

Authors: Yakubu Adamu

Abstract:

The aim of this research was to assess the factors contributing to the need to optimise the gastric emptying protocols used in nuclear medicine and molecular imaging (SNMMI) procedures. The objective was to determine whether optimisation is possible and to provide supporting evidence for the current imaging protocols of the gastric emptying examination used in nuclear medicine. Images from selected patients, each with 30 dynamic series, were processed in ImageJ to calculate the clearance half-time and the retention fraction for the 60 x 1-minute, 5-minute, and 10-minute protocols, as well as other sampling intervals. The study IDs were classified by gastric emptying clearance half-time into normal, abnormally fast, and abnormally slow categories. In the normal category, representing 50% of the processed gastric emptying image IDs, the clearance half-time ranged from 49.5 to 86.6 minutes of the mean counts. In the abnormally fast category, representing 30%, the clearance half-time fell between 21 and 43.3 minutes of the mean counts, and the abnormally slow category, representing 20%, had a clearance half-time of 138.6 minutes of the mean counts. The retention fraction values calculated from the 1-, 5-, and 10-minute sampling curves, like the measured values, showed a normal retention fraction of <60% that decreased exponentially with time, evidenced by low retention fraction ratios of <10% after 4 hours. Category assignments did not change when the sampled curves were used, suggesting that these values could feasibly be used instead of acquiring the full set of images. The findings therefore suggest that the current gastric emptying protocol can be optimised by acquiring fewer images.
The study recommended that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
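The clearance half-time and retention fraction discussed above are commonly derived from a mono-exponential emptying model, counts(t) = C0·exp(−ln 2·t/T½). The sketch below is a hypothetical illustration of such a fit, not the authors' actual ImageJ workflow; the sample times and counts are synthetic.

```python
import math

# Hypothetical mono-exponential gastric emptying fit. T1/2 is obtained by
# linear regression on log-counts; the time/count samples below are synthetic.

def fit_half_time(times_min, counts):
    """Least-squares fit of ln(counts) vs. time; returns clearance T1/2 (min)."""
    logs = [math.log(c) for c in counts]
    n = len(times_min)
    t_mean = sum(times_min) / n
    y_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in zip(times_min, logs))
             / sum((t - t_mean) ** 2 for t in times_min))
    return -math.log(2) / slope

def retention_fraction(t_min, t_half):
    """Fraction of the meal still in the stomach at time t."""
    return math.exp(-math.log(2) * t_min / t_half)

# Synthetic decay with T1/2 = 80 min (within the "normal" 49.5-86.6 min band)
times = [0, 10, 20, 30, 60, 120, 240]
counts = [1000 * 0.5 ** (t / 80) for t in times]
t_half = fit_half_time(times, counts)
r4h = retention_fraction(240, t_half)  # retention at 4 hours
```

Because the model is a single exponential, a sparse set of time points (e.g., 0, 1, 2, and 4 hours, as recommended above) is in principle sufficient to recover the same half-time and category as a full dynamic acquisition.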

Keywords: gastric emptying, retention fraction, clearance halftime, optimisation, protocol

Procedia PDF Downloads 10
1101 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore the drivers and inhibitors of adopting Computer Assisted Audit Techniques and Tools (CAATTs). Studies from recent years have suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries because of the unique characteristics of the auditing process, expressed in the audit risk elements and the risk-based auditing approach encoded in the auditing standards. Since these audit risk factors are not part of the existing models used to explain technology adoption, those models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill this gap in the literature, which exists as a result of using generic technology adoption models. Following a pretest, and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study reveals determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for their successful adoption. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines for assessing the audit risk components, other CPA firms do not follow a formal, validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up in understanding what CAATTs can offer and how they can contribute to audit quality; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs' capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends existing knowledge of CAATTs adoption by examining it from a risk-based auditing perspective. It proposes a new model for CAATTs adoption that incorporates the audit risk factors auditors should examine when considering adoption. Because the model can be used in various audit scenarios and supports strategic, risk-based decisions, it helps realize the considerable potential of CAATTs for audit quality. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers, and regulators, and may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 69
1100 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (EDs) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key themes that emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination: disaster plans are in place in hospitals, and in some cases there are interagency agreements with pre-hospital services and relevant stakeholders; however, the plans highlighted in these studies lacked information on coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters: although various communication systems are used between and within hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems: there is a need for an integrated health information system that can help pre-hospital and hospital staff record patient data and ensure the data are shared. (4) Disaster training and drills: while some studies analyzed disaster drills and training, the majority focused on hospital departments other than EMTs; these studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff in relation to disaster response. It shows that although different types of ICT are used, various issues remain that affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 127