Search results for: small technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11739

1539 The Library as a Metaphor: Perceptions, Evolution, and the Shifting Role in Society Through a Librarian's Lens

Authors: Nihar Kanta Patra, Akhtar Hussain

Abstract:

This comprehensive study, through the perspective of librarians, explores the library as a metaphor and its profound significance in representing knowledge and learning. It delves into how librarians perceive the library as a metaphor and the ways in which it symbolizes the acquisition, preservation, and dissemination of knowledge. The research investigates the most common metaphors used to describe libraries, as witnessed by librarians, and analyzes how these metaphors reflect the evolving role of libraries in society. Furthermore, the study examines how the library metaphor influences the perception of librarians regarding academic libraries as physical places and academic library websites as virtual spaces, exploring their potential for learning and exploration. It investigates the evolving nature of the library as a metaphor over time, as seen by librarians, considering the changing landscape of information and technology. The research explores the ways in which the library metaphor has expanded beyond its traditional representation, encompassing digital resources, online connectivity, and virtual realms, and provides insights into its potential evolution in the future. Drawing on the experiences of librarians in their interactions with library users, the study uncovers any specific cultural or generational differences in how people interpret or relate to the library as a metaphor. It sheds light on the diverse perspectives and interpretations of the metaphor based on cultural backgrounds, educational experiences, and technological familiarity. Lastly, the study investigates the evolving roles of libraries as observed by librarians and explores how these changing roles can influence the metaphors we use to represent them. It examines the dynamic nature of libraries as they adapt to societal needs, technological advancements, and new modes of information dissemination. By analyzing these various dimensions, this research provides a comprehensive understanding of the library as a metaphor through the lens of librarians, illuminating its significance, evolution, and its transformative impact on knowledge, learning, and the changing role of libraries in society.

Keywords: library, librarians, metaphor, perception

Procedia PDF Downloads 75
1538 The 10,000 Fold Effect of Retrograde Neurotransmission, a New Concept for Stroke Revival: Use of Intracarotid Sodium Nitroprusside

Authors: Vinod Kumar

Abstract:

Background: Tissue Plasminogen Activator (tPA) showed a level 1 benefit in acute stroke (within 3-6 hrs). Intracarotid sodium nitroprusside (ICSNP) has been studied in this context with a wide treatment window, fast recovery and affordability. This work proposes two mechanisms for acute cases and one mechanism for chronic cases, which are interrelated, for physiological recovery. a) Retrograde Neurotransmission (acute cases): 1) Normal excitatory impulse: at the synaptic level, glutamate activates NMDA receptors, with nitric oxide synthetase (NOS) on the postsynaptic membrane, for further propagation by the calcium-calmodulin complex. Nitric oxide (NO, produced by NOS) travels backward across the chemical synapse and binds the axon-terminal NO receptor/sGC of a presynaptic neuron, regulating anterograde neurotransmission (ANT) via retrograde neurotransmission (RNT). Heme is the ligand-binding site of the NO receptor/sGC. Heme exhibits > 10,000-fold higher affinity for NO than for oxygen (the 10,000-fold effect), and binding is completed in 20 msec. 2) Pathological conditions: normal synaptic activity, including both ANT and RNT, is absent. A NO donor (SNP) releases NO from NOS in the postsynaptic region. NO travels backward across the chemical synapse to bind to the heme of a NO receptor in the axon terminal of a presynaptic neuron, generating an impulse, as under normal conditions. b) Vasospasm (acute cases): perforators show vasospastic activity. NO vasodilates the perforators via the NO-cAMP pathway. c) Long-Term Potentiation (LTP) (chronic cases): the NO–cGMP pathway plays a role in LTP at many synapses throughout the CNS and at the neuromuscular junction. LTP has been reviewed both generally and with respect to brain regions specific for memory/learning. Aims/Study Design: The principles of “generation of impulses from the presynaptic region to the postsynaptic region by very potent RNT (the 10,000-fold effect)” and “vasodilation of arteriolar perforators” are the basis of the authors’ hypothesis to treat stroke cases. Case-control prospective study. Materials and Methods: The experimental population included 82 stroke patients (10 patients were given control treatments without superfusion or with 5% dextrose superfusion, and 72 patients comprised the ICSNP group). The mean time to superfusion was 9.5 days post-stroke. Pre- and post-ICSNP status was monitored by NIHSS, MRI and TCD. Results: After 90 seconds in the ICSNP group, the mean change in the NIHSS score was a decrease of 1.44 points, or 6.55%; after 2 h, there was a decrease of 1.16 points; after 24 h, there was an increase of 0.66 points, or 2.25%, compared to the control-group increase of 0.7 points, or 3.53%; at 7 days, there was an 8.61-point decrease, or 44.58%, compared to the control-group increase of 2.55 points, or 22.37%; at 2 months in the ICSNP group, there was a 6.94-point decrease, or 62.80%, compared to the control-group decrease of 2.77 points, or 8.78%. TCD was documented and improvements were noted. Conclusions: ICSNP is a swift-acting drug in the treatment of stroke, acting within 90 seconds on day 9.5 post-stroke, with a small decline after 24 hours that is quickly recovered.

Keywords: brain infarcts, intracarotid sodium nitroprusside, perforators, vasodilation, retrograde transmission, the 10,000-fold effect

Procedia PDF Downloads 298
1537 Thermodynamic Phase Equilibria and Formation Kinetics of Cyclopentane, Cyclopentanone and Cyclopentanol Hydrates in the Presence of Gaseous Guest Molecules including Methane and Carbon Dioxide

Authors: Sujin Hong, Seokyoon Moon, Heejoong Kim, Yunseok Lee, Youngjune Park

Abstract:

Gas hydrate is an inclusion compound in which a low-molecular-weight gas or organic molecule is trapped inside a three-dimensional lattice structure created by water molecules via intermolecular hydrogen bonding. It is generally formed at low temperature and high pressure and exists as crystal structures of the cubic system (structure I and structure II) and the hexagonal system (structure H). Many efforts have been made to apply gas hydrates to various energy and environmental fields, such as gas transportation and storage, CO₂ capture and separation, and desalination of seawater. In particular, studies on the behavior of gas hydrates formed with new organic materials for CO₂ storage and various applications are underway. In this study, thermodynamic and spectroscopic analyses of the gas hydrate system were performed focusing on cyclopentanol, an organic molecule that forms gas hydrate at relatively low pressure. The thermodynamic equilibria of CH₄ and CO₂ hydrate systems including cyclopentanol were measured, and spectroscopic analyses by XRD and Raman were performed. The differences in thermodynamic behavior and formation kinetics of the CO₂-added cyclopentane, cyclopentanol and cyclopentanone hydrate systems were compared. From the thermodynamic point of view, cyclopentanol was found to be a hydrate promoter. Spectroscopic analyses showed that cyclopentanol formed a hydrate crystal structure of cubic structure II in the presence of CH₄ and CO₂. It was found that the differences in the functional groups among the organic guest molecules significantly affected the rate of hydrate formation and the total amount of CO₂ stored in the hydrate systems. The total amount of CO₂ stored in the cyclopentanone hydrate was found to be twice the amount of CO₂ stored in the cyclopentane and cyclopentanol hydrates. The findings are expected to open up new opportunities to develop gas-hydrate-based wastewater desalination technology.

Keywords: gas hydrate, CO₂, separation, desalination, formation kinetics, thermodynamic equilibria

Procedia PDF Downloads 251
1536 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study

Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu

Abstract:

Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach to providing clinical services during the COVID-19 pandemic. As a result, the teleassessment of dysphagia has increased in India. Aim: This study aimed to identify the challenges faced by Indian SLPs while providing teleassessment to individuals with dysphagia during the outbreak of COVID-19 from 2020 to 2021. Method: The study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature and lasted from 2020 to 2021. Participants who met the inclusion and exclusion criteria were enrolled. Based on the sample size calculation, it was decided to recruit roughly 246 people. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals content-validated the questionnaire for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via platforms such as e-mail and WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-evaluation of dysphagia were highlighted from the beginning of the oromotor examination. Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties doing the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessments inefficient and time-consuming. A bigger percentage of SLPs stated that they would not recommend tele-evaluation of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the pandemic. When it came to the amount of food, the majority proposed a small amount. Apart from positioning the patient for assessment and gaining less cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to manage online. For patients on NPO, the majority of SLPs did not advise tele-evaluation. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration by coughing, many found it difficult to discern the gurgling tone of speech after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain importance in India.

Keywords: dysphagia, teleassessment, challenges, Indian SLP

Procedia PDF Downloads 119
1535 Breathing New Life into Old Media

Authors: Dennis Schmickle

Abstract:

Introductory statement: Augmented reality (AR) can be used to breathe life into traditional graphic design media, such as posters, book covers, and album art. AR superimposes a unique image or video on a user’s view of the real world, which makes it more immersive and realistic than traditional 2D media. This study developed a series of projects that utilize both traditional and AR media to teach the fundamental principles of graphic design. The results of this study suggest that AR can be an effective tool for teaching graphic design. Abstract: Traditional graphic design media, such as posters, book covers, and album art, could be considered “old media.” However, augmented reality (AR) can breathe life into these formats by making them more interactive and engaging for students and audiences alike. AR is a technology that superimposes a computer-generated image on a user’s view of the real world. This allows users to interact with digital content in a way that is more immersive and interactive than traditional 2D media. AR is becoming increasingly popular, as more and more people have access to smartphones and other devices that can support AR experiences. This study comprises a series of projects that utilize both traditional and AR media to teach the fundamental principles of graphic design. In these projects, students learn to create traditional design objects, such as posters, book covers, and album art. However, they are also required to create an animated version of their design and to use AR software to create an AR experience with which viewers can interact. The results of this study suggest that AR can be an effective and exciting tool for teaching graphic design. The students who participated in the study were able to learn the fundamental principles of graphic design, and they also developed the skills needed to create effective AR content. This study has implications for the future of graphic design education. As AR becomes more popular, it is likely to become an increasingly important tool for teaching graphic design.

Keywords: graphic design, augmented reality, print media, new media, AR, old media

Procedia PDF Downloads 54
1534 Producing and Mechanical Testing of Urea-Formaldehyde Resin Foams Reinforced by Waste Phosphogypsum

Authors: Krasimira Georgieva, Yordan Denev

Abstract:

Many thermosetting resins are used only in the filled state, reinforced with different mineral fillers. Co-filling polymers with a mineral filler and a gas makes it possible to produce polymer composite materials with low density. This processing leads to the formation of new materials – gas-filled plastics (polymer foams). The properties of these materials are determined mainly by the shape and size of the internal structural elements (pores). The interactions at the phase boundaries also influence the material properties. In the present work, gas-filled urea-formaldehyde resins were reinforced with waste phosphogypsum. Waste phosphogypsum (CaSO4.2H2O) is a solid by-product of wet phosphoric acid production processes. The polymer-filler interactions were strengthened by using two modifying agents: polyvinyl acetate for the polymer matrix and sodium metasilicate for the filler. Technological methods for gas-filling and recipes for urea-formaldehyde based materials with apparent densities of 20-120 kg/m3 were developed. The heat conductivity of the samples is between 0.024 and 0.029 W/(m·K). Tensile analyses were carried out at 10 and 50% deformation and show values of 0.01-0.14 MPa and 0.01-0.09 MPa, respectively. The apparent density of the obtained materials is between 20 and 92 kg/m3. The changes in the tensile properties and density of these materials with sodium metasilicate content were studied as well. The mechanism of phosphogypsum adsorption modification was studied using FT-IR spectroscopy. The structure of the gas-filled urea-formaldehyde resins was described on the basis of scanning electron microscopy at three different magnifications – ×50, ×150 and ×500. The aim of the present work is to study the possibility of using phosphogypsum as a mineral filler for urea-formaldehyde resins and to develop a technology for the production of gas-filled reinforced polymer composite materials. The structure and properties of the obtained composite materials are suitable for thermal and sound insulation applications.

Keywords: urea-formaldehyde resins, gas-filled thermosets, phosphogypsum, mechanical properties

Procedia PDF Downloads 95
1533 Seasonal Assessment of Snow Cover Dynamics Based on Aerospace Multispectral Data on Livingston Island, South Shetland Islands in Antarctica and on Svalbard in Arctic

Authors: Temenuzhka Spasova, Nadya Yanakieva

Abstract:

Snow modulates the hydrological cycle, influences the functioning of ecosystems, and is a significant resource for many populations whose water is harvested from cold regions. Snow observations are important for validating climate models. The accumulation and rapid melt of snow are two of the most dynamic seasonal environmental changes on the Earth’s surface. The relevance of this research lies in the modern trend of applying remote sensing to problems of different kinds in ecological monitoring of the environment. The subject of the study is the seasonal dynamics of snow cover on Livingston Island, South Shetland Islands in Antarctica and on Svalbard in the Arctic. The objects were analyzed and mapped using European Space Agency (ESA) data acquired by the Sentinel-1 SAR (Synthetic Aperture Radar) and Sentinel-2 MSI sensors, together with GIS. Results have been obtained for changes in snow coverage during the summer-winter transition and its dynamics in the two hemispheres. The data used are of high temporal and spatial resolution, which is an advantage when observing snow cover. The MSI images have different spatial resolutions at the Earth's surface. The changes in the environmental objects are shown with the SAR images and different processing approaches. The results clearly show that snow and snow melting can best be registered by using SAR data with HH (horizontal-horizontal) polarization. Working with aerospace data and technology enables different digital models to be obtained and results to be structured and analyzed while excluding subjective factors. Because of the large extent of terrestrial snow coverage and the difficulties in obtaining ground measurements over cold regions, remote sensing and GIS represent an important tool for studying snow areas and properties from regional to global scales.

Keywords: climate changes, GIS, remote sensing, SAR images, snow coverage

Procedia PDF Downloads 204
1532 River Catchment’s Demography and the Dynamics of Access to Clean Water in the Rural South Africa

Authors: Yiseyon Sunday Hosu, Motebang Dominic Vincent Nakin, Elphina N. Cishe

Abstract:

Universal access to clean and safe drinking water and basic sanitation is one of the targets of the 6th Sustainable Development Goal (SDG 6). This paper explores evidence-based indicators of the Water Rights Act (2013) among households in rural communities in the Mthatha River catchment of the OR Tambo District Municipality of South Africa. Daily access to the minimum of 25 litres/person and the factors influencing clean water access were investigated in the catchment. A total of 420 households were surveyed in the upper, peri-urban, lower and coastal regions of the Mthatha River catchment. Descriptive and logistic regression analyses were conducted on the data collected from the households to elicit vital information on domestic water security among rural community dwellers. The results show that approximately 68 percent of the households surveyed have access to the required minimum of 25 litres/person/day, with 66.3 percent in the upper region, 76 percent in the peri-urban region, 1.1 percent in the lower region and 2.3 percent in the coastal region. Only 30 percent of the surveyed households had access to piped water either in the house or from public taps. The logistic regression showed that access to clean water was influenced by lack of water infrastructure, proximity to urban regions, daily flow of pipe-borne water, household size and distance to public taps. This paper recommends viable, integrated, community-based water infrastructure provision strategies between NGOs and local authorities, together with the promotion of point-of-use (POU) technologies, to enhance access to clean water.
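
As an illustration of the household-level logistic regression described above, the sketch below models access to the 25 litres/person/day threshold against the kinds of factors the study reports (infrastructure, daily flow, household size, distance to taps). The file name and variable names are hypothetical placeholders, not the survey's actual fields.

```python
# Minimal sketch of the household-level logistic regression described above.
# Column names (meets_25l, piped_water_nearby, daily_flow, hh_size,
# dist_to_tap_km, region) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: meets_25l is 1 if the household reaches
# 25 litres/person/day, 0 otherwise.
df = pd.read_csv("mthatha_household_survey.csv")

model = smf.logit(
    "meets_25l ~ piped_water_nearby + daily_flow + hh_size + dist_to_tap_km + C(region)",
    data=df,
)
result = model.fit()
print(result.summary())                 # coefficients: factors influencing access
print(result.get_margeff().summary())   # average marginal effects on access
```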

Keywords: domestic water, household technology, water security, rural community

Procedia PDF Downloads 339
1531 The Advantages of Using DNA-Barcoding for Determining the Fraud in Seafood

Authors: Elif Tugce Aksun Tumerkan

Abstract:

Although seafood is an important part of the human diet and is among the most highly traded food commodities internationally, it generally remains overlooked in the global food security debate. Food product authentication is of central interest both to avoid commercial fraud and to address risks that might be harmful to human health. In recent years, with increasing consumer demand for transparency about food content, instrumental analyses for detecting food fraud have emerged, based on analytical methodologies such as proteomics and metabolomics. While fish and seafood were previously consumed fresh, with advances in technology the consumption of processed and packaged seafood has increased. After processing or packaging, morphological identification is impossible once some of the external features have been removed. The main fish and seafood quality-related issues concern the authentication of seafood content, such as mislabelled products that may be contaminated or replaced, partly or completely, by lower-quality or cheaper species. For all these reasons, reliable, consistent and easily applicable analytical methods are needed to assure correct labelling and to verify seafood products. DNA-barcoding methods have become popular and robust tools used in taxonomic research on endangered or cryptic species in recent years; they are also used for food traceability. In this review, compared with proteomic and metabolomic analyses, DNA-based methods are shown to allow identification of all types of food, even raw, spiced and processed products. This advantage arises because DNA is a comparatively more stable molecule than proteins and other molecules. Furthermore, sequence variation between species and the presence of DNA in all organisms make DNA-based analysis preferable. This review was performed to clarify the main advantages of using DNA-barcoding for detecting seafood fraud compared with other techniques.

Keywords: DNA-barcoding, genetic analysis, food fraud, mislabelling, packaged seafood

Procedia PDF Downloads 154
1530 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smart phones has increased manifold and is set to explode much more. Users spent considerable time browsing different websites, that gives a great deal of insight into user’s preferences. Instead of plain information classifying different aspects of browsing like Bookmarks, History, and Download Manager into useful categories would improve and enhance the user’s experience. Most of the classification solutions are server side that involves maintaining server and other heavy resources. It has security constraints and maybe misses on contextual data during classification. On device, classification solves many such problems, but the challenge is to achieve accuracy on classification with resource constraints. This on device classification can be much more useful in personalization, reducing dependency on cloud connectivity and better privacy/security. This approach provides more relevant results as compared to current standalone solutions because it uses content rendered by browser which is customized by the content provider based on user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted for a resource constraint devices. Our solution integrates with Web Browser that in turn triggers classification algorithm. Whenever a user browses a webpage, this solution extracts DOM Tree data from the browser’s rendering engine. This DOM data is a dynamic, contextual and secure data that can’t be replicated. This proposal extracts different features of the webpage that runs on an algorithm to classify into multiple categories. Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machine, Neural Networks, etc. Naive Bayes classification requires small memory footprint and less computation suitable for smartphone environment. This solution has a feature to partition the model into multiple chunks that in turn will facilitate less usage of memory instead of loading a complete model. Classification of the webpages done through integrated engine is faster, more relevant and energy efficient than other standalone on device solution. This classification engine has been tested on Samsung Z3 Tizen hardware. The Engine is integrated into Tizen Browser that uses Chromium Rendering Engine. For this solution, extensive dataset is sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser integrated solution has resulted in 15% less memory usage (due to partition method) and 24% less power consumption in comparison with standalone solution. This solution considered 70% of the dataset for training the data model and the rest 30% dataset for testing. An average accuracy of ~96.3% is achieved across the above mentioned 8 categories. This engine can be further extended for suggesting Dynamic tags and using the classification for differential uses cases to enhance browsing experience.
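
A minimal sketch of the kind of Naive Bayes text classifier described above, assuming the page's visible DOM text as the feature source and the eight categories listed in the abstract; it is an illustration in Python with scikit-learn, not the authors' Tizen/Chromium engine or its partitioned model format.

```python
# Illustrative multinomial Naive Bayes webpage classifier (not the authors' engine).
# `pages` is assumed to be a list of visible-text strings extracted from the DOM,
# and `labels` holds one of the 8 categories named in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

CATEGORIES = ["education", "games", "health", "entertainment",
              "news", "shopping", "sports", "travel"]

def train_classifier(pages, labels):
    # 70/30 split mirrors the evaluation protocol described in the abstract.
    x_train, x_test, y_train, y_test = train_test_split(
        pages, labels, test_size=0.3, random_state=42)
    clf = make_pipeline(
        TfidfVectorizer(max_features=20000, stop_words="english"),
        MultinomialNB())          # small memory footprint, cheap to evaluate
    clf.fit(x_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(x_test)))
    return clf

def classify_page(clf, dom_text):
    # Called whenever the browser finishes rendering a page.
    return clf.predict([dom_text])[0]
```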

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 149
1529 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from MODIS MOD17A3HGF data; however, the results are only available yearly. To overcome the data availability issue, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites where surface mining takes place in Colombian territory. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered using two MATLAB filters: a first-order filter and a discrete transfer function. After filtering the NPP time series, comparing the graphs with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks and the percentage that this value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining in the last 22 years had lower CO2 emissions than limestone, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron. Limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO2 eq emissions per year, a value very similar to that of iron. The importance of gathering these data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
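
A rough sketch of the filtering-and-regression step described above, assuming one weekly band-derived value per site and yearly NPP values; the simple first-order (exponential) low-pass filter and the file and column names are illustrative stand-ins for the study's MATLAB filters and data layout.

```python
# Sketch of the time-series smoothing and regression step described above.
# File names, column names, and the first-order low-pass filter are illustrative
# assumptions, not the study's exact MATLAB filters or data files.
import numpy as np
import pandas as pd
from scipy.signal import lfilter
from scipy.stats import linregress

def first_order_filter(x, alpha=0.2):
    # y[n] = alpha*x[n] + (1 - alpha)*y[n-1], a discrete first-order low-pass
    return lfilter([alpha], [1.0, -(1.0 - alpha)], x)

weekly = pd.read_csv("site_band33_36_weekly.csv")   # columns: year, band_radiance
annual_npp = pd.read_csv("site_npp_annual.csv")     # MOD17A3HGF-derived yearly NPP

smoothed = first_order_filter(weekly["band_radiance"].to_numpy())
yearly_mean = weekly.assign(smoothed=smoothed).groupby("year")["smoothed"].mean()

fit = linregress(yearly_mean.values, annual_npp["npp"].values)
print(f"R^2 = {fit.rvalue**2:.2f}")                 # abstract reports 0.72-0.85
```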

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 84
1528 The Impact of Agricultural Product Export on Income and Employment in Thai Economy

Authors: Anucha Wittayakorn-Puripunpinyoo

Abstract:

The research objectives were 1) to study the situation and trend of agricultural product exports of Thailand, 2) to study the impact of agricultural product exports on income in the Thai economy, 3) to study the impact of agricultural product exports on employment in the Thai economy, and 4) to develop recommendations for Thailand's agricultural product export policy. In this research, secondary data were collected as yearly time series from 1990 to 2016, covering 27 years, from the Bank of Thailand database. Primary data were collected from the stakeholders of Thailand's agricultural product export policy. Descriptive statistics such as the arithmetic mean and standard deviation were applied. Agricultural product exports were forecast using the Monte Carlo simulation technique as well as time trend analysis. In addition, the impact of agricultural product exports on income and employment was estimated with an econometric model whose parameters were obtained by the ordinary least squares technique. The research results revealed that 1) the agricultural product export value of Thailand from 1990 to 2016 was 338,959.5 million Thai baht, with a yearly growth rate of 4.984 percent; the forecast export value continues to increase, but its growth rate is declining; 2) agricultural product exports have a positive impact on income in the Thai economy: a 1 percent increase in Thailand's agricultural product exports would increase income by 0.0051 percent; 3) agricultural product exports have a positive impact on employment in the Thai economy: a 1 percent increase in Thailand's agricultural product exports would increase employment by 0.079 percent; and 4) in the future, agricultural product export policy should focus on finished or semi-finished agricultural products instead of raw materials, applying technology and innovation to add value to agricultural exports. Public agricultural export policy should also support private-sector exporters in order to encourage agricultural exporting in Thailand.
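
The reported elasticities (a 1 percent rise in exports raising income by 0.0051 percent and employment by 0.079 percent) correspond to slope coefficients of log-log regressions; a minimal sketch of such an estimation is shown below, with placeholder column names rather than the actual Bank of Thailand series identifiers.

```python
# Minimal log-log OLS sketch for the export elasticities described above.
# Column names (income, employment, agri_exports) are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("thailand_1990_2016.csv")   # yearly series, 27 observations

income_eq = smf.ols("np.log(income) ~ np.log(agri_exports)", data=data).fit()
employ_eq = smf.ols("np.log(employment) ~ np.log(agri_exports)", data=data).fit()

# The slope coefficients are the elasticities reported in the results.
print("income elasticity:", income_eq.params["np.log(agri_exports)"])
print("employment elasticity:", employ_eq.params["np.log(agri_exports)"])
```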

Keywords: agricultural product export, income, employment, Thai economy

Procedia PDF Downloads 291
1527 Qualitative Research on German Household Practices to Ease the Risk of Poverty

Authors: Marie Boost

Abstract:

Despite activation policies, the demand for personal initiative to step out of unemployment, and a generally prosperous economic situation, poverty and financial hardship play a crucial role in the daily lives of many families in Germany. In 2015, ~16 million persons (20.2% of the German population) were at risk of poverty or social exclusion. This is illustrated by an unemployment rate of 13.3% in the research area, located in East Germany. Despite this high number of persons living in vulnerable households, we know little about how they manage to stabilize their lives or even overcome poverty – apart from solely relying on welfare state benefits or entering a stable, well-paid job. Most of them are struggling in precarious living circumstances, switching from one or several short-term, low-paid jobs into self-employment or unemployment, sometimes accompanied by welfare state benefits. Hence, insecurity and uncertain future expectations form a crucial part of their lives. Within the EU-funded project “RESCuE”, resilient practices of vulnerable households were investigated in nine European countries. Approximately 15 expert interviews with policy makers, representatives from welfare state agencies, NGOs and charity organizations and 25 household interviews were conducted in each country. The project aims to find out more about the chances and conditions of social resilience. The research is based on the triangulation of biographical narrative interviews, followed by participatory photo interviews asking the household members to portray their typical everyday life. The presentation focuses on the explanatory strength of this mixed-methods approach in order to show the potential of household practices to overcome financial hardship. The methodological combination allows an in-depth analysis of the families' and households' everyday living circumstances, including their poverty and employment situation, whether formal or informal. Active household budgeting practices, such as saving and consumption practices, are based on subsistence or do-it-yourself work. Especially through the photo interviews, the importance of inherent cultural and tacit knowledge becomes obvious, as they picture typical practices such as cultivating and gathering fruits and vegetables or going fishing. One of the central findings is the multiple purposes of these practices. They contribute to easing the financial burden through consumption reduction and strengthen social ties, as they are mostly conducted with close friends or family members. In general, non-commodified practices are found to be re-commodified and to contribute to easing financial hardship, e.g., by the use of commons, barter trade or simple mutual exchange (gift exchange). These practices can substitute external purchases and reduce expenses or even generate a small income. Mixing different income sources is found to be the most likely way out of poverty within the context of a precarious labor market. But these resilient household practices take their toll, as they are highly preconditioned, and many persons put themselves at risk of overstressing themselves. Thus, both the potentials and the risks of resilient household practices are reflected in the presentation.

Keywords: consumption practices, labor market, qualitative research, resilience

Procedia PDF Downloads 211
1526 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images

Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor

Abstract:

Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in identifying patients with foot disorders. Significance of the study: This study proposes a neural network learning-based flat foot classification methodology using static foot pressure distribution. Methodology: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. All subjects (with and without pes planus) were then evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: Of the 895 images, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was assessed, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14% and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and by doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
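
For illustration, a small convolutional network of the kind the study describes could be set up as below; the 64x64 single-channel input and the layer sizes are assumptions, since the abstract does not specify the architecture used.

```python
# Illustrative small CNN for binary pes planus classification from plantar
# pressure maps.  Input shape and layer sizes are assumptions, not the paper's
# exact architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),   # pes planus vs. normal
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_model()
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```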

Keywords: foot disorder, machine learning, neural network, pes planus

Procedia PDF Downloads 342
1525 Semantic Search Engine Based on Query Expansion with Google Ranking and Similarity Measures

Authors: Ahmad Shahin, Fadi Chakik, Walid Moudani

Abstract:

Our study elaborates a potential solution for a search engine that involves semantic technology to retrieve information and display it meaningfully. Semantic search engines are not used widely over the web, as the majority are still in the beta stage or under construction. Many problems face current applications in semantic search; the major problem is to analyze and calculate the meaning of the query in order to retrieve relevant information. Another problem is the ontology-based index and its updates. Ranking results according to concept meaning and its relation to the query is another challenge. In this paper, we offer a light meta-engine (QESM) which uses Google search, and therefore Google’s index, with some adaptations to its returned results by adding multi-query expansion. The mission was to find a reliable ranking algorithm that involves semantics and uses concepts and meanings to rank results. At the beginning, the engine finds synonyms of each query term entered by the user based on a lexical database. Then, query expansion is applied to generate different semantically analogous sentences. These are generated randomly by combining the found synonyms and the original query terms. Our model suggests the use of semantic similarity measures between two sentences. In practice, we used this method to calculate the semantic similarity between each expanded query and the description of each page’s content returned by Google. The generated sentences are sent to the Google engine one by one, and the combined results are re-ranked with the adapted ranking method (QESM). Finally, our system places Google pages with higher similarities at the top of the results. We conducted experiments with 6 different queries. We observed that the ranking of most results under QESM differed from Google’s originally generated ordering. With our experimental queries, QESM frequently achieved better accuracy than Google; in the worst cases, it behaved like Google.
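
A compact sketch of the QESM idea follows, with WordNet standing in for the lexical database and TF-IDF cosine similarity standing in for the semantic similarity measure; the actual engine's measures and its handling of Google's returned descriptions may differ.

```python
# Sketch of query expansion plus similarity-based re-ranking (not the QESM code).
# WordNet and TF-IDF cosine similarity are illustrative substitutes for the
# lexical database and the semantic similarity measure used by the authors.
import itertools
import numpy as np
from nltk.corpus import wordnet as wn            # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_query(query, per_term=2):
    # Build semantically analogous sentences from synonyms of each query term.
    options = []
    for term in query.split():
        synonyms = {lemma.name().replace("_", " ")
                    for synset in wn.synsets(term) for lemma in synset.lemmas()}
        options.append([term] + sorted(synonyms)[:per_term])
    return [" ".join(combo) for combo in itertools.product(*options)]

def rerank(query, page_descriptions):
    # Score each page description against the averaged expanded-query vector.
    expansions = expand_query(query)
    vectorizer = TfidfVectorizer().fit(expansions + page_descriptions)
    query_vec = np.asarray(vectorizer.transform(expansions).mean(axis=0))
    scores = cosine_similarity(query_vec, vectorizer.transform(page_descriptions))[0]
    return sorted(zip(scores, page_descriptions), reverse=True)
```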

Keywords: semantic search engine, Google indexing, query expansion, similarity measures

Procedia PDF Downloads 414
1524 Mobile Application Interventions in Positive Psychology: Current Status and Recommendations for Effective App Design

Authors: Gus Salazar, Jeremy Bekker, Lauren Linford, Jared Warren

Abstract:

Positive psychology practices allow their principles to be applied to all people, regardless of their current level of functioning. To increase the dissemination of these practices, interventions are being adapted for use with digital technology, such as mobile apps. However, research on positive psychology mobile app interventions is still in its infancy. In an effort to facilitate progress in this important area, we 1) conducted a qualitative review to summarize the current state of the positive psychology mobile app literature and 2) developed research-supported recommendations for positive psychology app development to maximize behavior change. In our literature review, we found that while positive psychology apps varied widely in content and purpose, there was a near-complete lack of research supporting their effectiveness. Most apps provided no rationale for the behavioral change techniques (BCTs) they employed, and most were not developed with specific theoretical frameworks or design models in mind. Given this problem, we recommended four steps for effective positive psychology app design. First, developers must ground their app in a research-supported theory of change. Second, researchers must select appropriate behavioral change techniques which are consistent with their app’s goals. Third, researchers must leverage effective design principles. These steps will help mobile applications use data-driven methods for encouraging behavior change in their users. Lastly, we discuss directions for future research. In particular, researchers must investigate the effectiveness of various BCTs in positive psychology interventions. Although there is some research on this point, we do not yet clearly understand the mechanisms within the apps that lead to behavior change. Additionally, app developers must provide data on the effectiveness of their mobile apps. As developers follow these steps for effective app development and as researchers continue to investigate what makes these apps most effective, we will provide millions of people in need with access to research-based mental health resources.

Keywords: behavioral change techniques, mobile app, mobile intervention, positive psychology

Procedia PDF Downloads 212
1523 Impact of Combined Heat and Power (CHP) Generation Technology on Distribution Network Development

Authors: Sreto Boljevic

Abstract:

In the absence of considerable investment in electricity generation, transmission and distribution network (DN) capacity, the demand for electrical energy will quickly strain the capacity of the existing electrical power network. With anticipated growth and proliferation of Electric vehicles (EVs) and Heat pump (HPs) identified the likelihood that the additional load from EV changing and the HPs operation will require capital investment in the DN. While an area-wide implementation of EVs and HPs will contribute to the decarbonization of the energy system, they represent new challenges for the existing low-voltage (LV) network. Distributed energy resources (DER), operating both as part of the DN and in the off-network mode, have been offered as a means to meet growing electricity demand while maintaining and ever-improving DN reliability, resiliency and power quality. DN planning has traditionally been done by forecasting future growth in demand and estimating peak load that the network should meet. However, new problems are arising. These problems are associated with a high degree of proliferation of EVs and HPs as load imposes on DN. In addition to that, the promotion of electricity generation from renewable energy sources (RES). High distributed generation (DG) penetration and a large increase in load proliferation at low-voltage DNs may have numerous impacts on DNs that create issues that include energy losses, voltage control, fault levels, reliability, resiliency and power quality. To mitigate negative impacts and at a same time enhance positive impacts regarding the new operational state of DN, CHP system integration can be seen as best action to postpone/reduce capital investment needed to facilitate promotion and maximize benefits of EVs, HPs and RES integration in low-voltage DN. The aim of this paper is to generate an algorithm by using an analytical approach. Algorithm implementation will provide a way for optimal placement of the CHP system in the DN in order to maximize the integration of RES and increase in proliferation of EVs and HPs.

Keywords: combined heat & power (CHP), distribution networks, EVs, HPs, RES

Procedia PDF Downloads 187
1522 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems

Authors: Borhan Marzougui

Abstract:

The Road and Transport Authority (RTA) is moving ahead with the implementation of the leadership's vision of exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major and real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop a smart technique to handle them. In fact, new IoT devices are being recruited into botnets that DDoS attackers accumulate for their purposes. The aim of this paper is to provide a relevant understanding of the dangerous types of DDoS attack related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm. This algorithm uses dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) readers are connected within the secured network, and the data they generate are analyzed in real time by intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The obtained results have shown a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.

Keywords: IoT, DDoS, attacks, botnet, security, agents

Procedia PDF Downloads 131
1521 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions in conventional top-down lithography, molecular electronics is promising to help maintain the exponential increase in component densities via using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. They are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well-defined and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations, which are believed to be attributed to molecular vibrations. Molecules are more isolated than semiconductor dots, and so have a discrete phonon spectrum. When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum orbital alignment of molecules to the Fermi energy in the leads is essential. To explore it, a drop of ionic liquid is employed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of Fermi energy to HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1-10nm) of the molecules and good flexibility of the SAM lead to the scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors, etc.

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 46
1520 A Comprehensive Comparative Study on Seasonal Variation of Parameters Involved in Site Characterization and Site Response Analysis by Using Microtremor Data

Authors: Yehya Rasool, Mohit Agrawal

Abstract:

Site characterization and site response analysis are crucial steps for reliable seismic microzonation of an area. The basic parameters involved in these fundamental steps therefore need to be chosen properly in order to efficiently characterize the vulnerable sites of the study region. In this study, efforts are made to delineate the variations in the physical parameters of the soil between the summer and monsoon seasons of the year 2021 by using Horizontal-to-Vertical Spectral Ratios (HVSRs) recorded at five sites of the Indian Institute of Technology (Indian School of Mines), Dhanbad, Jharkhand, India. The data recording at each site was done in such a way that little anthropogenic noise was recorded. The analysis was done for six seismic parameters, namely the predominant frequency, H/V ratio, phase velocity of Rayleigh waves, shear wave velocity (Vs), compressional wave velocity (Vp), and Poisson’s ratio, for both seasons of the year. From the results, it is observed that these parameters vary drastically for the upper layers of soil, which in turn may affect the amplification ratios and probability of exceedance obtained from seismic hazard studies. The HVSR peak is higher in the monsoon, with a shift in predominant frequency compared to the summer season of 2021. A drastic reduction in shear wave velocity (in the upper ~10 m) of approximately 7%-15% is also observed during the monsoon period, together with a slight decrease in compressional wave velocity. Poisson’s ratios are generally found to be higher during the monsoon than during the summer period. Our study may be very beneficial to various agricultural and geotechnical engineering projects.
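
A minimal sketch of how an H/V spectral ratio and the predominant frequency can be computed from a three-component microtremor record is given below; the windowing, smoothing, and horizontal-averaging choices are illustrative, not the exact processing used in the study.

```python
# Minimal HVSR sketch: smoothed Fourier amplitude spectra of the horizontal
# components divided by the vertical component.  Window and smoothing choices
# are illustrative assumptions.
import numpy as np

def hvsr(ns, ew, z, fs, smooth_win=11):
    """H/V spectral ratio from north-south, east-west and vertical records."""
    def spectrum(x):
        return np.abs(np.fft.rfft(np.asarray(x) * np.hanning(len(x))))

    kernel = np.ones(smooth_win) / smooth_win
    def smooth(s):
        return np.convolve(s, kernel, mode="same")

    freqs = np.fft.rfftfreq(len(z), d=1.0 / fs)
    horizontal = np.sqrt(smooth(spectrum(ns)) * smooth(spectrum(ew)))  # geometric mean
    vertical = smooth(spectrum(z))
    ratio = horizontal / np.maximum(vertical, 1e-12)
    predominant_freq = freqs[np.argmax(ratio[1:]) + 1]   # skip the DC bin
    return freqs, ratio, predominant_freq
```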

Keywords: HVSR, shear wave velocity profile, Poisson ratio, microtremor data

Procedia PDF Downloads 75
1519 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in economics and business management to visualize and work with data following a normal distribution. Since technology is interconnected with education these days, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. As such, two groups of undergraduate students from the Business Management program were compared in this study. One group underwent Excel-based instruction, and the other group relied only on traditional teaching methods. We analyzed the experimental data and the BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports learners in comprehending statistical concepts more effectively than the group taught with the traditional method. In addition, students who received Excel-based instruction showed greater ability in visualizing and interpreting data concentrated on the normal distribution.
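
As a small worked example of the normal-distribution calculations such students practise, the snippet below computes a sample mean and standard deviation and a tail probability; the data are made up, and in Excel the same steps would use AVERAGE, STDEV.S, and NORM.DIST.

```python
# Hypothetical example of the normal-distribution calculations taught in the course
# (shown here in Python/scipy rather than Excel; the sales figures are made up).
import numpy as np
from scipy.stats import norm

sales = np.array([102, 98, 110, 95, 105, 99, 108, 101])   # hypothetical data
mu, sigma = sales.mean(), sales.std(ddof=1)

# P(sales > 107) under a normal model with the sample mean and standard deviation
p_above_107 = 1 - norm.cdf(107, loc=mu, scale=sigma)
print(mu, sigma, p_above_107)
```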

Keywords: statistics, excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 40
1518 Study of Evaluation Model Based on Information System Success Model and Flow Theory Using Web-scale Discovery System

Authors: June-Jei Kuo, Yi-Chuan Hsieh

Abstract:

Because of the rapid growth of information technology, more and more libraries introduce new information retrieval systems to enhance the users’ experience, improve retrieval efficiency, and increase the applicability of library resources. Nevertheless, few studies have discussed usability from the users’ perspective. The aims of this study are to understand the scenario of information retrieval system utilization and to learn why users are willing to continue using the web-scale discovery system, in order to improve the system and promote the use of university libraries. Besides questionnaires, observations and interviews, this study employs both the Information System Success Model introduced by DeLone and McLean in 2003 and flow theory to evaluate the system quality, information quality, service quality, use, user satisfaction, flow, and continued use of the web-scale discovery system by students from National Chung Hsing University. The results are analyzed through descriptive statistics and structural equation modeling using AMOS. The results reveal that, for the web-scale discovery system, the user’s evaluation of system quality, information quality, and service quality is positively related to use and satisfaction; however, service quality only affects user satisfaction. User satisfaction and flow show a significant impact on continued use. Moreover, user satisfaction has a significant impact on user flow. Based on the results of this study, maintaining the stability of the information retrieval system, improving the information content quality, and enhancing the relationship between subject librarians and students are recommended for academic libraries. Meanwhile, improving the system user interface, minimizing the number of system-level layers, strengthening data accuracy and relevance, modifying the sorting criteria of the data, and supporting an auto-correct function are recommended for the system provider. Finally, establishing better communication with librarians is recommended for all users.

Keywords: web-scale discovery system, discovery system, information system success model, flow theory, academic library

Procedia PDF Downloads 86
1517 A Review of Current Research and Future Directions on Foodborne Illness and Food Safety: Understanding the Risks and Mitigation Strategies

Authors: Tuji Jemal Ahmed

Abstract:

This paper provides a comprehensive review of current research on foodborne illness and food safety, including the risks associated with foodborne illnesses, the latest research on food safety, and the mitigation strategies used to prevent and control foodborne illnesses. Foodborne illness is a major public health concern that affects millions of people every year. As foodborne illnesses have grown more common and dangerous in recent years, it is vital that we research and build upon methods to ensure food remains safe throughout consumption. Additionally, this paper discusses future directions for food safety research, including emerging technologies, changes in regulations and standards, and collaborative efforts to improve food safety. The first section of the paper provides an overview of the risks of foodborne illness, including a definition of foodborne illness, its causes, the types of foodborne illnesses, high-risk foods for foodborne illness, and the health consequences of foodborne illness. The second section focuses on current research on food safety, including the role of regulatory agencies in food safety, food safety standards and guidelines, emerging food safety concerns, and advances in food safety technology. The third section explores mitigation strategies for foodborne illness, including preventative measures, hazard analysis and critical control points (HACCP), good manufacturing practices (GMPs), and training and education. Finally, the paper examines future directions for food safety research, including hurdle technologies and their impact on food safety, changes in food safety regulations and standards, collaborative efforts to improve food safety, and research gaps and areas for further exploration. In general, this work provides a comprehensive review of current research and future directions in food safety and an understanding of the risks associated with foodborne illness. The implications of the assessment for food safety and public health are discussed, and recommendations are offered for research scholars.

Keywords: food safety, foodborne illness, technologies, mitigation

Procedia PDF Downloads 82
1516 Social Network Roles in Organizations: Influencers, Bridges, and Soloists

Authors: Sofia Dokuka, Liz Lockhart, Alex Furman

Abstract:

Organizational hierarchy, traditionally composed of individual contributors, middle management, and executives, is enriched by an understanding of informal social roles. These roles, identified with organizational network analysis (ONA), might have an important effect on organizational functioning. In this paper, we identify three social roles – influencers, bridges, and soloists – and provide an empirical analysis based on real-world organizational networks. Influencers are employees with broad networks whose contacts also have rich networks. Influence is calculated using PageRank, initially proposed for measuring website importance but now applied in various network settings, including social networks. Influencers, having high PageRank, become key players in shaping opinions and behaviors within an organization. Bridges serve as links between loosely connected groups within the organization and are identified using betweenness and Burt’s constraint. Betweenness quantifies a node’s control over information flows by evaluating how often it lies on the shortest paths within the network. Burt’s constraint measures the extent of interconnection among an individual’s contacts: a high constraint value suggests fewer structural holes and less control over information flows, whereas a low value suggests the contrary. Soloists are individuals with fewer than 5 stable social contacts, who potentially face challenges due to reduced social interaction and a possible lack of feedback and communication. We considered these social roles in the analysis of real-world organizations (N=1,060). Based on data from digital traces (Slack, corporate email, and calendar), we reconstructed an organizational communication network and identified influencers, bridges, and soloists. We also collected employee engagement data through an online survey. Among the top 5% of influencers, 10% are members of the Executive Team, while 56% of Executive Team members are part of the top-influencer group. The same proportion of top influencers (10%) are individual contributors, accounting for just 0.6% of all individual contributors in the company. The majority of influencers (80%) are at the middle management level; of all middle managers, 19% hold the role of influencer. Individual contributors thus represent a small proportion of influencers, and information about those who do hold influential roles can be crucial for management in identifying high-potential talent. Among the bridges, 4% are members of the Executive Team, 16% are individual contributors, and 80% are middle management, so middle management predominantly acts as the bridge. Bridge positions held by members of the executive team might indicate potential micromanagement on the leader’s part, and recognizing the individuals serving as bridges uncovers potential communication problems. The majority of soloists are individual contributors (96%), and 4% of soloists are from middle management; these managers might face communication difficulties. We also found an association between being an influencer and attitude toward the company’s direction: influencers show a statistically significant 20% higher perception that the company is headed in the right direction compared to non-influencers (p < 0.05, Mann-Whitney test). Taken together, we demonstrate that considering social roles in a company can indicate both positive and negative aspects of organizational functioning that should be taken into account in data-driven decision-making.
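
As a concrete illustration of how these three roles can be computed, the following is a minimal sketch in Python using the networkx and scipy libraries on a toy graph. The graph, thresholds, and engagement scores are illustrative assumptions, not the authors’ data or pipeline.

    import networkx as nx
    import numpy as np
    from scipy.stats import mannwhitneyu

    # Toy stand-in for a communication network reconstructed from digital traces.
    G = nx.karate_club_graph()

    pagerank = nx.pagerank(G)                   # influencers: high PageRank
    betweenness = nx.betweenness_centrality(G)  # bridges: high betweenness ...
    constraint = nx.constraint(G)               # ... combined with low Burt's constraint
    degree = dict(G.degree())

    top_k = max(1, int(0.05 * G.number_of_nodes()))
    influencers = set(sorted(pagerank, key=pagerank.get, reverse=True)[:top_k])
    bridges = [n for n in G if betweenness[n] > 0.05 and constraint[n] < 0.35]  # assumed cut-offs
    soloists = [n for n in G if degree[n] < 5]  # fewer than 5 stable contacts

    # Fabricated engagement scores, compared between influencers and the rest.
    rng = np.random.default_rng(0)
    scores = {n: rng.normal(3.5 + (0.7 if n in influencers else 0.0), 0.5) for n in G}
    stat, p = mannwhitneyu([scores[n] for n in G if n in influencers],
                           [scores[n] for n in G if n not in influencers])
    print(sorted(influencers), bridges, soloists, round(p, 4))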

Keywords: organizational network analysis, social roles, influencer, bridge, soloist

Procedia PDF Downloads 89
1515 Comparison of Fuel Properties from Species of Microalgae and Selected Second-Generation Oil Feedstocks

Authors: Andrew C. Eloka Eboka, Freddie L. Inambao

Abstract:

A comparative investigation and assessment of microalgal technology as a biodiesel production option was carried out alongside other second-generation feedstocks. This was done by comparing the fuel properties of the microalgae Chlorella vulgaris, Dunaliella spp., Synechococcus spp., and Scenedesmus spp. with feedstocks of Jatropha (ex-basirika variety), Hura crepitans, rubber, and Natal mahogany seed oils. The microalgae were cultivated in an open pond and in a photobioreactor (New Brunswick BioFlo/CelliGen model BF-115, made in the US) with the following operating parameters: 14 L capacity, working volume of 7.5 L of media including 10% inoculum, optical density of 3.144 at 540 nm, and light intensity of 200 lux, for 23 and 16 days respectively. The biomass produced was harvested by draining, flocculation, centrifugation, and drying, and then subjected to lipid extraction. The oils extracted from the algae and the other feedstocks were characterised and used to produce biodiesel fuels by the transesterification method, using a modified optimization protocol. The final biodiesel products were evaluated for their physico-chemical and fuel properties. Results revealed Chlorella vulgaris as the best strain for biomass cultivation, with the highest lipid productivity (5.2 mgL-1h-1), the highest rate of CO2 absorption (17.85 mgL-1min-1), and an average carbon sequestration in the form of CO2 of 76.6%. The highest biomass productivity was 35.1 mgL-1h-1 (Chlorella), while Scenedesmus had the lowest output (3.75 mgL-1h-1, 11.73 mgL-1min-1). All species adapted well to pH values ranging from 6.5 to 8.5. The fuel properties of the microalgal biodiesel, in comparison with Jatropha, rubber, Hura, and Natal mahogany, were within ASTM specifications, with AGO used as the control. Fuel cultivation from microalgae is feasible and will revolutionise the biodiesel industry.

Keywords: biodiesel, fuel properties, microalgae, second generation, seed oils, feedstock, photo-bioreactor, open pond

Procedia PDF Downloads 352
1514 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through physical, behavioral, and principled relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper presents the CMA as an evolutionary approach to science, technology, and life. The model addresses the challenges of deep uncertainty about the future, emphasizing the need for flexibility so that the reasoning methodology can adapt to changing conditions in the new era, especially post-pandemic. In this paper, we show how to draw an analogy to scientific research in order to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, much as it operates within and between brain regions. The problem-solving process is divided into distinct phases: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on better visualizing the brain’s mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to the matching conditions of analogical reasoning and relational information, the encoding, mapping, inference, and response processes, and the verification of perceptual responses in four-term analogical reasoning. Finally, we relate all these terms to mental leaps, mental maps, mental hops, and mental loops to make the mental model of CMA clear.

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 156
1513 Fault Prognostic and Prediction Based on the Importance Degree of Test Point

Authors: Junfeng Yan, Wenkui Hou

Abstract:

Prognostics and Health Management (PHM) is a technology for monitoring equipment status and predicting impending faults. It is used to predict potential faults, provide fault information, and track trends of system degradation by capturing characteristic signals, so how these characteristic signals are detected is very important. The selection of test points plays a central role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large, complicated systems, the dependency model is sometimes not easy to build, and the greater difficulty is calculating the associated matrix. On this premise, this paper provides a highly effective method for selecting test points without a dependency model. The signal flow model is a diagnosis model based on failure modes, focusing on the system’s failure modes and the dependency relationships between test points and faults; in the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system’s testability or prediction metrics and the test points. Further, through partial differentiation, we can obtain every test point’s importance degree in determining the testability metrics, such as the undetected rate, false alarm rate, and untrusted rate. This makes it possible to install test points according to real requirements and also provides a solid foundation for Prognostics and Health Management. Judging from its effect in a practical engineering application, the method is very efficient.
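
To make the partial-derivative step concrete, the following is a minimal sketch in Python using sympy, assuming a toy series structure in which the system’s undetected rate is the product of the individual test points’ miss probabilities. The metric form, symbols, and numeric values are illustrative assumptions, not the paper’s actual relationship function.

    import sympy as sp

    # Detection probabilities of three hypothetical test points.
    d1, d2, d3 = sp.symbols('d1 d2 d3', positive=True)

    # Toy relationship function: undetected rate for a series arrangement.
    undetected_rate = (1 - d1) * (1 - d2) * (1 - d3)

    # Importance degree of each test point = partial derivative of the metric.
    importance = {d: sp.diff(undetected_rate, d) for d in (d1, d2, d3)}

    # Evaluate at assumed operating values and rank the test points.
    values = {d1: 0.9, d2: 0.7, d3: 0.5}
    ranked = sorted(importance.items(),
                    key=lambda kv: abs(kv[1].subs(values)), reverse=True)
    for var, expr in ranked:
        print(var, expr, float(abs(expr.subs(values))))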

Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate

Procedia PDF Downloads 365
1512 A Case Study of Rainfall Derived Inflow/Infiltration in a Separate Sewer System in Gwangju, Korea

Authors: Bumjo Kim, Hyun Jin Kim, Joon Ha Kim

Abstract:

A separate sewer system collects wastewater in a sewer pipe and rainfall in a stormwater pipe; the sewage is then treated at the wastewater treatment plant, while the stormwater is discharged to rivers or lakes through stormwater drainage pipes. Unfortunately, even in separate sewer systems, it is not possible to completely prevent rainfall-derived inflow/infiltration (RDII) into the sewer pipe. Even if the sewer line is renovated, some RDII is unavoidable because of combined plumbing inside houses or the difficulty of maintaining sewers in private areas. A basic statistical analysis was performed using environmental data, including rainfall, sewage flow, water quality, and groundwater level, in a district of Gwangju, South Korea. During rainfall in the target area, RDII increased the sewer flow rate by 13.4 ~ 53.0% compared to that of a clear day and showed a rapid hydrograph response of 0.3 ~ 3.0 hr. Water quality analysis showed that, during rainfall, BOD5 concentration decreased by 17.3% and salinity decreased by 8.8% at the representative spot in the project area compared to a sunny day. The seasonal fluctuation range of groundwater in the Gwangju area was 0.38 m ~ 0.55 m and the monthly fluctuation range was 0.58 m ~ 0.78 m, whereas the difference between the groundwater level and the depth at which the sewer pipes are laid was 2.70 m on average, larger than either range of fluctuation. Taken together, it can be concluded that the increased flow rate in the sewer line was not due to infiltration caused by groundwater level rise, construction failure, cracking due to joint failure, or conduit deterioration; rather, rainfall flowed directly and rapidly into the sewer line. Acknowledgements: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
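
As an illustration of the dry-day versus rainy-day comparison described above, here is a minimal sketch in Python that estimates RDII as wet-weather sewer flow minus a dry-weather baseline and expresses it as a percentage increase. The hourly flow values are fabricated for demonstration and are not the Gwangju monitoring data.

    import numpy as np

    # Assumed hourly sewer flow (m3/hr): dry-weather baseline vs. during rainfall.
    dry_flow = np.array([52.0, 50.0, 51.0, 53.0, 52.0, 50.0])
    wet_flow = np.array([60.0, 71.0, 78.0, 74.0, 66.0, 58.0])

    # RDII estimate: the flow in excess of the dry-weather baseline.
    rdii = wet_flow - dry_flow
    increase_pct = 100.0 * rdii / dry_flow

    print("RDII (m3/hr):", rdii)
    print("Increase over dry-day flow (%):", np.round(increase_pct, 1))
    print("Peak response (index of max excess flow):", int(np.argmax(rdii)))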

Keywords: ground water, rainfall, rainfall driven inflow/infiltration, separate sewer system

Procedia PDF Downloads 144
1511 The Importance of Previous Examination Results in Future Differential Diagnostic Procedures, Especially in the Era of Covid-19

Authors: Angelis P. Barlampas

Abstract:

Purpose or Learning Objective: It is well known that previous examinations play a major role in future diagnosis, helping to avoid unnecessary new examinations that cost both the patient and the health system time and money. A case is presented in which the patient's past results, combined with the fewest necessary new tests, yielded an easy final diagnosis. Methods or Background: A middle-aged man visited the emergency department complaining of poorly controlled, persistent fever over the previous few days. Laboratory tests showed an elevated white blood cell count with a neutrophil shift and abnormal CRP. The patient had been admitted to hospital a month earlier for persisting lung symptoms after a recent covid-19 infection. Results or Findings: Computed tomography showed a solid mass with spiculated margins in the right lower lobe. After intravenous iodine contrast administration, there was mild peripheral enhancement and an eccentric non-enhancing area. Lung cancer was suspected. Comparison with the patient's most recent prior computed tomography revealed no mass in the area of interest, only signs of recent post-covid-19 lung parenchymal abnormalities. A new mass appearing within a one-month span is very unlikely to be a cancer and points instead to a benign lesion, and an abscess was clearly the most fitting explanation. The patient was admitted to hospital and given antibiotic therapy, with very good results; after a few days, the patient was afebrile and in good condition. Conclusion: In this case, a PET scan or a biopsy was avoided thanks to the patient's medical history and the availability of previous examinations. It is worth encouraging patients to keep their medical records, and worth organizing the health system more efficiently with current technology for archiving medical examinations.

Keywords: covid-19, chest ct, cancer, abscess, fever

Procedia PDF Downloads 48
1510 Mathematics Professional Development: Uptake and Impacts on Classroom Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers’ learning and in their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD, and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry is examined through a group-randomized experimental design. A close-up examination of a small group of teachers is included to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two US states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered, beginning with a one-week institute (Summer 2016) followed by four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, both the LTG PD intervention and instruction that engaged students in mathematical practices positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes of lessons in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers about their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used. The full paper will include the case study of Ana to illustrate the factors involved in what teachers take up and use from participating in the LTG PD.
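
As an illustration of the kind of treatment-versus-comparison, pre/post analysis described above, the following is a minimal sketch in Python using a mixed-effects model from statsmodels as a stand-in for the repeated measures ANOVA. All data, variable names (score, time, group, teacher), and effect sizes are fabricated assumptions for demonstration, not the study’s dataset or analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Fabricated pre/post scores for treatment and comparison teachers.
    rng = np.random.default_rng(0)
    rows = []
    for teacher in range(40):
        group = "treatment" if teacher < 20 else "comparison"
        baseline = rng.normal(50, 5)
        for time in ("pre", "post"):
            gain = 4.0 if (group == "treatment" and time == "post") else 0.0
            rows.append({"teacher": teacher, "group": group, "time": time,
                         "score": baseline + gain + rng.normal(0, 2)})
    df = pd.DataFrame(rows)

    # Random intercept per teacher; the time:group interaction captures the
    # differential pre-to-post change between treatment and comparison groups.
    model = smf.mixedlm("score ~ time * group", df, groups=df["teacher"])
    result = model.fit()
    print(result.summary())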

Keywords: geometry, mathematics professional development, pedagogical content knowledge, teacher learning

Procedia PDF Downloads 110