Search results for: media credibility components
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6874

1144 Production of Rhamnolipids from Different Resources and Estimating the Kinetic Parameters for Bioreactor Design

Authors: Olfat A. Mohamed

Abstract:

Rhamnolipid biosurfactants have distinct properties that give them importance in many industrial applications, especially promising future applications in the cosmetic and pharmaceutical industries. These applications have encouraged the search for diverse and renewable resources to control the cost of production. This research aims to produce rhamnolipids from different oily wastewater sources, such as petroleum crude oil (PO) and vegetable oil (VO), using Pseudomonas aeruginosa ATCC 9027; the experimental results were then applied to find a suitable mathematical model for obtaining the design criteria of a batch bioreactor. Different concentrations of PO and VO were added to the media broth separately (0.5, 1, 1.5, 2, and 2.5% v/v and 2, 4, 6, 8, and 10% v/v, respectively). The effect of the initial concentration of oil residues, and of the addition of glycerol and palmitic acid as inducers, on rhamnolipid production and the surface tension of the broth was investigated. Initial substrate concentrations of 2% PO and 6% VO were found to be the best for rhamnolipid production (2.71 and 5.01 g rhamnolipid/L, respectively). Addition of glycerol (10-20% v glycerol/v PO) to the 2% PO fermentation broth increased rhamnolipid production about 1.8- to 2-fold. However, the addition of palmitic acid (5 and 10 g/L) to fermentation broth containing 6% VO barely enhanced the production rate. The experimental data for 2% initial PO were used to estimate the various kinetic parameters.
The following results were obtained: maximum reaction rate Vmax = 0.06417 g/L·h; yield of cell weight per unit weight of substrate utilized Yx/s = 0.324 g Cx/g Cs; maximum specific growth rate μmax = 0.05791 h⁻¹; yield of rhamnolipid weight per unit weight of substrate utilized Yp/s = 0.2571 g Cp/g Cs; maintenance coefficient Ms = 0.002419; Michaelis-Menten constant Km = 6.1237 gmol/L; endogenous decay coefficient Kd = 0.002375 h⁻¹. Predictive parameters and advanced mathematical models were applied to evaluate the batch bioreactor time. The results were 123.37, 129, and 139.3 hours with respect to microbial biomass, substrate, and product concentration, respectively, compared with an experimental batch time of 120 hours in all cases. The proposed mathematical models are compatible with the laboratory results and can therefore be considered tools for describing the actual system.
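The batch-time estimate above can be sketched by integrating Monod-type growth and substrate balances with the reported kinetic parameters. This is a minimal illustration, not the authors' model: the initial biomass and substrate concentrations (X0, S0) and the stopping criterion are assumptions, since the abstract does not state them.

```python
# Sketch of batch-time estimation from the Monod-type kinetic parameters
# reported above. X0, S0 and S_end are illustrative assumptions, not values
# taken from the study.
mu_max = 0.05791   # 1/h, maximum specific growth rate
Km     = 6.1237    # saturation (Michaelis-Menten) constant
Yxs    = 0.324     # g biomass / g substrate
Ms     = 0.002419  # maintenance coefficient
Kd     = 0.002375  # 1/h, endogenous decay coefficient

def batch_time(X0=0.1, S0=20.0, S_end=1.0, dt=0.01):
    """Integrate dX/dt = (mu - Kd) X and dS/dt = -(mu/Yxs + Ms) X
    by explicit Euler until substrate falls to S_end; return time in hours."""
    X, S, t = X0, S0, 0.0
    while S > S_end and t < 1000.0:
        mu = mu_max * S / (Km + S)   # Monod specific growth rate
        X += (mu - Kd) * X * dt
        S += -(mu / Yxs + Ms) * X * dt
        t += dt
    return t

print(round(batch_time(), 1))  # batch time in hours for the assumed X0, S0
```

With these assumed initial conditions the sketch lands in the same order of magnitude (roughly 100 hours) as the reported batch times.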

Keywords: batch bioreactor design, glycerol, kinetic parameters, petroleum crude oil, Pseudomonas aeruginosa, rhamnolipids biosurfactants, vegetable oil

Procedia PDF Downloads 121
1143 Effects of Brewer's Yeast Peptide Extract on the Growth of Probiotics and Gut Microbiota

Authors: Manuela Amorim, Cláudia S. Marques, Maria Conceição Calhau, Hélder J. Pinheiro, Maria Manuela Pintado

Abstract:

Peptides from different food sources have recently been recognized as having biological activities. However, no relevant study has proven the potential of brewer's yeast peptides in the modulation of gut microbiota. The importance of the human intestinal microbiota in maintaining host health is well known. Probiotics, prebiotics, and the combination of these two components can contribute to an adequate balance of the bacterial population in the human large intestine. The survival of many bacterial species inhabiting the large bowel depends essentially on the substrates made available to them, most of which come directly from the diet. Some of these substrates can be considered prebiotics: food ingredients that selectively stimulate the growth of beneficial colonic bacteria such as Lactobacilli or Bifidobacteria. Moreover, conventional foods can be used as vehicles for bioactive compounds that provide such health benefits and increase well-being. The main objective of this work was therefore to study the potential prebiotic activity of a brewer's yeast peptide extract (BYP), obtained via hydrolysis of yeast proteins by cardosins present in Cynara cardunculus extract, for possible use as a functional ingredient. To evaluate the effect of BYP on the modulation of gut microbiota in a diet-induced obesity model, Wistar rats were fed either a standard or a high-fat diet. Genera of beneficial bacteria (Lactobacillus spp. and Bifidobacterium spp.) and three main phyla (Firmicutes, Bacteroidetes, and Actinobacteria) were quantified via 16S ribosomal RNA (rRNA) expression by quantitative PCR (qPCR). Results showed that the relative abundance of Lactobacillus spp., Bifidobacterium spp., and Bacteroidetes was significantly increased (P < 0.05) by BYP. Consequently, the potential health-promoting effects of BYP through modulation of the gut microbiota were demonstrated in vivo. Altogether, these findings highlight the potential of BYP as a gut microbiota enhancer promoting a healthy lifestyle, and its incorporation into new food products brings associated benefits, endorsing a new trend in the development of value-added food products.
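Relative abundances from qPCR cycle-threshold (Ct) values are commonly expressed with the 2^-ΔΔCt method, one plausible way to arrive at the fold-change results described above. The Ct values below are invented for illustration; the study's raw data are not in the abstract.

```python
# Sketch of relative-abundance quantification from qPCR Ct values via the
# 2^-ddCt method. Ct values are hypothetical; ~100% PCR efficiency assumed.
def relative_abundance(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target taxon vs. the control group, normalized to a
    reference signal (e.g. universal 16S)."""
    d_ct_sample = ct_target - ct_reference
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Lactobacillus spp. in BYP-fed vs. control rats (hypothetical Ct values):
fold = relative_abundance(ct_target=24.0, ct_reference=18.0,
                          ct_target_ctrl=26.0, ct_reference_ctrl=18.0)
print(fold)  # -> 4.0  (two cycles earlier => 2^2-fold more abundant)
```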

Keywords: functional ingredients, gut microbiota, prebiotics, brewer yeast peptide extract

Procedia PDF Downloads 486
1142 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are among the biggest threats to civil engineering structures, costing billions of dollars and thousands of lives around the world every year. Various experimental techniques, such as pseudo-dynamic tests (a nonlinear structural dynamic technique), real-time pseudo-dynamic tests, and shaking table tests, can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, nearly truly reproducing earthquake inputs. This paper deals with the use of the shaking table test method to check the response of a structure subjected to an earthquake. Shake tables include vertical, horizontal, servo-hydraulic, and servo-electric types. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., X, Y, and Z directions) shake table. A three-DOF shaking table is a useful experimental apparatus, as it imitates a desired real-time acceleration vibration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by the trial-and-error method. The table is designed to have a capacity of up to 981 N. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model is used for recording the data. The experimental results are further validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness, deflection, etc.), as many uncertainties are involved in scaling a small-scale model. The model shows modal forms and gives rough deflection values. The experimental results demonstrate the shake table to be among the most effective methods available for seismic assessment of structures.

Keywords: accelerometer, three-degree-of-freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 130
1141 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars, with their fast innovation cycles and disruptive character, offer a high degree of freedom regarding innovative design for remanufacturing. Remanufacturing increases not only resource efficiency but also economic efficiency through a prolonged product lifetime. The reduced powertrain wear of electric cars, combined with high manufacturing costs for batteries, allows new business models and even second-life applications. Modular and interchangeable battery pack designs enable the replacement of defective or outdated battery cells, allowing additional cost savings and a prolonged lifetime. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components, and how to address their potential with elaborate designs. Based on a brief overview of remanufacturing structures implemented in different industries, opportunities for transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium-ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material costs but also capital costs for equipment and factory facilities to support the remanufacturing process, the cost-benefit analysis predicts that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated from a broad database drawn from different research projects focusing on the recycling, second use, and assembly of lithium-ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially resource consumption and global warming potential. Suitable design guidelines for future remanufactured lithium-ion batteries, which consider modularity, interfaces, and disassembly, are used to illustrate the findings by way of example. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.
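The cost-benefit comparison described above amortizes capital and facility costs over production volume alongside variable costs. The sketch below shows the structure of such a comparison only; every figure is a hypothetical placeholder, since the study's actual cost data are not given in the abstract.

```python
# Back-of-the-envelope sketch of a per-unit cost-benefit comparison between
# a newly manufactured and a remanufactured battery. All numbers are
# hypothetical placeholders, not data from the study.
def unit_cost(labor, material, capital, facility, volume):
    """Per-battery cost: variable costs plus annualized fixed costs / volume."""
    return labor + material + (capital + facility) / volume

new_battery = unit_cost(labor=900, material=5200, capital=2.0e6,
                        facility=1.0e6, volume=10_000)
remanufactured = unit_cost(labor=1400, material=1800, capital=1.5e6,
                           facility=0.8e6, volume=10_000)
print(remanufactured < new_battery)  # reuse of cells cuts material cost most
```

In this toy setup remanufacturing wins despite higher labor cost, because reused cells slash the dominant material term, mirroring the qualitative conclusion of the abstract.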

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 347
1140 Distributed Framework for Pothole Detection and Monitoring Using Federated Learning

Authors: Ezil Sam Leni, Shalen S.

Abstract:

Transport service monitoring and upkeep are essential components of smart city initiatives. Ever-increasing vehicular traffic and deteriorating road conditions are the main risks facing the relevant departments and authorities. In India, the road transport sector greatly impacts the economy. In 2021, the Ministry of Road Transport and Highways, Government of India, produced a report with statistical data on traffic accidents, including the number of fatalities, injuries, and other pertinent criteria. This study proposes a distributed infrastructure for the monitoring, detection, and reporting of potholes to the appropriate authorities. In this distributed environment, the nodes are the edge devices, the local edge servers, and the global server. The edge devices receive the initial model from the global server and run a YOLOv8 model for pothole detection, gathering pothole images along their path and sending updates to the nearest edge server. Each local edge server selects the clients for its aggregation process, aggregates the model updates, and sends them to the global server. The global server collects the updates from the local edge servers, performs aggregation, and derives the updated model, which incorporates the pothole information received from the local edge servers; it then notifies the local edge servers and the concerned authorities for monitoring and maintenance of road conditions. The entire process is implemented in the FedCV distributed environment using the client-server model and aggregation entities. Performance indicators and the experimentation environment are assessed, discussed, and presented. In future development of this study, accelerometer data may be considered alongside the images captured from transportation routes for improved performance.
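The aggregation step performed by the edge and global servers can be sketched with federated averaging (FedAvg), the standard update rule the "federated averaging" keyword points to. Model weights are plain lists of floats here for clarity; a real deployment would aggregate YOLOv8 parameter tensors instead.

```python
# Minimal sketch of the federated averaging (FedAvg) step described above:
# clients send model updates, which a server aggregates weighted by each
# client's local dataset size.
def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters by local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three edge devices report updated parameters and local sample counts.
clients = [[0.2, 0.4], [0.4, 0.8], [0.6, 1.2]]
sizes = [10, 10, 20]
print(fed_avg(clients, sizes))  # -> [0.45, 0.9]
```

The same function serves both tiers: local edge servers average their selected clients, and the global server averages the local servers' results.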

Keywords: federated learning, pothole detection, distributed framework, federated averaging

Procedia PDF Downloads 87
1139 The Foucaultian Relationship between Power and Knowledge: Genealogy as a Method for Epistemic Resistance

Authors: Jana Soler Libran

Abstract:

The primary aim of this paper is to analyze the relationship between power and knowledge suggested in Michel Foucault's theory. Taking into consideration the role of power in knowledge production, the goal is to evaluate to what extent genealogy can be presented as a practical method for epistemic resistance. To do so, the methodology consists of a revision of Foucault's literature on the topic. In this sense, conceptual analysis is applied in order to understand the effect of the double dimension of power on knowledge production. In its negative dimension, power is conceived as an organ of repression, vetoing certain instances of knowledge considered deceitful. In its positive dimension, by contrast, power works as an organ of the production of truth by means of institutionalized discourses. This double character of power leads to the first main findings of the present analysis: no truth or knowledge can lie outside power's action, and power is constituted through accepted forms of knowledge. To support these statements, Foucaultian discourse formations are evaluated, presenting external exclusion procedures as paradigmatic practices that demonstrate how power creates and shapes the validity of certain epistemes. Thus, taking into consideration power's mechanisms for producing and reproducing institutionalized truths, this paper accounts for the Foucaultian praxis of genealogy as a method to reveal power's intentions, instruments, and effects in the production of knowledge. In this sense, genealogy is proposed as a practice which, firstly, reveals which instances of knowledge are subjugated to power and, secondly, promotes these peripheral discourses as a form of epistemic resistance. To counterbalance these main theses, objections to Foucault's work from Nancy Fraser, Linda Nicholson, Charles Taylor, Richard Rorty, Alvin Goldman, and Karen Barad are discussed.
In essence, the understanding of the Foucaultian relationship between power and knowledge is essential to analyze how contemporary discourses are produced by both traditional institutions and new forms of institutionalized power, such as mass media or social networks. Therefore, Michel Foucault's practice of genealogy is relevant, not only for its philosophical contribution as a method to uncover the effects of power in knowledge production but also because it constitutes a valuable theoretical framework for political theory and sociological studies concerning the formation of societies and individuals in the contemporary world.

Keywords: epistemic resistance, Foucault’s genealogy, knowledge, power, truth

Procedia PDF Downloads 112
1138 The Relationship between Physical Fitness and Academic Performance among University Students

Authors: Bahar Ayberk

Abstract:

The study was conducted to determine the relationship between physical fitness and academic performance among university students. The far-famed saying 'a sound mind in a sound body', referring to the potential contribution of physical fitness to individuals' intellectual development, appears to be endorsed by a growing body of literature on the impact of physical fitness on academic achievement, especially in elementary and middle-school-aged children. Yet, despite the numerous positive effects of being physically active, the effect of physical fitness on academic achievement remains unclear for university students. The subjects of this study were 25 students (20 female and 5 male) enrolled in the Physiotherapy and Rehabilitation Department of the Health Science Faculty, Yeditepe University. All participants filled in a questionnaire about their socio-demographic status, general health status, and physical activity status. Health-related physical fitness testing included several core components: 1) body composition evaluation (body mass index, waist-to-hip ratio), 2) cardiovascular endurance evaluation (Queen's College step test), 3) muscle strength and endurance evaluation (sit-up test, push-up test), and 4) flexibility evaluation (sit-and-reach test). Academic performance was evaluated using the students' Cumulative Grade Point Average (CGPA). The prevalence of regular physical activity among the subjects was 40% (n = 10). CGPA scores were compared between students with and without regular physical activity (2.71 ± 0.46 vs. 3.02 ± 0.28, respectively; p = 0.076); the difference did not reach statistical significance. The results also revealed a positive correlation between sit-up and push-up scores and academic performance (CGPA) (r = 0.43, p ≤ 0.05) and a negative correlation between the cardiovascular endurance parameter (Queen's College step test) and CGPA (r = -0.47, p ≤ 0.05). In conclusion, the findings confirmed that physical fitness level was generally associated with academic performance in the study group. Cardiovascular endurance and muscle strength and endurance were associated with students' CGPA, whereas body composition and flexibility were unrelated to CGPA.
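The correlation coefficients reported above are Pearson r values. The computation can be sketched as follows; the data points are made up for demonstration, since the study's raw scores are not published in the abstract.

```python
# Illustrative Pearson correlation between a fitness-test score and CGPA.
# The data are hypothetical, not the study's measurements.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

situps = [20, 25, 30, 35, 40]        # hypothetical sit-up counts
cgpa = [2.5, 2.7, 2.9, 3.0, 3.3]     # hypothetical CGPAs
print(round(pearson_r(situps, cgpa), 3))
```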

Keywords: academic performance, health-related physical fitness, physical activity, physical fitness testing

Procedia PDF Downloads 156
1137 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in South Indian population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease, and anthropometric measurements; however, far fewer of these have been autopsy-based. It has been suggested that while it would be expensive, difficult, and even harmful to subject living patients to imaging modalities such as CT scans and procedures involving contrast media in order to study mild atherosclerosis, no such harm arises in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI), and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA), and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. Atherosclerosis was graded according to the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and derived indices were found to be significantly correlated with each other in both genders, except for age, which showed a significant correlation only with the WHR. In both genders, severe atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA was significantly related to the WHR in males, while the grade of atherosclerosis in the LCA and LADA was significantly related to the WHR in females. In both males and females, a significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC. Anthropometric measurements and indices of obesity can thus be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person whose anthropometric measurements suggest mild atherosclerosis can be advised to modify their lifestyle and reduce their exposure to other risk factors, while those with measurements suggesting a higher degree of atherosclerosis can undergo confirmatory procedures so that effective treatment can be started.

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 61
1136 An Analysis of the Recent Flood Scenario (2017) of the Southern Districts of the State of West Bengal, India

Authors: Soumita Banerjee

Abstract:

The State of West Bengal is watered by innumerable rivers, which differ in nature between the northern and southern parts of the state. The southern part of West Bengal is drained mainly by the river Bhagirathi-Hooghly, whose major distributaries and tributaries divide this river basin into many sub-basins, such as the Ichamati-Bidyadhari, Pagla-Bansloi, Mayurakshi-Babla, Ajay, Damodar, and Kangsabati, to name a few. These rivers drain the districts of Bankura, Burdwan, Hooghly, Nadia, Purulia, Birbhum, Midnapore, Murshidabad, North 24-Parganas, Kolkata, Howrah, and South 24-Parganas. The southern part of the state contains a large number of flood-prone blocks. The factors responsible for the flood situation are the shape and size of the catchment area, its steep gradient from plateau to flat terrain, river bank erosion and siltation, tidal conditions (especially in the lower Ganga basin), and the very poor maintenance of embankments, which are mostly used as communication links. Along with these factors, the DVC (Damodar Valley Corporation) plays an important role in both generating (through releases of water) and controlling flood situations. This year the whole of Gangetic West Bengal was flooded due to high-intensity, long-duration rainfall and the release of water from the Durgapur Barrage. As most of the rivers are interstate in nature, floods also occur at times with the release of water from dams in neighbouring states such as Jharkhand. Other than embankments, there are no structural measures for combatting floods in West Bengal. This paper analyses the reasons behind this year's flood situation, with the help of climatic data collected from the India Meteorological Department, flood-related data from the Irrigation and Waterways Department, West Bengal, and GPM (Global Precipitation Measurement) data for rainfall analysis. Based on a threshold value derived from the available past flood data, it is possible to predict flood events that may occur in the near future, and with the help of social media such warnings can be spread within a very short span of time to alert the public. On a larger, governmental scale, raising the settlements situated on either bank of a river can yield better results than building embankments.

Keywords: dam failure, embankments, flood, rainfall

Procedia PDF Downloads 215
1135 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies

Authors: Philipp Galkin

Abstract:

Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc., would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking: there may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two limitations of energy policy research. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China; however, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in the KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document.
This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors.
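The idea of recording each policy measure as a single entry with qualitative and quantitative attributes can be sketched as a simple record type. The field names and the sample values below are illustrative assumptions, not the actual KEPD schema.

```python
# Minimal sketch of one policy measure as a single database entry with
# qualitative and quantitative attributes. Field names and values are
# illustrative, not the real KEPD schema.
from dataclasses import dataclass, asdict

@dataclass
class PolicyMeasure:
    document: str        # source legal document
    sector: str          # energy sector covered
    measure_type: str    # e.g. quota, tax, capacity cap
    geography: str       # geographical extent
    start_year: int
    end_year: int        # or expected expiry
    target_value: float  # quantitative component, if any
    target_unit: str

entry = PolicyMeasure(
    document="Coal industry regulation (illustrative)",
    sector="coal",
    measure_type="capacity cap",
    geography="national",
    start_year=2016,
    end_year=2020,
    target_value=4.1e9,
    target_unit="tonnes/year",
)
print(asdict(entry)["sector"])  # -> coal
```

A library of such entries supports the analyses listed above: filtering by sector or geography, tracing evolution via start/end years, and joining on attributes with exogenous datasets.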

Keywords: China, energy policy, policy analysis, policy database

Procedia PDF Downloads 315
1134 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid-phase extraction (SPE) and elution from a weak cation exchanger cartridge. The SPE C18 method yields a final product of high purity, while elution from the weak cation exchanger cartridge is pH-dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of pH problems or colloidal impurities. Material and Methods: To form colloidal impurities, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to remove impurities to the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial through a 0.22 µm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: UV and radioactivity detector results from the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method.
Conclusion: The improved purification method can be used as an additional method to remove impurities that may result from the lanthanide-peptide synthesis in which the weak cation exchange purification technique is used as the last step. The purification of the final product and the GMP compliance (the final aseptic filtration and the sterile disposable system components) are two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 152
1133 Assessment of OTA Contamination in Rice from Fungal Growth Alterations in a Scenario of Climate Changes

Authors: Carolina S. Monteiro, Eugénia Pinto, Miguel A. Faria, Sara C. Cunha

Abstract:

Rice (Oryza sativa) production plays a vital role in reducing hunger and poverty and assumes particular importance in low-income and developing countries. Rice is a sensitive plant, and production occurs strictly where suitable temperature and water conditions are found. Climatic changes are likely to affect production worldwide: models have predicted increased temperatures, variations in atmospheric CO₂ concentrations, and modifications in precipitation patterns. The ongoing climatic changes therefore threaten rice production by increasing biotic and abiotic stress factors, and crops will grow under different environmental conditions in the coming years. The effects will be regional and can be detrimental or advantageous depending on the region. Mediterranean zones have been identified as possible hot spots, where dramatic temperature changes and modifications of CO₂ levels and rainfall patterns are predicted. The current estimated atmospheric CO₂ concentration is around 400 ppm, and it is predicted to reach up to 1000-1200 ppm, which could lead to a temperature increase of 2-4 °C. Rainfall patterns are also expected to change, with more extreme wet/dry episodes. As a result, the migration of pathogens could increase, and a shift in the occurrence of mycotoxins, in both type and concentration, is expected. Mycotoxigenic spoilage fungi, especially Penicillium species, can colonize the crops and be present throughout the rice food supply chain, mainly resulting in ochratoxin A (OTA) contamination. In this scenario, the objective of the present study is to evaluate the effect of temperature (20 vs. 25 °C), CO₂ (400 vs. 1000 ppm), and water stress (0.93 vs. 0.95 water activity) on growth and OTA production by a Penicillium nordicum strain in vitro, on rice-based media and when colonizing layers of raw rice. The results demonstrate the effect of temperature, CO₂, and drought on OTA production in a rice-based environment, thus contributing to the development of predictive models for mycotoxins under climate change scenarios. Improving surveillance and monitoring systems for mycotoxins, whose occurrence may become more frequent due to climatic changes, is therefore relevant and necessary. The development of prediction models for hazardous contaminants present in foods that are highly sensitive to climatic changes, such as mycotoxins, is of paramount importance in the highly probable new agricultural scenarios.

Keywords: climate changes, ochratoxin A, penicillium, rice

Procedia PDF Downloads 58
1132 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes for which data are reported differ with business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form a dataset constructed for each time point that contains all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and data validation; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used as it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where it was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that are more likely to extract the best solution. For freight delivery management, schemas of genetic algorithm structure are used as a more effective technique. Accordingly, an adaptive genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor.
The authors suggest a methodology for the multi-objective analysis, which evaluates collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer service in the multi-modal transportation network. In the multi-objective analysis, authors include safety components, the number of accidents a year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value in the management of multi-modal transportation processes.
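The Grubbs test used for data cleaning above can be implemented in a few lines. The sketch below flags the single most extreme value at the 99% confidence level the authors use; the fuel-consumption-like readings are invented for illustration.

```python
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    """One-sided Grubbs test: flag the single most extreme value.

    Returns the index of the outlier, or None if the Grubbs statistic
    does not exceed the critical value at significance level alpha.
    """
    x = np.asarray(values, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = int(np.argmax(np.abs(x - mean)))        # most extreme observation
    g = abs(x[idx] - mean) / sd                   # Grubbs statistic
    # Critical value derived from the t distribution
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return idx if g > g_crit else None

# Hypothetical fuel-consumption readings with one reporting error
readings = [31.2, 29.8, 30.5, 30.1, 29.9, 30.4, 30.0, 55.0]
print(grubbs_outlier(readings, alpha=0.01))
```

Since the test removes at most one value per pass, data cleaning typically reapplies it until no further outlier is flagged.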

Keywords: multi-objective, analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 171
1131 Co-Synthesis of Exopolysaccharides and Polyhydroxyalkanoates Using Waste Streams: Solid-State Fermentation as an Alternative Approach

Authors: Laura Mejias, Sandra Monteagudo, Oscar Martinez-Avila, Sergio Ponsa

Abstract:

Bioplastics are gaining attention as potential substitutes for conventional fossil-derived plastics and as new components of specialized applications in different industries. They also constitute a sustainable alternative, since they are biodegradable and can be obtained from renewable sources. Thus, agro-industrial wastes appear as potential substrates for bioplastics production using microorganisms, considering they are a suitable source of nutrients, low-cost, and available worldwide. This approach therefore contributes to the biorefinery and circular economy paradigm. The present study assesses solid-state fermentation (SSF) technology for the co-synthesis of exopolysaccharides (EPS) and polyhydroxyalkanoates (PHA), two attractive biodegradable bioplastics, using brewer's spent grain (BSG), a leftover of the brewing industry. After an initial screening of diverse PHA-producing bacteria, Burkholderia cepacia presented the highest EPS and PHA production potential via SSF of BSG. Thus, B. cepacia served to identify the most relevant aspects affecting EPS+PHA co-synthesis at lab scale (100 g). Since these are growth-dependent processes, they were monitored online through oxygen consumption using a dynamic respirometric system, and also by quantifying the biomass production (gravimetrically) and the obtained products (EtOH precipitation for EPS and solid-liquid extraction coupled with GC-FID for PHA). Results showed that B. cepacia grew up to 81 mg per gram of dry BSG (gDM) at 30°C after 96 h, up to 618 times higher than the values found for the other tested strains. The crude EPS production was 53 mg g⁻¹DM (2% carbohydrates), but purity reached 98% after a dialysis purification step. Simultaneously, B. cepacia accumulated up to 36% (dry basis) of the produced biomass as PHA, mainly composed of polyhydroxybutyrate (P3HB).
The maximum PHA production was reached after 48 h with 12.1 mg g⁻¹DM, threefold the levels previously reported using SSF. Moisture content and aeration strategy proved to be the most significant variables affecting the simultaneous production. The results show the potential of co-synthesis via SSF as an attractive alternative to enhance bioprocess feasibility for obtaining these bioplastics in residue-based systems.

Keywords: bioplastics, brewer’s spent grain, circular economy, solid-state fermentation, waste to product

Procedia PDF Downloads 135
1130 Development of Biodegradable Wound Healing Patch of Curcumin

Authors: Abhay Asthana, Shally Toshkhani, Gyati Shilakari

Abstract:

The objective of the present research work is to develop a topical, biodegradable dermal patch formulation to aid accelerated wound healing. Reducing the frequency of dressings while improving drug delivery and overall therapeutic efficacy also benefits patient compliance. In the present study, an optimized formulation using biodegradable components was obtained by evaluating polymers and excipients (HPMC K4M, ethylcellulose, povidone, polyethylene glycol and gelatin) to impart significant folding endurance, elasticity, and strength. Molten gelatin was used to prepare a mixture with ethylene glycol. Chitosan dissolved in acidic medium was mixed, with stirring, into the gelatin mixture. With continued stirring, curcumin was added with the aid of DCM and methanol in an optimized ratio of 60:40 to obtain a homogeneous dispersion. The polymers were dispersed into the final formulation with stirring. The mixture was sonicated and cast to obtain a film. All steps were carried out under strict aseptic conditions. The final formulation was a thin, uniformly smooth-textured film of dark brownish-yellow color. The optimized film showed a folding endurance of around 20 to 21 folds without cracking at RT (23°C). The drug content was in the range of 96 to 102%, and the film passed the content uniformity test. The final moisture content of the optimized formulation film was NMT 9.0%. The films passed stability studies conducted at refrigerated conditions (4 ± 0.2°C) and at room temperature (23 ± 2°C) for 30 days. Further, the drug content and texture remained undisturbed in stability studies conducted at RT (23 ± 2°C) for 45 and 90 days. Percentage cumulative drug release was found to be 80% in 12 h and matched the biodegradation rate as tested in vivo, with correlation factor R² > 0.9. In the in vivo study, one dose of equivalent quantity was applied topically every 2 days.
The data demonstrated a significant improvement in percentage wound contraction relative to the control and the plain drug over the given period. The film-based formulation thus developed shows promising results in terms of stability and in vivo performance.

Keywords: wound healing, biodegradable, polymers, patch

Procedia PDF Downloads 467
1129 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model embeds the physics of electromagnetic wave propagation directly into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises a CNN, a spatial feature channel attention (SFCA) mechanism, and a ConvLSTM with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights through self-adaptation, thereby fine-tuning the feature responses to extract the most pertinent visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of PINNs has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
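The core idea of embedding physics into the training objective can be sketched independently of the GPR specifics. The snippet below is an illustrative numpy sketch, not the authors' model: it combines a data-misfit term with a PDE-residual penalty for a simple diffusion equation, evaluating the residual by finite differences (real PINNs obtain derivatives by automatic differentiation of the network output).

```python
import numpy as np

def physics_informed_loss(u_pred, u_obs, dx, dt, diffusivity, lam=1.0):
    """Composite PINN-style loss (illustrative sketch).

    Penalises both misfit to observed data and violation of the
    diffusion equation u_t = D * u_xx on the interior of a
    (time, space) grid.
    """
    data_loss = np.mean((u_pred - u_obs) ** 2)
    # Forward difference in time, central difference in space
    u_t = (u_pred[1:, 1:-1] - u_pred[:-1, 1:-1]) / dt
    u_xx = (u_pred[:-1, 2:] - 2 * u_pred[:-1, 1:-1] + u_pred[:-1, :-2]) / dx**2
    residual = u_t - diffusivity * u_xx
    return data_loss + lam * np.mean(residual ** 2)

# An exact solution of the diffusion equation scores a near-zero loss
x, t = np.linspace(0, np.pi, 50), np.linspace(0.0, 0.1, 20)
u = np.exp(-t)[:, None] * np.sin(x)[None, :]      # solution for D = 1
loss = physics_informed_loss(u, u, x[1] - x[0], t[1] - t[0], 1.0)
```

During training, minimising such a composite loss steers the network toward predictions that both fit the measurements and obey the governing equation, which is the mechanism the abstract describes for electromagnetic wave propagation.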

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 19
1128 An Impregnated Active Layer Mode of Solution Combustion Synthesis as a Tool for the Solution Combustion Mechanism Investigation

Authors: Zhanna Yermekova, Sergey Roslyakov

Abstract:

Solution combustion synthesis (SCS) is a unique method that has repeatedly proved itself an effective and efficient approach for the versatile synthesis of a variety of materials. It has significant advantages, such as a relatively simple handling process, high rates of product synthesis, mixing of the precursors on a molecular level, and, as a result, fabrication of nanoproducts. Nowadays, the overwhelming majority of solution combustion investigations are performed through volume combustion synthesis (VCS), where the entire liquid precursor is heated until the combustion self-initiates throughout the volume. Fewer experiments have been devoted to the steady-state self-propagating mode of SCS. Under the aforementioned regime, the precursor solution is dried to a gel-like medium, and the gel substance is then locally ignited. In such a case, a combustion wave propagates in a self-sustaining mode, as in conventional solid combustion synthesis. Even less attention is given to the impregnated active layer (IAL) mode of solution combustion. An IAL approach implies that the solution combustion of the precursors is initiated on the surface of, or inside, a third substance. This work aims to emphasize the underestimated role of the impregnated active layer mode of solution combustion synthesis for fundamental studies of combustion mechanisms. It also serves the purpose of popularizing the technical terms and clarifying the differences between them. To this end, the solution combustion synthesis of γ-FeNi (PDF#47-1417) alloy was accomplished within a short (seconds) one-step reaction of metal precursors with hexamethylenetetramine (HMTA) fuel. The idea that Ni plays a special role in the alloy formation process was suggested and confirmed with a specially organized set of experiments.
The first set of experiments was conducted in a conventional steady-state self-propagating mode of SCS. An alloy was synthesized as a single monophasic product. In two other experiments, the synthesis was divided into two independent processes, which is possible under the IAL mode of solution combustion. The sequence of the process was changed according to the equations describing Experiments A and B below. Experiment A: Step 1. Fe(NO₃)₃*9H₂O + HMTA = FeO + gas products; Step 2. FeO + Ni(NO₃)₂*6H₂O + HMTA = Ni + FeO + gas products. Experiment B: Step 1. Ni(NO₃)₂*6H₂O + HMTA = Ni + gas products; Step 2. Ni + Fe(NO₃)₃*9H₂O + HMTA = Fe₃Ni₂ + traces (Ni + FeO). Based on the IAL experiment results, one can see that combustion of Fe(NO₃)₃*9H₂O on the surface of the Ni leads to alloy formation, while the presence of already formed FeO does not affect the Ni(NO₃)₂*6H₂O + HMTA reaction in any way, and Ni remains the main product of the synthesis.

Keywords: alloy, hexamethylenetetramine, impregnated active layer mode, mechanism, solution combustion synthesis

Procedia PDF Downloads 128
1127 The Application of Transcranial Direct Current Stimulation (tDCS) Combined with Traditional Physical Therapy to Address Upper Limb Function in Chronic Stroke: A Case Study

Authors: Najmeh Hoseini

Abstract:

Stroke recovery happens through neuroplasticity, which is highly influenced by the environment, including neuro-rehabilitation. Transcranial direct current stimulation (tDCS) may enhance recovery by modulating neuroplasticity. With tDCS, weak direct currents are applied noninvasively to modify excitability in the cortical areas under its electrodes. Combined with functional activities, this may facilitate motor recovery in neurologic disorders such as stroke. The purpose of this case study was to examine the effect of tDCS combined with 30 minutes of traditional physical therapy (PT) on arm function following a stroke. A 29-year-old male with chronic stroke involving the left middle cerebral artery territory went through the treatment protocol. The design included 5 weeks of treatment: 1 week of traditional PT, 2 weeks of sham tDCS combined with traditional PT, and 2 weeks of tDCS combined with traditional PT. PT included functional electrical stimulation (FES) of the wrist extensors followed by task-specific functional training. Dual-hemispheric tDCS at 1 mA intensity was applied over the sensorimotor cortices for the first 20 min of the treatment, combined with FES. Assessments before and after each treatment block included the Modified Ashworth Scale, the Chedoke-McMaster Arm and Hand Inventory, the Action Research Arm Test (ARAT), and the Box and Blocks Test. Results showed reduced spasticity in the elbow and wrist flexors only after the tDCS combination weeks (+1 to 0). The patient demonstrated clinically meaningful improvements in gross and fine motor control over the duration of the study; however, the components of the ARAT that require fine motor control improved the most during the experimental block. Average time improvement compared to baseline was 26.29 s for the tDCS combination weeks, 18.48 s for sham tDCS, and 6.83 s for the PT standard-of-care weeks.
Combining dual hemispheric tDCS with the standard of care PT demonstrated improvements in hand dexterity greater than PT alone in this patient case.

Keywords: tDCS, stroke, case study, physical therapy

Procedia PDF Downloads 88
1126 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)

Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira

Abstract:

Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods of analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as near-infrared spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding a large amount of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited for NIR, since it can handle many spectra at a time and be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for identification and classification of pure and fiber-fortified semolina samples.
The fiber was added to semolina at two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification accuracy of the samples using LDA was between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
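A minimal sketch of the PCA-then-LDA workflow on synthetic NIR-like spectra might look as follows. The wavelength range, band positions, spectral shifts between classes, and noise level are all invented for illustration; only the analysis pipeline mirrors the chemometric approach described.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
wavelengths = np.linspace(900, 1700, 125)          # nominal NIR range, nm
baseline = np.exp(-((wavelengths - 1200) / 300) ** 2)

def spectra(shift, n=30):
    """Synthetic absorbance spectra: a baseline plus a shifted band plus noise."""
    band = 0.3 * np.exp(-((wavelengths - 1450 - shift) / 40) ** 2)
    return baseline + band + 0.01 * rng.standard_normal((n, wavelengths.size))

# Three classes: pure semolina and two fibre-fortification levels
X = np.vstack([spectra(0), spectra(25), spectra(50)])
y = np.repeat([0, 1, 2], 30)

# PCA compresses the spectra to latent variables; LDA classifies them
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)
```

The PCA step handles the high collinearity of NIR channels, so the LDA that follows operates on a few latent variables rather than on 125 correlated wavelengths, which is the same rationale the abstract gives.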

Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina

Procedia PDF Downloads 203
1125 Addressing Food Grain Losses in India: Energy Trade-Offs and Nutrition Synergies

Authors: Matthew F. Gibson, Narasimha D. Rao, Raphael B. Slade, Joana Portugal Pereira, Joeri Rogelj

Abstract:

Globally, India's population is among the most severely impacted by nutrient deficiency, yet millions of tonnes of food are lost before reaching consumers. Across food groups, grains represent the largest share of daily calories and of overall losses by mass in India. If current losses remain unresolved and follow projected population rates, we estimate that by 2030 losses from grains for human consumption could increase by 1.3-1.8 million tonnes (Mt) per year against current levels of ~10 Mt per year. This study quantifies the energy input needed to minimise storage losses across India, which are responsible for a quarter of grain supply chain losses. In doing so, we identify and explore a Sustainable Development Goal (SDG) triplet between SDG 2, SDG 7, and SDG 12 and provide insight for the development of joined-up agriculture and health policy in the country. Analyzing rice, wheat, maize, bajra, and sorghum, we quantify one route to reduce losses in supply chains by modelling the energy input to maintain favorable climatic conditions in modern silo storage. We quantify key nutrients (calories, protein, zinc, iron, vitamin A) contained within these losses and calculate roughly how much deficiency in these dietary components could be reduced if grain losses were eliminated. Our modelling indicates, with appropriate uncertainty, that maize has the highest energy input intensity for storage, at 110 kWh per tonne of grain (kWh/t), and wheat the lowest (72 kWh/t). This energy trade-off represents 8%-16% of the energy input required in grain production. We estimate that if grain losses across the supply chain were saved and targeted to India's nutritionally deficient population, average protein deficiency could be reduced by 46%, calorie deficiency by 27%, zinc by 26%, and iron by 11%. This study offers insight for the development of Indian agriculture, food, and health policy by first quantifying and then presenting the benefits and trade-offs of tackling food grain losses.

Keywords: energy, food loss, grain storage, hunger, India, sustainable development goal, SDG

Procedia PDF Downloads 121
1124 Re-Evaluating the Hegemony of English Language in West Africa: A Meta-Analysis Review of the Research, 2003-2018

Authors: Oris Tom-Lawyer, Michael Thomas

Abstract:

This paper seeks to analyse the hegemony of the English language in West Africa through the lens of educational policies and the socio-economic functions of the language. It is based on the premise that there is a positive link between the English language and development contexts. The study aims to fill a gap in the research literature by examining the usefulness of hegemony as a concept to explain the role of the English language in the region, thus countering the negative connotations that often accompany it. The study identified four main research questions: i. What are the socio-economic functions of English in francophone/lusophone countries? ii. What factors promote the hegemony of English in anglophone countries? iii. To what extent does English hold hegemony in West Africa? iv. What are the implications of the non-hegemony of English in West Africa? Based on a meta-analysis of the research literature between 2003 and 2018, the findings revealed that in francophone/lusophone countries, English functions in the following socio-economic domains: peacekeeping missions, regional organisations, the commercial and industrial sectors, as an unofficial international language, and as a foreign language. The factors that promote the linguistic hegemony of English in anglophone countries are its status as an official language, a medium of instruction, a lingua franca, a cultural language, the language of politics, the language of commerce, a channel of development, and the language of media and entertainment. In addition, the extent of the hegemony of English in West Africa can be gauged from the factors that contribute to its non-hegemony in the region: the French and Portuguese languages, French culture, neo-colonialism, levels of poverty, and France's economic ties to its former colonies.
Finally, the implications of the non-hegemony of the English language in West Africa are industrial backwardness, high poverty rates, lack of social mobility, high school dropout rates, growing interest in English, limited access to internet information, and a lack of extensive career opportunities. The paper concludes that the hegemony of English has contributed to the development of anglophone countries in West Africa, while in the francophone/lusophone regions of the continent, industrial backwardness and low literacy rates have been consequences of English language marginalisation. In conclusion, the paper makes several recommendations, including the need for the early introduction of English into French curricula as part of a potential solution.

Keywords: developmental tool, English language, linguistic hegemony, West Africa

Procedia PDF Downloads 134
1123 Changing Behaviour in the Digital Era: A Concrete Use Case from the Domain of Health

Authors: Francesca Spagnoli, Shenja van der Graaf, Pieter Ballon

Abstract:

Humans do not behave rationally. We are emotional and easily influenced by others, as well as by our context. The study of human behaviour has become a central endeavour within many academic disciplines, including economics, sociology, and clinical and social psychology. Understanding what motivates humans and triggers them to perform certain activities, and what it takes to change their behaviour, is central to researchers and companies, as well as to policy makers seeking to implement efficient public policies. While numerous theoretical approaches have been developed for diverse domains such as health, retail, and the environment, the methodological models guiding the evaluation of such research long ago reached their limits. Within this context, digitisation, information and communication technologies (ICT) and wearables, the Internet of Things (IoT) connecting networks of devices, and new possibilities to collect and analyse massive amounts of data have made it possible to study behaviour from a realistic perspective, as never before. Digital technologies make it possible to (1) capture data in real-life settings, (2) regain control over data by capturing the context of behaviour, and (3) analyse huge sets of information through continuous measurement. Within this complex context, this paper describes a new framework for initiating behavioural change, capitalising on digital developments in applied research projects and applicable to academia, enterprises, and policy makers alike. By applying this model, behavioural research can be conducted to address the issues of different domains, such as mobility, environment, health, or media. The Modular Behavioural Analysis Approach (MBAA) is described here and validated for the first time through a concrete use case within the domain of health.
The results gathered have proven that disclosing information about health in connection with the use of digital health apps can be a lever for changing behaviour, but it is only a first component requiring further follow-up actions. To this end, a clear definition of different 'behavioural profiles', each targeted with suitable typologies of interventions, is essential to effectively enable behavioural change. In the refined version of the MBAA, a strong focus will be placed on defining a methodology for shaping 'behavioural profiles' and related interventions, as well as on evaluating side-effects on the creation of new business models and sustainability plans.

Keywords: behavioural change, framework, health, nudging, sustainability

Procedia PDF Downloads 213
1122 Development of Antioxidant Rich Bakery Products by Applying Lysine and Maillard Reaction Products

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

Due to the rapidly growing number of conscious customers in recent years, more and more people look for products with positive physiological effects that may contribute to the preservation of their health. In response to these demands, the Food Science Research Institute of Budapest develops and introduces into the market new functional foods of guaranteed positive effect that contain bioactive agents. New, efficient technologies are also elaborated in order to preserve the maximum biological effect of the produced foods. The main objective of our work was the development of new functional biscuits fortified with physiologically beneficial ingredients. Bakery products constitute the base of the food nutrients' pyramid; thus they might be regarded as the foodstuffs consumed in the largest quantity. In addition to the well-known and certified physiological benefits of lysine as an essential amino acid, a series of antioxidant-type compounds is formed as a consequence of the occurring Maillard reaction. The progress of the evoked Maillard reaction was studied by applying diverse sugars (glucose, fructose, saccharose, isosugar) and lysine at several temperatures (120-170°C). The interval of thermal treatment was also varied (10-30 min). The composition and production technologies were tailored in order to reach the maximum possible biological benefit, so as to achieve the highest antioxidant capacity in the biscuits. Of the examined sugar components, the extent of the Maillard-reaction-driven transformation of glucose was the most pronounced at the applied temperatures. For the precise assessment of the antioxidant activity of the products, FRAP and DPPH methods were adapted and optimised. To establish an authentic and extensive mechanism of the occurring transformations, Maillard reaction products were identified, and relevant reaction pathways were revealed.
GC-MS and HPLC-MS techniques were applied for the analysis of the 60 generated MRPs and the characterisation of the actual transformation processes. Three plausible major transformation routes are suggested based on the analytical results and the deduced sequence of possible conversions between lysine and the sugars.

Keywords: Maillard-reaction, lysine, antioxidant activity, GC-MS and HPLC-MS techniques

Procedia PDF Downloads 474
1121 Open Source Cloud Managed Enterprise WiFi

Authors: James Skon, Irina Beshentseva, Michelle Polak

Abstract:

WiFi solutions come in two major classes. The first is Small Office/Home Office (SOHO) WiFi, characterized by inexpensive WiFi routers with one or two service set identifiers (SSIDs) and a single shared passphrase. These access points provide no significant user management or monitoring, and no aggregation of monitoring and control across multiple routers. The other class is managed enterprise WiFi solutions, which involve expensive access points (APs) along with (also costly) local or cloud-based management components. These solutions typically provide portal-based login, per-user virtual local area networks (VLANs), and sophisticated monitoring and control across a large group of APs. The cost of deploying and managing such enterprise solutions is typically about tenfold that of inexpensive consumer APs. Low-revenue organizations such as schools, non-profits, non-governmental organizations (NGOs), small businesses, and even homes cannot easily afford quality enterprise WiFi solutions, though they may need to provide quality WiFi access to their population. Using the available lower-cost WiFi solutions can significantly reduce their ability to provide reliable, secure network access. This project explored and created a new approach for providing secure, managed enterprise WiFi based on low-cost hardware combined with both new and existing (but modified) open source software. The solution provides a cloud-based management interface that allows organizations to aggregate the configuration and management of small, medium, and large WiFi deployments. It utilizes a novel approach to user management, giving each user a unique passphrase. It provides unlimited SSIDs across an unlimited number of WiFi zones, and the ability to place each user (and all their devices) on their own VLAN. With proper configuration, it can even provide user-local services.
It also allows for users' usage and quality of service to be monitored, and for users to be added, enabled, and disabled at will. As inferred above, the ultimate goal is to free organizations with limited resources from the expense of a commercial enterprise WiFi, while providing them with most of the qualities of such a more expensive managed solution at a fraction of the cost.
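One common open-source building block for a per-user-passphrase-plus-VLAN scheme of this kind is hostapd's per-station PSK file, where each entry can carry its own VLAN id. The sketch below generates such a file; the wildcard-MAC line format and the VLAN numbering are assumptions based on hostapd's wpa_psk_file feature, not details taken from the project described above.

```python
import secrets

def make_psk_file(users, vlan_base=100):
    """Generate a hostapd-style per-user PSK file (illustrative sketch).

    Each line maps any client MAC (wildcard all-zeros address) to one
    user's unique passphrase and a dedicated VLAN id. Consult hostapd's
    wpa_psk_file documentation for the authoritative format.
    """
    lines = []
    for i, user in enumerate(users):
        passphrase = secrets.token_urlsafe(12)   # unique per-user secret
        lines.append(f"vlanid={vlan_base + i} 00:00:00:00:00:00 {passphrase}")
    return "\n".join(lines)

print(make_psk_file(["alice", "bob"]))
```

In a real deployment, hostapd would be pointed at the generated file via `wpa_psk_file=` in hostapd.conf with dynamic VLANs enabled, so that a client is identified (and VLAN-assigned) purely by which passphrase it uses.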

Keywords: wifi, enterprise, cloud, managed

Procedia PDF Downloads 87
1120 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that have a geological trend similar to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results: it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties.
We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have similar channel trend with the reference in lowered dimension space.
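The PCA-to-MDS-to-SVM selection pipeline described above can be sketched as follows on synthetic data. This is not the authors' implementation; the permeability fields, the stand-in WOPR misfit, the component count, and the SVM kernel are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 100 candidate channel reservoir models, each a flattened permeability field
n_models, n_cells = 100, 50 * 50
perm = rng.lognormal(mean=3.0, sigma=1.0, size=(n_models, n_cells))

# Step 1: PCA keeps the principal components with the largest eigenvalues
scores = PCA(n_components=10).fit_transform(perm)

# Step 2: project the reduced parameters onto a 2-D plane via MDS
# computed from Euclidean distances
plane = MDS(n_components=2, dissimilarity="euclidean",
            random_state=0).fit_transform(scores)

# Step 3: label the 10 most similar and 10 most dissimilar models by a
# WOPR misfit (a random placeholder here), train an SVM, classify the rest
wopr_error = rng.random(n_models)
order = np.argsort(wopr_error)
train_idx = np.concatenate([order[:10], order[-10:]])  # 10 best + 10 worst
labels = np.array([1] * 10 + [0] * 10)                 # 1 = low WOPR error

clf = SVC(kernel="rbf").fit(plane[train_idx], labels)
test_idx = order[10:-10]
# models predicted to sit on the low-error side are kept for regeneration
selected = test_idx[clf.predict(plane[test_idx]) == 1]
```

Averaging the permeability fields of `selected` would then give the probability map used for regeneration; repeating the loop with regenerated models narrows the ensemble toward the reference channel trend.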

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 149
1119 Navigating Disruption: Key Principles and Innovations in Modern Management for Organizational Success

Authors: Ahmad Haidar

Abstract:

This research paper investigates the concept of modern management, concentrating on the development of managerial practices and the adoption of innovative strategies in response to the fast-changing business landscape brought about by Artificial Intelligence (AI). The study begins by examining the historical context of management theories, tracing the progression from classical to contemporary models and identifying key drivers of change. Through a comprehensive review of existing literature and case studies, this paper provides valuable insights into the principles and practices of modern management, offering a roadmap for organizations aiming to navigate the complexities of the contemporary business world. The paper examines the growing role of digital technology in modern management, focusing on incorporating AI, machine learning, and data analytics to streamline operations and facilitate informed decision-making. Moreover, the research highlights the emergence of new principles, such as adaptability, flexibility, public participation, trust, transparency, and a digital mindset, as crucial components of modern management. The role of business leaders is also investigated through a study of contemporary leadership styles, such as transformational, situational, and servant leadership, emphasizing the significance of emotional intelligence, empathy, and collaboration in fostering a healthy organizational culture. Furthermore, the research delves into the crucial roles of environmental sustainability, corporate social responsibility (CSR), and corporate digital responsibility (CDR) as organizations strive to balance economic growth with ethical considerations and long-term viability. The primary research question for this study is: "What are the key principles, practices, and innovations that define modern management, and how can organizations effectively implement these strategies to thrive in the rapidly changing business landscape?" 
The research contributes to a comprehensive understanding of modern management by examining its historical context, the impact of digital technologies, the importance of contemporary leadership styles, and the role of CSR and CDR in today's business landscape.

Keywords: modern management, digital technology, leadership styles, adaptability, innovation, corporate social responsibility, organizational success, corporate digital responsibility

Procedia PDF Downloads 59
1118 Analyzing the Perception of Social Networking Sites as a Learning Tool among University Students: Case Study of a Business School in India

Authors: Bhaskar Basu

Abstract:

Universities and higher education institutes are finding it increasingly difficult to engage students fruitfully through traditional pedagogic tools. Web 2.0 technologies comprising social networking sites (SNSs) offer a platform for students to collaborate and share information, thereby enhancing their learning experience. Despite the potential and reach of SNSs, their use has been limited in academic settings promoting higher education. The purpose of this paper is to assess the perception of social networking sites among business school students in India and analyze their role in enhancing the quality of student experiences in a business school, leading to the proposal of an agenda for future research. In this study, more than 300 students of a reputed business school were involved in a survey of their preferences among different social networking sites and their perceptions of and attitudes towards these sites. A questionnaire with three major sections was designed, validated, and distributed among a sample of students, the research method being descriptive in nature. Crucial questions were addressed to the students concerning time commitment, reasons for usage, nature of interaction on these sites, and the propensity to share information leading to direct and indirect modes of learning. It was further supplemented with a focus group discussion to analyze the findings. The paper notes the resistance to the adoption of new technology by a section of business school faculty, who are staunch supporters of classical “face-to-face” instruction. In conclusion, social networking sites like Facebook and LinkedIn provide new avenues for students to express themselves and to interact with one another. Universities could take advantage of the new ways in which students are communicating with one another. Although interactive educational options such as Moodle exist, social networking sites are rarely used for academic purposes. 
Using this medium opens new ways of academically oriented interaction in which faculty could discover more about students' interests, and students, in turn, might express and develop hitherto unknown intellectual facets of their lives. This study also highlights the enormous potential of mobile phones as a tool for “blended learning” in business schools going forward.

Keywords: business school, India, learning, social media, social networking, university

Procedia PDF Downloads 255
1117 Optimizing Cell Culture Performance in an Ambr15 Microbioreactor Using Dynamic Flux Balance and Computational Fluid Dynamic Modelling

Authors: William Kelly, Sorelle Veigne, Xianhua Li, Zuyi Huang, Shyamsundar Subramanian, Eugene Schaefer

Abstract:

The ambr15™ bioreactor is a single-use microbioreactor for cell line development and process optimization. The ambr system offers fully automatic liquid handling with the possibility of fed-batch operation and automatic control of pH and oxygen delivery. With operating conditions for large-scale biopharmaceutical production properly scaled down, microbioreactors such as the ambr15™ can potentially be used to predict the effect of process changes such as modified media or different cell lines. In this study, gassing rates and dilution rates were varied for a semi-continuous cell culture system in the ambr15™ bioreactor. The corresponding changes to metabolite production and consumption, as well as cell growth rate and therapeutic protein production, were measured. Conditions were identified in the ambr15™ bioreactor that produced metabolic shifts and specific metabolic and protein production rates also seen in the corresponding larger-scale (5 liter) perfusion process. A dynamic flux balance (DFB) model was employed to understand and predict the metabolic changes observed. The DFB model predicted the trends observed experimentally, including lower specific glucose consumption when CO₂ was maintained at higher levels (i.e., 100 mm Hg) in the broth. A computational fluid dynamic (CFD) model of the ambr15™ was also developed to understand the transfer of O₂ and CO₂ to the liquid. This CFD model predicted gas-liquid flow in the bioreactor using the ANSYS software. The two-phase flow equations were solved via an Eulerian method, with population balance equations tracking the sizes of the gas bubbles resulting from breakage and coalescence. Reasonable results were obtained in that the carbon dioxide mass transfer coefficient (kLa) and the air holdup increased with higher gas flow rate. Volume-averaged kLa values at 500 RPM increased as the gas flow rate was doubled and matched experimentally determined values. 
These results form a solid basis for optimizing the ambr15™, using both CFD and FBA modelling approaches together, for use in microscale simulations of larger scale cell culture processes.
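The volume-averaged kLa values compared above are governed by the standard gas-liquid mass transfer relation dC/dt = kLa(C* − C), which is also how kLa is typically determined experimentally from a dynamic gassing-in trace. A minimal sketch of that relation follows; the kLa and saturation values are assumed for illustration and are not taken from the study.

```python
import numpy as np

def dissolved_gas(t, kla, c_star, c0=0.0):
    """Analytic solution of dC/dt = kLa*(C* - C) during gassing-in."""
    return c_star - (c_star - c0) * np.exp(-kla * t)

kla = 10.0 / 3600.0   # assumed kLa of 10 1/h, expressed in 1/s
c_star = 0.21         # assumed saturation concentration, mol/m^3
t = np.linspace(0.0, 3600.0, 50)
c = dissolved_gas(t, kla, c_star)

# kLa is recovered from a measured trace by linearising:
#   ln((C* - C0) / (C* - C)) = kLa * t
kla_est = np.polyfit(t, np.log((c_star - 0.0) / (c_star - c)), 1)[0]
```

With a real probe trace, the slope of the linearised data gives the volume-averaged kLa that a CFD prediction such as the one above can be validated against.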

Keywords: cell culture, computational fluid dynamics, dynamic flux balance analysis, microbioreactor

Procedia PDF Downloads 270
1116 Exchanges between Literature and Cinema: Scripted Writing in the Novel "Miguel e os Demônios", by Lourenço Mutarelli

Authors: Marilia Correa Parecis De Oliveira

Abstract:

This research looks at the novel Miguel e os demônios (2009), by the contemporary Brazilian author Lourenço Mutarelli, in which the presence of film language resources is remarkable, creating a kind of scripted writing. We intend to analyze the presence of film language in the work under study, in which the characteristics of the novel and screenplay genres are mixed, exploring the aesthetic and meaning effects that the appropriation of a visual language for the creation of a literary text produces in the novel. The objective of this research is to identify and analyze the formal and thematic aspects that characterize the hybridity of literature and film in the novel by Lourenço Mutarelli. The method employed comprises the reading and cataloging of theoretical and critical texts on literary and film theory, a historical review of the author, and an analytical and interpretative reading of the novel. In Miguel e os demônios there is a range of formal and thematic elements of popular narrative genres, such as the detective story and the action film, with a predominance of verb forms in the present tense and of noun phrases - features that tend to make the narrated scenes present, as in the cinema. The novel, in this sense, occupies an intermediate position between the literary text and the pre-film text: although filled with elements proper to the language of film, it cannot be fit categorically into the screenplay genre, since it is not reducible to a script and aspires to be read as a novel. The difficulty of fitting the work into a single genre is therefore not explained by extra-textual factors, such as its publication as a novel; rather, binary classifications serve only to imprison the work under a label, impoverishing not only the reading of the text but also the possibility of recognizing literature as a space of constant dialogue and interaction with other media. 
We can say, therefore, that framing the work Miguel e os demônios in one of the two genres (novel or screenplay) proves insufficient, since the text reveals itself as a hybrid narrative consisting of a kind of scripted writing. In this sense, it is a text born in a society whose daily life is saturated by the audiovisual, to be consumed by readers who, on a growing scale, exchange books for visual narratives. The novel, however, uses film's resources without giving up its constitution as literature; on the contrary, it is enriched visually and linguistically, dialoguing with the complex contemporary horizon marked by the culture industry.

Keywords: Brazilian literature, cinema, Lourenço Mutarelli, screenplay

Procedia PDF Downloads 299
1115 Effect of Women's Autonomy on Unmet Need for Contraception and Family Size in India

Authors: Anshita Sharma

Abstract:

India is one of the countries that initiated family planning with the intention of controlling the growing population by reducing fertility. To this end, India introduced the National Family Planning Programme in 1952. The level of unmet need in India shows a declining trend with the increasing effectiveness of family planning services: in NFHS-1 the unmet need for limiting, spacing, and in total was 46 percent, 14 percent, and 9 percent, respectively. The demand for spacing had reduced to 8 percent, the demand for limiting to 8 percent, and the total unmet need was 16 percent in NFHS-2. The total unmet need had reduced further to 13 percent in NFHS-3 for all currently married women, with the demand for limiting and spacing at 7 percent and 6 percent, respectively. Despite this progress, a sizeable group of women remains unable to control unintended and unwanted pregnancies. The present paper examines the socio-cultural, economic, and demographic correlates of unmet need for contraception in India. It also examines the effect of women's autonomy and unmet need for contraception on family size among different socio-economic groups of the population. It uses data from the National Family Health Survey-3, carried out in 2005-06, and employs bivariate and multivariate techniques for analysis. Multiple regression analysis was carried out to assess the strength and direction of the relationships among various socio-economic and demographic factors. The results reveal that women with higher levels of education and economic status have a low level of unmet need for family planning. Women living in non-nuclear families have high unmet need for spacing, women living in nuclear families have high unmet need for limiting, and family size is slightly higher among women in nuclear families. 
In India, the level of autonomy varies across the life course; women of higher age usually enjoy greater autonomy than junior female members of the family. The findings show that women with higher autonomy have larger families, whereas women with low autonomy have smaller families. Unmet need for family planning decreases with women's increasing exposure to mass media. Demographic factors such as the experience of child loss are directly related to family size: women who have experienced higher child loss have low unmet need for spacing and limiting. Thus, it is established that women's autonomy plays a substantial role in fulfilling the demand for contraception for limiting and spacing, which in turn affects family size.
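The kind of multiple regression used to relate unmet need to socio-economic covariates can be sketched as follows. The data here are synthetic, not NFHS-3, and the variable codings and coefficients are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
education = rng.integers(0, 15, n).astype(float)  # years of schooling (assumed coding)
autonomy = rng.random(n)                          # autonomy index in [0, 1] (assumed)
wealth = rng.random(n)                            # wealth index in [0, 1] (assumed)

# Synthetic outcome built so that unmet need declines with education,
# autonomy, and wealth, mirroring the direction reported in the abstract
unmet = (0.6 - 0.02 * education - 0.2 * autonomy - 0.1 * wealth
         + rng.normal(0.0, 0.05, n))

# Ordinary least squares via the normal equations (intercept in column 0)
X = np.column_stack([np.ones(n), education, autonomy, wealth])
beta, *_ = np.linalg.lstsq(X, unmet, rcond=None)
```

On real survey data one would add the remaining correlates (family structure, media exposure, child loss) as further columns of `X` and inspect the sign and significance of each coefficient.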

Keywords: family size, socio-economic correlates, unmet need for limiting, unmet need for spacing, women's autonomy

Procedia PDF Downloads 262