8601 Usability Guidelines for Arab E-Government Websites
Authors: Omyma Alosaimi, Asma Alsumait
Abstract:
The website developer and designer should follow usability guidelines to provide a user-friendly interface. Many guidelines and heuristics have been developed by previous studies to help both the developer and the designer in this task, but e-government websites are special cases that require specialized guidelines. This paper introduces a set of eighteen guidelines for evaluating the usability of e-government websites in general and Arabic e-government websites specifically, along with a checklist of how to apply them. The validity and effectiveness of these guidelines were evaluated against a variety of user characteristics. The results indicated that the proposed set of guidelines can be used to identify qualitative similarities and differences with user testing and that the new set is best suited for evaluating general and e-governmental usability.
Keywords: e-government, human computer interaction, usability evaluation, usability guidelines
Procedia PDF Downloads 395
8600 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to look for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate the heat effects using energy equations for the entire cell. The model assumes that the system is steady; the inlet reactants are ideal gases; the flow is laminar; and the porous layers such as the diffusion layer, catalyst layer and PEM are isotropic. The model includes continuity, momentum and species equations for gaseous species, liquid water transport equations in the channels, gas diffusion layers, and catalyst layers, a water transport equation in the membrane, and electron and proton transport equations. The Butler-Volmer equation was used to describe electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case with all channel heights and widths set at 1 mm yields Pcell = 7260 W m⁻². The optimal design displays a tapered characteristic for channels 1, 3 and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W m⁻², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that avoids a large loss in cell performance but is easily manufactured was also tested. The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
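As a rough illustration of the optimization loop described above, the sketch below applies a simplified Fletcher-Reeves conjugate-gradient ascent with numerical gradients to a placeholder power-density function. The surrogate objective, the "optimal" reference geometry, and the 0.1-2.0 mm bounds are assumptions made for the example; in the study, each objective evaluation would come from the full 3D, two-phase, non-isothermal fuel cell model.

```python
import numpy as np

def cell_power_density(x):
    """Placeholder objective (W/m^2). In the study this value comes from the full 3D,
    two-phase, non-isothermal fuel cell model; the smooth surrogate below is only
    illustrative, with an assumed 'optimal' geometry stored in ref."""
    ref = np.array([0.6, 1.2, 0.8, 0.7, 1.3,   # assumed optimal heights H1-H5 (mm)
                    0.9, 1.1, 1.0, 1.2])       # assumed optimal widths W2-W5 (mm)
    return 7260.0 + 1600.0 * np.exp(-np.sum((x - ref) ** 2))

def numerical_gradient(f, x, h=1e-4):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def simplified_conjugate_gradient(f, x0, iters=100, step=0.02):
    """Fletcher-Reeves conjugate-gradient ascent with a fixed step length (simplified)."""
    x, g = x0.copy(), numerical_gradient(f, x0)
    d = g.copy()
    for _ in range(iters):
        # Move a fixed distance along the (normalized) conjugate direction, within assumed bounds.
        x = np.clip(x + step * d / (np.linalg.norm(d) + 1e-12), 0.1, 2.0)
        g_new = numerical_gradient(f, x)
        beta = g_new @ g_new / (g @ g + 1e-12)
        d, g = g_new + beta * d, g_new
    return x, f(x)

x0 = np.ones(9)                                 # basic case: all heights and widths at 1 mm
x_opt, p_opt = simplified_conjugate_gradient(cell_power_density, x0)
print(f"Optimized geometry (mm): {np.round(x_opt, 2)}  ->  Pcell ~ {p_opt:.0f} W/m^2")
```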
Procedia PDF Downloads 296
8599 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis
Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini
Abstract:
Environmental contamination is a major problem being faced by society today. Industrial, agricultural, and domestic wastes, which have grown with the rapid development of technology, are discharged into several receiving bodies. Generally, this discharge is directed to the nearest water sources such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world. They contain different types of synthetic dyes which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique due to its ease of use and low cost compared to other decolorization processes, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potential of Brahea edulis (BE) for the removal of the synthetic dye Yellow Bemacid (YB) from aqueous solutions. The results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out under various parameters such as adsorbent mass, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) were tested with the pseudo-first-order and pseudo-second-order kinetic models. Thermodynamic parameters including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS revealed that the adsorption of YB on BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed using the Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at a pH value of around 2 for YB. The equilibrium uptake increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). Biosorption kinetic data were properly fitted with the pseudo-second-order kinetic model. The best fit was obtained by the Langmuir model with a high correlation coefficient (R² > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.
Keywords: adsorption, Brahea edulis, isotherm, yellow Bemacid
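For readers who want to reproduce this kind of isotherm analysis, a minimal sketch is given below: it fits the Langmuir model with SciPy's curve_fit and also defines the pseudo-second-order kinetic form used in the study. The equilibrium data points are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g); the paper's raw data are not reproduced here.
Ce = np.array([5.0, 15.0, 30.0, 60.0, 90.0, 120.0])
qe = np.array([8.1, 17.5, 24.9, 30.8, 33.6, 35.2])

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def pseudo_second_order(t, qe_fit, k2):
    # Kinetic step would be fitted the same way, with time-series data: qt = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe_fit**2 * t / (1.0 + k2 * qe_fit * t)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[35.0, 0.05])
residuals = qe - langmuir(Ce, qmax, KL)
r2 = 1.0 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"Langmuir fit: qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg, R^2 = {r2:.4f}")
```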
Procedia PDF Downloads 177
8598 Economic Analysis of Endogenous Growth Model with ICT Capital
Authors: Shoji Katagiri, Hugang Han
Abstract:
This paper clarifies the role of ICT capital in economic growth. Although ICT contributes remarkably to economic growth, there are few theoretical studies on ICT capital in the ICT sector. In this paper, a production function for ICT, which is used as an intermediate input in both the final-good and ICT sectors, is incorporated into our model. In this setting, we analyze the role of ICT on the balanced growth path and show the possibility of general equilibrium solutions for this model. Through simulation of the equilibrium solutions, we find that for ICT to have an impact on the economy and raise economic growth, efficiency gains in the ICT sector and the accumulation of both non-ICT and ICT capital must occur simultaneously.
Keywords: endogenous economic growth, ICT, intensity, capital accumulation
Procedia PDF Downloads 455
8597 MB-SLAM: A SLAM Framework for Construction Monitoring
Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han
Abstract:
Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To effectively use SLAM for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real-time and ultimately reduce rework. Also, registering SLAM to BIM in real-time can boost the accuracy of SLAM, since SLAM can use features from both images and 3D models. However, registering SLAM with the BIM in real-time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real-time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera’s image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each SLAM video sequence keyframe is registered to the BIM in real-time by aligning the keyframe’s perspective with the equivalent BIM view. The alignment method is based on perspective detection that estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later improved during a real-time gradient descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation process demonstrated promising results: it accurately registered SLAM to BIM and significantly improved the SLAM’s localization accuracy. Besides, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial usage, which aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment. MB-SLAM further advances SLAM toward practical use.
Keywords: perspective alignment, progress monitoring, SLAM, stereo matching
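The perspective-alignment idea can be illustrated with a small sketch: given vanishing directions detected in a keyframe and the corresponding BIM axes, a gradient-descent refinement of the camera rotation minimizes their angular misalignment. The simulated ground-truth orientation, the rough pose offset, and the step sizes below are assumptions for the example; the actual MB-SLAM pipeline refines full keyframe poses against rendered BIM views, which is not reproduced here.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Simulated ground truth: the keyframe's true orientation relative to the BIM axes (assumed values).
true_rot = R.from_euler('xyz', [12.0, -8.0, 25.0], degrees=True)
v_obs = true_rot.apply(np.eye(3))            # 'detected' vanishing directions in the keyframe
v_bim = np.eye(3)                            # corresponding BIM model axes

def alignment_cost(rotvec):
    """Sum of (1 - cos) misalignments between rotated BIM axes and observed vanishing directions."""
    Rm = R.from_rotvec(rotvec).as_matrix()
    return np.sum(1.0 - np.sum((v_bim @ Rm.T) * v_obs, axis=1))

def refine_rotation(rotvec, lr=0.2, iters=200, h=1e-5):
    """Gradient-descent refinement of the keyframe rotation (numerical gradient, illustrative only)."""
    rotvec = rotvec.copy()
    for _ in range(iters):
        grad = np.array([(alignment_cost(rotvec + h * e) - alignment_cost(rotvec - h * e)) / (2 * h)
                         for e in np.eye(3)])
        rotvec -= lr * grad
    return rotvec

rough = true_rot.as_rotvec() + np.array([0.10, -0.08, 0.12])   # rough pose from a SLAM front-end (assumed)
refined = refine_rotation(rough)
print("True  (deg):", np.round(true_rot.as_euler('xyz', degrees=True), 2))
print("Found (deg):", np.round(R.from_rotvec(refined).as_euler('xyz', degrees=True), 2))
```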
Procedia PDF Downloads 225
8596 Yield, Economics and ICBR of Different IPM Modules in Bt Cotton in Maharashtra
Authors: N. K. Bhute, B. B. Bhosle, D. G. More, B. V. Bhede
Abstract:
The field experiments were conducted during the kharif season of 2007-08 at the experimental farm of the Department of Agricultural Entomology, Vasantrao Naik Marathwada Krishi Vidyapeeth. Studies on the evaluation of different IPM modules for Bt cotton in relation to yield, economics and ICBR revealed that the MAU and CICR IPM modules proved superior. They were, however, on par with chemical control. Considering the ICBR and safety to natural enemies, an inference can be drawn that Bt cotton combined with an IPM module is the most ideal combination. Besides reducing insecticide use, it is also expected to ensure favourable ecological and economic returns, in contrast to the adverse effects of conventional insecticides. The IPM approach, which takes care of varying pest situations, appears to be essential for gaining higher advantage from Bt cotton.
Keywords: yield, economics, ICBR, IPM modules, Bt cotton
Procedia PDF Downloads 268
8595 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads
Authors: Raja Umer Sajjad, Chang Hee Lee
Abstract:
Stormwater runoff is the leading contributor to pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of the monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012-2014) from a mixed land-use site located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed through the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV) and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implication of sampling time on monitoring results, the number of samples required during a storm event and the impact of seasonal first flush were also identified. Based on the observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus and heavy metals like lead, chromium, and copper, whereas Chemical Oxygen Demand (COD) was identified as a surrogate for organic matter. The CVs of the monitored water quality parameters were high (ranging from 3.8 to 15.5). This suggests that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was only 2% between two different sample-size approaches, i.e., 17 samples per storm event versus 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of the storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters
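A minimal sketch of the surrogate-selection step is shown below: standardized event-mean concentrations are projected with PCA, and parameters that load together with TSS on the first component are surrogate candidates; per-parameter CVs are also computed. The data frame is synthetic and only mimics the kind of correlation reported (TSS with turbidity, TP and a heavy metal); it is not the Geumhak monitoring data, and the study itself used SPSS rather than Python.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical event-mean concentrations; column names follow the abstract, values are made up.
rng = np.random.default_rng(0)
n = 40
tss = rng.lognormal(4.0, 0.8, n)
data = pd.DataFrame({
    "TSS": tss,
    "Turbidity": 0.9 * tss + rng.normal(0, 10, n),
    "TP": 0.002 * tss + rng.normal(0, 0.05, n),
    "Pb": 0.0005 * tss + rng.normal(0, 0.01, n),
    "COD": rng.lognormal(3.5, 0.6, n),
})

pca = PCA(n_components=2).fit(StandardScaler().fit_transform(data))
loadings = pd.DataFrame(pca.components_.T, index=data.columns, columns=["PC1", "PC2"])
print(loadings.round(2))           # parameters loading together with TSS on PC1 are surrogate candidates
print((data.std() / data.mean()).round(2))   # coefficient of variation (CV) per parameter
```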
Procedia PDF Downloads 240
8594 Structural Health Assessment of a Masonry Bridge Using Wireless Sensors
Authors: Nalluri Lakshmi Ramu, C. Venkat Nihit, Narayana Kumar, Dillep
Abstract:
Masonry bridges are iconic heritage transportation infrastructure throughout the world. Continuous increases in traffic loads and speeds have kept engineers in a dilemma about their structural performance and capacity. Hence, the research community urgently needs to propose an effective methodology and validate it on real-world bridges. The presented research aims to assess the structural health of an eighty-year-old masonry railway bridge in India using wireless accelerometer sensors. The bridge consists of 44 spans of 24.2 m each, and each pier is 13 m tall, laid on a well foundation. To calculate the dynamic characteristic properties of the bridge, ambient vibrations were recorded from moving traffic at various speeds, and these were compared with a three-dimensional numerical model developed in finite element-based software. Conclusions about the weaker or deteriorated piers are drawn from the comparison of frequencies obtained from the experimental tests conducted on alternate spans. Masonry is a heterogeneous anisotropic material made up of incoherent materials (such as bricks, stones, and blocks). It is most likely the earliest widely used construction material. Masonry bridges, which were typically constructed of brick and stone, are still a key feature of the world's highway and railway networks. There are 147,523 railway bridges across India, and about 15% of these bridges are built of masonry and are around 80 to 100 years old. The cultural significance of masonry bridges cannot be overstated. These bridges are considered complicated structures due to the presence of arches, spandrel walls, piers, foundations, and soils. Traffic loads and vibrations, wind, rain, frost attack, high/low temperature cycles, moisture, earthquakes, river overflows, floods, scour, and soil movement under their foundations may cause material deterioration, opening of joints and ring separation in arch barrels, cracks in piers, loss of bricks, stones and mortar joints, and distortion of the arch profile. A few NDT tests, such as the flat jack test, are employed to assess the homogeneity and durability of masonry structures; however, these tests have many drawbacks. A modern approach to the structural health assessment of masonry structures through vibration analysis, modal frequencies and stiffness properties is explored in this paper.
Keywords: masonry bridges, condition assessment, wireless sensors, numerical analysis, modal frequencies
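As an illustration of how modal frequencies can be extracted from wireless accelerometer records, the sketch below applies Welch's power spectral density estimate and simple peak picking to a synthetic ambient-vibration signal. The sampling rate and the two embedded modes are assumptions, not values measured on the bridge; comparing such identified frequencies span by span with the finite element model is the step described in the abstract.

```python
import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                          # assumed sampling rate of the wireless accelerometer (Hz)
t = np.arange(0, 120, 1 / fs)       # 2 minutes of ambient recording

# Synthetic ambient response: two assumed modes (3.2 Hz and 8.7 Hz) buried in broadband noise.
accel = (0.05 * np.sin(2 * np.pi * 3.2 * t)
         + 0.03 * np.sin(2 * np.pi * 8.7 * t)
         + 0.05 * np.random.default_rng(1).normal(size=t.size))

f, pxx = welch(accel, fs=fs, nperseg=4096)          # averaged power spectral density
peaks, _ = find_peaks(pxx, height=0.1 * pxx.max())  # keep only dominant spectral peaks
print("Identified modal frequencies (Hz):", np.round(f[peaks], 2))
```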
Procedia PDF Downloads 169
8593 Music and Movies: Story about a Suicide
Authors: Karen V. Lee
Abstract:
The background and significance of this study involve an autoethnographic story that shares research results about how music and movies influenced the suicide of a new music teacher working in a public school. The performative narrative duet demonstrates how music and movies highlight social issues when the new teacher cannot cope with allegations surrounding professional issues. Both university advisors are drawn into deep reflection about the wider political issues that arise around the transition from the student-teacher internship process to the teaching career, given the stark reality of the teaching profession in the 21st century. This performance of story and music creates a transformative composition of reading, hearing, and feeling while provoking visceral and emotional responses. Sometimes, young teachers are forced to take a leave of absence to reflect upon their practice with adolescents. In this extreme circumstance, the outcome was suicide. The qualitative research method involves an autoethnographic story, as the author is methodologist, theoretician, and participant. Sub-themes surround film and music education and how movie resources influenced his tragic, misguided decision regarding social, emotional, physical, spiritual, and practical strategies to cope with the allegations. Major findings from this story demonstrate how lived experiences can underline the importance of providing more education and resources to new teachers. The research provides a substantive contribution and aesthetic merit, as the impact of movies and music influenced the suicide. The reflexive account of storied sensory experiences situated in cultural settings becomes a way to describe and seek verisimilitude by evoking lifelike and believable feelings from others. Sadly, the circumstance surrounding the story, involving allegations of a teacher sexually harassing a student, is not uncommon in society. However, the young teacher never received counseling to cope with the allegations but instead was influenced by music and movies and opted for suicide. In conclusion, the story has implications for film and media studies, as music and movies can encourage a moral mission to empower individuals experiencing despair and emotional impairment to embrace professional support to assist with the emotional and legal challenges encountered in the field of teaching. It is through media studies that education and awareness surrounding suicide can disseminate information about this tragic outcome.
Keywords: music, movies, suicide, narrative, autoethnography
Procedia PDF Downloads 230
8592 Leadership Styles and Adoption of Risk Governance in Insurance and Energy Industry: A Comparative Case Study
Authors: Ruchi Agarwal
Abstract:
In today’s world, companies are operating in dynamic, uncertain and ambiguous business environments. Globally, more companies than ever are failing due to Environmental, Social and Governance (ESG) factors. Corporate governance and risk management are intertwined in nature. For decades, corporate governance and risk management have been influenced by internal and external factors. Three schools of thought have influenced risk governance for decades: agency theory, contingency theory, and institutional theory. Agency theory argues that agents have interests conflicting with principals' interests and that an information problem exists between them. Contingency theory suggests that risk management adoption is influenced by internal and external factors, while institutional theory suggests that organizations legitimize risk management with regulators, competitors, and professional bodies. The conflicting objectives of these theories have created problems for executives in the adoption of risk governance. So far, many studies have discussed risk culture and the role of actors in risk governance, but studies discussing the role of risk culture in the adoption of risk governance from a leadership-style perspective are rare. This study explores the adoption of risk governance in two contrasting industries, insurance and energy, to understand whether risk governance is influenced by internal/external factors or whether risk culture is influenced by leaders. We draw empirical evidence by comparing the cases of an Indian insurance company and a renewable energy-based firm in India. We interviewed more than 20 senior executives of the companies and collected annual reports, risk management policies, and more than 10 presentations (PPTs) and other reports from 2017 to 2024. We visited the companies for follow-up questions several times. The findings revealed that both companies have used risk governance for strategic renewal. The insurance company uses a transactional leadership style based on performance and reward for improving risk, while the energy company uses rather symbolic management to make debt restructuring meaningful for stakeholders. Overall, both companies turned from loss-making to profitable ones in a few years. This comparative study highlights the role of different leadership styles in the adoption of risk governance. The study is also distinct because previous research has rarely studied risk governance in two contrasting industries with reference to leadership styles.
Keywords: leadership style, corporate governance, risk management, risk culture, strategic renewal
Procedia PDF Downloads 48
8591 Biophilic Design Strategies: Four Case-Studies from Northern Europe
Authors: Carmen García Sánchez
Abstract:
The UN's 17 Sustainable Development Goals (specifically No. 3 and No. 11) urgently call for new architectural design solutions at different design scales to increase human contact with nature and promote the health and wellbeing of primarily urban communities. The discipline of interior design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging and under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s, a golden age of Danish modern architecture, left a leading legacy that has greatly influenced the domestic sphere and has further led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and her variations over time. The case studies unveil both memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies with particular traditional Japanese architecture in terms of this nature interaction inform the research. They embody prevailing lessons for our time today. The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case studies contribute to the connection between humans and nature, after conducting fieldwork throughout varying seasons to document understanding of nature's transformations through multi-sensory perception (via sight, touch, sound, smell, time and movement) as a core research strategy. The cases' most outstanding features have been studied according to the following key parameters: 1. Space: 1.1. Relationships (itineraries); 1.2. Measures/scale. 2. Context: Landscape reading in different weather/seasonal conditions. 3. Tectonics: 3.1. Constructive joints, element assembly; 3.2. Structural order. 4. Materiality: 4.1. Finishes; 4.2. Colors; 4.3. Tactile qualities. 5. Daylight interplay. Departing from an artistic-scientific exploration, this groundbreaking study provides sustainable practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge but to phase in biophilic design in the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with nature.
Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture
Procedia PDF Downloads 100
8590 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip
Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar
Abstract:
Advances in lab-on-a-chip (LOC) devices have enabled significant progress in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied to a wide range of biological and environmental applications. A popular form of DEP device performs continuous manipulation of particles using co-planar slanted electrodes and utilizes a sheath flow to focus the particles into one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes, the width, length and height of the microchannel, the size of the particles, and the throughput. In this study, COMSOL Multiphysics® modeling along with experimental studies is used to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modelled in COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were established, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experimental studies were run to find the effect of the flow rate, geometrical parameters of the microchannel such as length, width, and height, as well as the electrodes’ angle, on the displacement of 5 µm, 10 µm and 15 µm polystyrene particles. An empirical equation is developed to predict the displacement of the particles under different conditions. It is shown that the displacement of the particles is greater for longer and shallower channels, lower flow rates, and bigger particles. On the other hand, the effect of the angle of the electrodes on the displacement of the particles was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
Keywords: COMSOL Multiphysics, dielectrophoresis, microfluidics, particle separation
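The size dependence exploited here can be illustrated with the standard expression for the time-averaged DEP force on a spherical particle, F = 2πε_m r³ Re[K(ω)] ∇|E|², with the Clausius-Mossotti factor K(ω). In the sketch below, the material properties, frequency and field gradient are assumed values for polystyrene beads in a low-conductivity aqueous medium, not COMSOL outputs from the study; a negative Re[K] corresponds to the nDEP deflection described above.

```python
import numpy as np

eps0 = 8.854e-12  # vacuum permittivity (F/m)

def clausius_mossotti(eps_p, eps_m, sigma_p, sigma_m, freq):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere."""
    w = 2 * np.pi * freq
    ep = eps_p * eps0 - 1j * sigma_p / w     # complex permittivity of the particle
    em = eps_m * eps0 - 1j * sigma_m / w     # complex permittivity of the medium
    return ((ep - em) / (ep + 2 * em)).real

def dep_force(radius, eps_m, re_K, grad_E2):
    """Time-averaged DEP force: F = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2)."""
    return 2 * np.pi * eps_m * eps0 * radius**3 * re_K * grad_E2

# Illustrative (assumed) values: polystyrene beads in a low-conductivity aqueous medium at 1 MHz.
re_K = clausius_mossotti(eps_p=2.55, eps_m=78.0, sigma_p=1e-3, sigma_m=2e-3, freq=1e6)
for d_um in (5, 10, 15):                                  # particle diameters from the study (um)
    F = dep_force((d_um / 2) * 1e-6, 78.0, re_K, grad_E2=1e13)  # grad|E|^2 in V^2/m^3 (assumed)
    print(f"{d_um} um particle: Re[K] = {re_K:.2f}, F_DEP = {F:.2e} N (negative => nDEP)")
```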
Procedia PDF Downloads 186
8589 Climate Variations and Fishers
Authors: S. Surapa Raju
Abstract:
In Andhra Pradesh, the symptoms of climate variations in coastal villages can be observed from various studies. The Andhra Pradesh coast is known for its frequent tropical cyclones and associated floods and tidal surges, causing loss of life and property in the region. In the last decade alone, the state experienced 18 devastating storms causing huge losses to coastal people. The year 2007 was the fourth warmest year on record since 1901, and 2009 witnessed heat wave conditions prevailing over coastal Andhra Pradesh. With regard to sea level rise (SLR), 43 percent of the coastal areas are considered to be at high risk. The main objectives of the study are: to know the perceptions of fisher people on climate variations, and to find out the awareness of fisher people of climate variations and their effects on the village and on fishing households. Altogether, 150 households were chosen purposively for this study, and information was collected from the households based on a semi-structured schedule. The present field-based study observed that most fisher people have experienced changes due to climate variations in their villages. First-generation fisher people stated that at least half a kilometre of sea erosion has taken place over the last 20 years and that most of them were displaced. With regard to fishing activities, first-generation fisher people revealed that 20 years back they were fishing in near-shore areas, but now the availability of near-shore fish has decreased to a large extent. The present study observed large variations in the growth of species in the marine districts of Andhra Pradesh from 2005 to 2010. Some species, such as silver pomfret, sole (flat fish), Chirocentrus, Thrissocles, skates, rays, etc., are in decline. The results of the study indicate large variation in the growth rates of fish species. Small and traditional fishers have been affected more drastically in El Niño years than in normal years, as they do not own suitable equipment such as crafts and nets. The study found that many changes have taken place in fishing activities: fishers travel longer distances for fishing, which increases the cost of fishing operations, and fish catches have decreased. There is a need to take up in-depth studies in the marine villages and to tackle the situation by creating more awareness about the negative effects of climate variations among fishing households. Suitable fishing craft technology should be supplied, and more employment opportunities should be created for fishers outside the fishery sector.
Keywords: climate, Andhra Pradesh, El Niño years, India
Procedia PDF Downloads 421
8588 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) solves these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e., the agent-environment interaction, as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
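The agent-environment formulation can be sketched as a small gym-style loop. The environment below is a simplified stand-in (random-walk prices, random indicator and sentiment features, a flat transaction-cost rate), not the authors' environment; a TD3 implementation such as stable-baselines3's TD3 could be trained on it once the class is wrapped as a gym.Env, which is omitted here.

```python
import numpy as np

class TradingEnv:
    """Minimal sketch of a POMDP-style trading environment (not the paper's exact design).
    State: per-asset technical indicators plus a sentiment score and current weights;
    action: continuous target portfolio weights in [-1, 1], re-normalized each step."""

    def __init__(self, prices, indicators, sentiment, cost=0.001, cash=1e6):
        self.prices, self.indicators, self.sentiment = prices, indicators, sentiment
        self.cost, self.cash0 = cost, cash

    def reset(self):
        self.t, self.value, self.weights = 0, self.cash0, np.zeros(self.prices.shape[1])
        return self._state()

    def _state(self):
        return np.concatenate([self.indicators[self.t].ravel(),
                               self.sentiment[self.t].ravel(),
                               self.weights])

    def step(self, action):
        target = np.clip(action, -1, 1)
        target = target / (np.sum(np.abs(target)) + 1e-8)       # leverage-1 allocation
        turnover = np.sum(np.abs(target - self.weights))
        self.value *= 1 - self.cost * turnover                  # transaction costs
        ret = self.prices[self.t + 1] / self.prices[self.t] - 1.0
        self.value *= 1 + float(target @ ret)
        reward = np.log(1 + float(target @ ret)) - self.cost * turnover
        self.weights, self.t = target, self.t + 1
        done = self.t >= len(self.prices) - 1
        return self._state(), reward, done, {}

# Random-walk demo data for 3 assets; real prices, 10 indicators and news sentiment would replace these.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.01, (500, 3)), axis=0)
env = TradingEnv(prices, rng.normal(size=(500, 3, 10)), rng.normal(size=(500, 3)))
state, done = env.reset(), False
while not done:                                  # random policy roll-out, just to show the interface
    state, reward, done, _ = env.step(rng.uniform(-1, 1, 3))
print(f"Final portfolio value: {env.value:,.0f}")
```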
Procedia PDF Downloads 178
8587 The Critical Relevance of Credit and Debt Data in Household Food Security Analysis: The Risks of Ineffective Response Actions
Authors: Siddharth Krishnaswamy
Abstract:
Problem Statement: Currently, when analyzing household food security, the most commonly studied food access indicators are household income and expenditure. Larger studies do take into account other indices such as credit and employment, but these are baseline studies and, by definition, are conducted infrequently. Food security analysis of access is usually dedicated to analyzing income and expenditure indicators, and both of these indicators are notoriously inconsistent. Yet this data can very often end up being the basis on which household food access is calculated and, by extension, be used for decision making. Objectives: This paper argues that, along with income and expenditure, credit and debt information should be collected so that an accurate analysis of household food security (and in particular food access) can be determined. The lack of routine collection and analysis of this information often means there is a “masking” of the actual situation; a household’s food access and food availability patterns may be adequate mainly as a result of borrowing and may even be due to a long-term dependency (a debt cycle). In other words, such a household is, in reality, worse off than it appears, a factor masked by its performance on basic access indicators. Procedures/methodologies/approaches: Existing food security data sets collected in 2005 in Azerbaijan, in 2010 across Myanmar and in 2014-15 across Uganda were used to support the theory that analyzing household income and expenditure alone, versus analyzing the same in addition to data on credit and borrowing patterns, results in an entirely different picture of household food access. Furthermore, the data analyzed depict food consumption patterns across groups of households and relate these to the extent of dependency on credit, i.e., households borrowing money in order to meet food needs. Finally, response options that were based on analyzing only income and expenditure, and response options based on income, expenditure, credit, and borrowing, from the same geographical area of operation, are studied and discussed. Results: The purpose of this work was to see if existing methods of household food security analysis could be improved. It is hoped that food security analysts will collect household-level information on credit and debt and analyze it against income, expenditure and consumption patterns. This will help determine if a household’s food access and availability are dependent on unsustainable strategies such as borrowing money for food or carrying sustained debts. Conclusions: The results clearly show the amount of relevant information that is missing in food access analysis if household debt and borrowing are not analyzed along with the typical food access indicators, and the serious repercussions this has on programmatic response and interventions.
Keywords: analysis, food security indicators, response, resilience analysis
Procedia PDF Downloads 331
8586 An Approach for Estimating Open Education Resources Textbook Savings: A Case Study
Authors: Anna Ching-Yu Wong
Abstract:
Introduction: Textbooks account for a sizable portion of the overall cost of higher education for students. It is broadly accepted that open educational resources (OER) reduce textbook costs and provide students a way to receive high-quality learning materials at little or no cost to them. However, there is less agreement over exactly how much is saved. This study presents an approach for calculating OER savings by using SUNY Canton non-OER courses (N=233) to estimate the potential textbook savings for one semester, Fall 2022. The purpose of collecting these data is to understand how much could potentially be saved by using OER materials and to have a record for further studies. Literature Reviews: In past years, researchers identified how the rising cost of textbooks disproportionately harms students in higher education institutions and estimated the average cost of a textbook. For example, Nyamweya (2018) found that, on average, students save $116.94 per course when OER is adopted in place of traditional commercial textbooks, using a simple formula. Student PIRGs (2015) used reports of per-course savings when transforming a course from using a commercial textbook to OER to reach an estimate of $100 average cost savings per course. Allen and Wiley (2016) presented multiple cost-savings studies at the 2016 Open Education Conference and concluded that $100 was a reasonable per-course savings estimate. Ruth (2018) calculated the average cost of a textbook at $79.37 per course. Hilton et al. (2014) conducted a study with seven community colleges across the nation and found the average textbook cost to be $90.61. There is less agreement over exactly how much would be saved by adopting an OER course. This study used SUNY Canton as a case study to create an approach for estimating OER savings. Methodology: Step one: identify non-OER courses from the UcanWeb class schedule. Step two: view textbook lists for the classes (campus bookstore prices). Step three: calculate the average textbook prices by averaging the new book and used book prices. Step four: multiply the average textbook prices by the number of students in the course. Findings: The result of this calculation was straightforward. The average price of a traditional textbook is $132.45. Students potentially saved $1,091,879.94. Conclusion: (1) The result confirms what we have known: adopting OER in place of traditional textbooks and materials achieves significant savings for students, as well as for the parents and taxpayers who support them through grants and loans. (2) The average textbook savings from adopting an OER course varies depending on the size of the college as well as the number of enrolled students.
Keywords: textbook savings, open textbooks, textbook costs assessment, open access
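The four-step approach can be reproduced in a few lines of code, as in the sketch below. The course names, prices and enrolments are invented for illustration; they are not SUNY Canton's bookstore records, and a real run would iterate over all 233 non-OER courses.

```python
# Illustrative re-creation of the estimation approach; course data below are made up,
# not SUNY Canton's actual Fall 2022 bookstore records.
courses = [
    {"course": "BIOL 101", "new": 150.00, "used": 112.50, "enrolled": 32},
    {"course": "HIST 210", "new": 95.00,  "used": 71.25,  "enrolled": 25},
    {"course": "MATH 161", "new": 180.00, "used": 135.00, "enrolled": 40},
]

total_savings = 0.0
for c in courses:
    avg_price = (c["new"] + c["used"]) / 2           # step three: average of new and used prices
    total_savings += avg_price * c["enrolled"]        # step four: multiply by enrolled students

print(f"Potential savings if all listed courses adopted OER: ${total_savings:,.2f}")
```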
Procedia PDF Downloads 75
8585 The Differential Impacts of Shame and Guilt on Father Involvement in Families with Special Needs Children
Authors: Lo Kai Chung
Abstract:
Fathers in families of children with disabilities play a crucial role in fostering child development. Previous studies addressing the emotions involved in fathers' rearing of children with special needs have been rare. With reference to the cultural orientation and masculine ideals of Chinese fathers, shame and guilt are probable causal emotions that affect fathers’ psycho-behavioral reactions and, thus, father involvement. Based on the findings of our earlier qualitative studies, the current study aims to develop and validate multi-item scales of guilt and shame and explore their relations with fatherhood in families with children with special needs. A model is proposed to understand the roles that shame and guilt play in affecting fathers’ involvement in their family system. The severity and type of the child’s special needs are regarded as independent variables affecting the father’s emotional responses, shame and guilt. It is hypothesized that shame and guilt, under the influence of masculinity, lead to avoidance and compensation, respectively, which subsequently decrease and increase father involvement with children with special needs. A cross-sectional online questionnaire survey of fathers of children with special needs, recruited by convenience sampling, was conducted. Potential participants were reached by bulk emails, related groups on the Internet and education/social services providers. In total, 537 valid sets of online questionnaires were collected from fathers of children with special needs. EFA on the item pools of shame and guilt was performed, resulting in an x-item single-factor solution and a y-item single-factor solution, respectively. Further path model analysis revealed that shame and guilt, under the influence of masculinity, showed differential avoidance and compensation responses and resulted in a decrease and an increase, respectively, in father involvement with special needs children. Demographic and key confounding variables were controlled in the analysis. The shame and guilt scales developed show good psychometric properties. Furthermore, they showed significant differential impacts, under the influence of masculinity, on avoidance and compensation behaviours, consequently resulting in a decrease/increase in father involvement in the expected directions. The findings have important theoretical and practical implications. At the community and policy level, the findings inform the design of strategies for strengthening the role of men in families with special needs children.
Keywords: emotions, father involvement, guilt, shame, special needs
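The single-factor EFA step can be illustrated with a minimal sketch using scikit-learn's FactorAnalysis on simulated Likert responses. The number of items, the loadings and the data are invented; only the sample size of 537 comes from the abstract, and the study's actual item pools and path model are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Simulated responses to a hypothetical 6-item shame scale (5-point Likert), driven by one latent factor.
rng = np.random.default_rng(42)
n = 537                                          # sample size reported in the abstract
latent = rng.normal(size=n)
items = np.clip(np.round(3 + np.outer(latent, rng.uniform(0.5, 0.9, 6))
                         + rng.normal(0, 0.6, (n, 6))), 1, 5)

fa = FactorAnalysis(n_components=1).fit(StandardScaler().fit_transform(items))
loadings = fa.components_.ravel()
# Items with |loading| around 0.40 or higher would typically be retained in a single-factor solution.
print("Item loadings on the single factor:", np.round(loadings, 2))
```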
Procedia PDF Downloads 71
8584 Determinants of Conference Service Quality as Perceived by International Attendees
Authors: Shiva Hashemi, Azizan Marzuki, S. Kiumarsi
Abstract:
In recent years, conference destinations have become highly competitive; therefore, it is necessary to understand the behaviours of conference participants, such as their decision-making process and their assessment of perceived conference quality. A conceptual research framework based on the Theory of Planned Behaviour model is presented in this research to gain a better understanding of the factors that influence it. This study highlights key factors presented in previous studies in which the behavioural intentions of participants are affected by conference quality. Therefore, this study is believed to provide insight into how conference participants can be encouraged to contribute to conference quality and behavioural intention.
Keywords: conference, attendees, service quality, perceived value, trust, behavioural intention
Procedia PDF Downloads 318
8583 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography-Mass Spectrometry
Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters
Abstract:
Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS not only competes intensively with the venous blood sampling method but is also widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling and therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced in “omics” areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. Dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and residues were derivatized with O-methoxyamine and MSTFA. The C17:0 methyl ester in heptane (10 ppm) was used as the internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried). Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, as could micro-sampling techniques for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the obtained observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach can be found in the application of different biological fluid micro-sampling techniques for metabolic profiling.
Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling
Procedia PDF Downloads 411
8582 Constraints on IRS Control: An Alternative Approach to Tax Gap Analysis
Authors: J. T. Manhire
Abstract:
A tax authority wants to take actions it knows will foster the greatest degree of voluntary taxpayer compliance to reduce the “tax gap.” This paper suggests that even if a tax authority could attain a state of complete knowledge, there are constraints on whether and to what extent such actions would result in reducing the macro-level tax gap. These limits are not merely a consequence of finite agency resources; they are inherent in the system itself. To show that this is one possible interpretation of the tax gap data, the paper formulates known results in a different way by analyzing tax compliance as a population with a single covariate. This leads to a standard use of the logistic map to analyze the dynamics of non-compliance growth or decay over a sequence of periods. This formulation gives the same results as the tax gap studies performed over the past fifty years in the U.S., within the published margins of error. Limitations and recommendations for future work are discussed, along with some implications for tax policy.
Keywords: income tax, logistic map, tax compliance, tax law
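The logistic-map dynamics referred to above follow x_{t+1} = r·x_t(1 − x_t), where x_t is read as the non-compliance share of the taxpayer population in period t. A minimal sketch is below; the growth parameters r and the starting share are illustrative choices, not values estimated in the paper, and are picked only to show the stable, oscillating and chaotic regimes.

```python
# Minimal sketch of the logistic-map formulation: x_t is the non-compliance share and r is a
# growth parameter; the r values and starting share below are illustrative only.
def logistic_trajectory(r, x0=0.16, periods=50):
    xs = [x0]
    for _ in range(periods):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

for r in (1.2, 3.2, 3.9):                      # stable, period-2, and chaotic regimes
    tail = logistic_trajectory(r)[-4:]
    print(f"r = {r}: long-run non-compliance share over the last periods = {[round(x, 3) for x in tail]}")
```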
Procedia PDF Downloads 120
8581 Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Authors: I. Valdez, J. Perdomo, M. Colindres, N. Castro
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, which are designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, and that ultimately improve the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: the PID position form (1 DOF) and the PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system’s hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important because of the frequency and suddenness with which the reference can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected, ensuring reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, due to the nature and the advantages which state space provides for modelling MIMO systems, such controllers are expected to exhibit ease of tuning for disturbance rejection, assuming that the designer is experienced. An in-depth multi-dimensional analysis of preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
Keywords: control, DC motor, discrete PID, discrete state feedback
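A minimal sketch of the position-form (1-DOF) discrete PID is given below, driving a crude first-order discrete model of a brushed DC motor's velocity with integration to position. The gains, motor constants, sample time and voltage limits are assumptions for illustration; they are not the tuned values or the Simulink hardware model from the study.

```python
class DiscretePID:
    """Position-form (1-DOF) discrete PID: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts.
    Gains and limits below are illustrative, not the tuned values from the study."""

    def __init__(self, kp, ki, kd, ts, u_min=-12.0, u_max=12.0):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.ts
        derivative = (error - self.prev_error) / self.ts
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.u_min, min(self.u_max, u))   # saturate to the assumed drive voltage limits

# Crude discrete model of a brushed DC motor: first-order velocity dynamics plus integration to position
# (assumed gain k in rad/s per volt and time constant tau in seconds).
def motor_step(position, velocity, voltage, ts=0.001, k=50.0, tau=0.05):
    velocity += ts * (-velocity / tau + k * voltage / tau)
    return position + ts * velocity, velocity

pid = DiscretePID(kp=8.0, ki=2.0, kd=0.3, ts=0.001)
pos, vel = 0.0, 0.0
for step in range(2000):                           # 2 s simulation tracking a 1 rad position setpoint
    u = pid.update(1.0, pos)
    pos, vel = motor_step(pos, vel, u)
print(f"Position after 2 s: {pos:.3f} rad")
```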
Procedia PDF Downloads 267
8580 Communication of Expected Survival Time to Cancer Patients: How It Is Done and How It Should Be Done
Authors: Geir Kirkebøen
Abstract:
Most patients with serious diagnoses want to know their prognosis, in particular their expected survival time. As part of the informed consent process, physicians are legally obligated to communicate such information to patients. However, there is no established (evidence-based) ‘best practice’ for how to do this. The two questions explored in this study are: How do physicians communicate expected survival time to patients, and how should it be done? We explored the first, descriptive question in a study with Norwegian oncologists as participants. The study had a scenario part and a survey part. In the scenario part, the doctors were asked to imagine that a patient, recently diagnosed with a serious cancer diagnosis, had asked them: ‘How long can I expect to live with such a diagnosis? I want an honest answer from you!’ The doctors were to assume that the diagnosis was certain and that, from an extensive recent study, they had optimal statistical knowledge, described in detail as a right-skewed survival curve, about how long patients with this kind of diagnosis could be expected to live. The main finding was that very few of the oncologists would explain to the patient the variation in survival time as described by the survival curve. The majority would not give the patient an answer at all. Of those who gave an answer, the typical answer was that survival time varies a lot, that it is hard to say in a specific case, that we will come back to it later, etc. The survey part of the study clearly indicates that the main reason why the oncologists would not deliver the mortality prognosis was discomfort with its uncertainty. The scenario part of the study confirmed this finding: the majority of the oncologists explicitly used the uncertainty, the variation in survival time, as a reason not to give the patient an answer. Many studies show that patients want realistic information about their mortality prognosis and that they should be given hope. The question then is how to communicate the uncertainty of the prognosis in a realistic and optimistic (hopeful) way. Based on psychological research, our hypothesis is that the best way to do this is by explicitly describing the variation in survival time, the (usually) right-skewed survival curve of the prognosis, and emphasizing to the patient the (small) possibility of being a ‘lucky outlier’. We tested this hypothesis in two scenario studies with lay people as participants. The data clearly show that people prefer to receive expected survival time as a median value together with explicit information about the survival curve’s right-skewedness (e.g., concrete examples of ‘positive outliers’), and that communicating expected survival time this way not only provides people with hope but also gives them a more realistic understanding compared with the typical way expected survival time is communicated. Our data indicate that it is not the existence of the uncertainty regarding the mortality prognosis that is the problem for patients, but how this uncertainty is, or is not, communicated and explained.
Keywords: cancer patients, decision psychology, doctor-patient communication, mortality prognosis
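The idea of a right-skewed survival curve with a median value and a small chance of being a 'lucky outlier' can be made concrete with a short numerical sketch. The lognormal shape and the 12-month median below are assumptions chosen only to illustrate the point; they are not data from this study.

```python
import numpy as np

# Illustrative right-skewed survival distribution (lognormal); parameters are assumptions,
# not data from the study, chosen so that the median survival is about 12 months.
rng = np.random.default_rng(7)
months = rng.lognormal(mean=np.log(12), sigma=0.8, size=100_000)

median = np.median(months)
mean = months.mean()
p_outlier = (months > 3 * median).mean()          # chance of living 3x the median or longer

print(f"Median survival: {median:.1f} months, mean: {mean:.1f} months (mean > median due to skew)")
print(f"Probability of being a 'lucky outlier' (> {3 * median:.0f} months): {p_outlier:.1%}")
```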
Procedia PDF Downloads 329
8579 Criminal Justice Debt Cause-Lawyering: An Analysis of Reform Strategies
Authors: Samuel Holder
Abstract:
Mass incarceration in the United States is a human rights issue, not merely a civil rights problem. It is a human rights problem not only because the United States has a high rate of incarceration, but more importantly because of who is jailed, for what purpose they are jailed and, ultimately, the manner in which they are jailed. To sustain the scale of the criminal justice system, one of the darker policies involves a multi-tiered strategy of fee and fine collection, usually targeting the most vulnerable and poor, many of whom run into the law via small offenses that do not rise to the level of felonies. This paper advances the notion that this debt-collection-to-incarceration pipeline is tantamount to a modern-day debtors’ prison system. It argues for a two-pronged cause-lawyering strategy: the first prong focused on traditional litigation on constitutional grounds, and the second an advocacy approach rooted in grassroots campaigns, designed to shift the normative operation and understanding of the rights of marginalized and racialized offenders. Ultimately, the argument suggests that this approach will be effective in combatting the (often highly privatized) criminal justice debt system and bring the roles of 'incapacitation, rehabilitation, deterrence, and retribution' back into the criminal justice legal conversation. Part I contextualizes and historicizes the role of fees, penalties, and fines in American criminal justice. Part II examines the emergence of private industry in the criminal justice system and its role in the acceleration of profit-driven criminal justice debt collection and incarceration. Part III addresses the failures of federal and state law and legislation in combatting predatory incarceration and debt collection in the criminal justice system, particularly as waged against the indigent and/or ethnically or racially marginalized. Part IV examines the potential for traditional cause-lawyering litigation on constitutional grounds, using case studies across contexts for illustration. Finally, Part V reviews the radical cause-lawyer’s role in the normative struggle to redefine prisoners’ rights and the rights of the marginalized (and racialized) as they intersect at the crossroads of criminal justice debt. This paper concludes with recommendations for litigation and advocacy, drawing on the hypotheses advanced and informed by case studies from a variety of national and international jurisdictions.
Keywords: cause-lawyering, criminal justice debt, human rights, judicial fees
Procedia PDF Downloads 1658578 New Standardized Framework for Developing Mobile Applications (Based On Real Case Studies and CMMI)
Authors: Ammar Khader Almasri
Abstract:
Software processes play a vital role in delivering a high-quality software system that meets users' needs. There are many software development models used by system developers, and they can be grouped into two categories: traditional and new methodologies. Mobile applications, like desktop applications, need an appropriate, well-functioning software development process. Nevertheless, mobile applications have distinct characteristics that limit their performance and efficiency, such as application size and mobile hardware constraints. This research aims to help developers use a standardized model for developing mobile applications.Keywords: software development process, agile methods, mobile application development, traditional methods
Procedia PDF Downloads 3878577 IP Management Tools, Strategies, Best Practices, and Business Models for Pharmaceutical Products
Authors: Nerella Srinivas
Abstract:
This study investigates the role of intellectual property (IP) management in pharmaceutical development, focusing on tools, strategies, and business models for leveraging IP effectively. Using a mixed-methods approach, we conducted case studies and qualitative analyses of IP management frameworks within the pharmaceutical sector. Our methodology included a review of IP tools tailored for pharmaceutical applications, strategic IP models for maximizing competitive advantages, and best practices for organizational efficiency. Findings emphasize the importance of understanding IP law and adopting adaptive strategies, illustrating how IP management can drive industry growth.Keywords: intellectual property management, pharmaceutical products, IP tools, IP strategies, best practices, business models, innovation
Procedia PDF Downloads 168576 Effects of Starvation, Glucose Treatment and Metformin on Resistance in Chronic Myeloid Leukemia Cells
Authors: Nehir Nebioglu
Abstract:
Chemotherapy is widely used for the treatment of cancer. Doxorubicin (DOX) is an anti-cancer chemotherapy drug classified as an anthracycline antibiotic. Antitumor antibiotics are natural products produced by species of the soil fungus Streptomyces. These drugs act in multiple phases of the cell cycle and are known to be cell-cycle specific. Although DOX is a valuable clinical antineoplastic agent, resistance, in addition to cardiotoxicity, limits its utility. The drug resistance of cancer cells results from multiple factors, including individual variation, genetic heterogeneity within a tumor, and cellular evolution. The mechanism of resistance is thought to involve, in particular, the transporters ABCB1 (MDR1, Pgp) and ABCC1 (MRP1), as well as others. Several studies on DOX-resistant cell lines have shown that resistance can be overcome by inhibition of ABCB1, ABCC1, and ABCC2. This study attempts to understand the effects of different glucose concentrations and of starvation on the proliferation of doxorubicin-resistant cancer cell lines. To understand the effect of starvation, K562/Dox and K562 cell lines were treated with 0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM Dox in both starvation and normal medium conditions. In addition, to interpret the effect of glucose treatment, different concentrations of glucose (0, 1 mM, 5 mM, 25 mM) were applied to Dox-treated (0, 5 nM, 50 nM, 500 nM, 5 uM and 50 uM) K562/Dox and K562 cell lines. All results show a significant decrease in the cell count of K562/Dox when the cells were starved. However, while the proliferation of K562/Dox cells decreases with increasing Dox concentration, the starved K562/Dox cells remain at the same proliferation level. Thus, the results imply that a fraction of the K562/Dox cells gain starvation resistance and remain resistant. Furthermore, for K562/Dox, there is no clear effect of glucose treatment on cell proliferation. In the presence of a moderate level of glucose (5 mM), proliferation increases compared with the other glucose concentrations for each Dox application; however, a significant increase in cell proliferation at the moderate glucose level is only observed at the 5 uM Dox concentration. Moderate Dox concentrations can be examined in further studies. At the high glucose concentration (25 mM), cell proliferation is lower than with the moderate glucose application; the reason could be that such a high amount of glucose cannot be fully absorbed by the cells. Also, at the low glucose concentration, proliferation decreases steadily with increasing Dox concentration. This can be explained by glucose depletion, the Warburg effect, described in the literature.Keywords: drug resistance, cancer cells, chemotherapy, doxorubicin
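For readers who want to see how such a two-factor proliferation experiment is typically tabulated and normalized, the sketch below lays out a hypothetical Dox-by-glucose grid and scales each condition to its own untreated control; the cell counts are random placeholders, not data from the study.

```python
import numpy as np
import pandas as pd

# Hypothetical layout for a Dox x glucose proliferation experiment;
# the counts are made-up placeholders, not measurements from the study.
dox_nM = [0, 5, 50, 500, 5_000, 50_000]        # 0 to 50 uM, expressed in nM
glucose_mM = [0, 1, 5, 25]

rng = np.random.default_rng(1)
records = []
for g in glucose_mM:
    for d in dox_nM:
        cells = rng.integers(40_000, 100_000)  # placeholder viable-cell count
        records.append({"glucose_mM": g, "dox_nM": d, "cells": cells})

df = pd.DataFrame(records)

# Normalize each glucose condition to its own untreated (0 nM Dox) control,
# the usual way to compare proliferation across treatments.
controls = df[df.dox_nM == 0].set_index("glucose_mM")["cells"]
df["relative_proliferation"] = df.apply(
    lambda r: r.cells / controls[r.glucose_mM], axis=1
)

print(df.pivot(index="dox_nM", columns="glucose_mM",
               values="relative_proliferation").round(2))
```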
Procedia PDF Downloads 1768575 Fears of Strangers: Causes of Anonymity Rejection on Virtual World
Authors: Proud Arunrangsiwed
Abstract:
This is a collaborative narrative study that combines issues from selected papers with the researcher's own experience as an anonymous user on social networking sites. The objective of this research is to understand why regular users refuse contact with anonymous users and to study the communication traditions used in the selected studies. Anonymous users are rejected by regular users because of the fear of cyberbullying, the fear of unpleasant behaviors, and an unwillingness to change communication norms. The suggestion for future research is to use a longitudinal or quantitative design; theory in the rhetorical tradition should help in developing a strong trust message.Keywords: anonymous, anonymity, online identity, trust message, reliability
Procedia PDF Downloads 3598574 Data Mining in Healthcare for Predictive Analytics
Authors: Ruzanna Muradyan
Abstract:
Medical data mining is a crucial field in contemporary healthcare that offers cutting-edge tactics with enormous potential to transform patient care. This abstract examines how sophisticated data mining techniques could transform the healthcare industry, with a special focus on how they might improve patient outcomes. Healthcare data repositories have evolved dynamically, producing a rich tapestry of diverse, multi-dimensional information that includes genetic profiles, lifestyle markers, electronic health records, and more. By applying data mining techniques to this vast library, a variety of prospects for precision medicine, predictive analytics, and insight generation becomes visible. Predictive modeling for illness prediction, risk stratification, and therapy efficacy evaluation is an important point of focus. Healthcare providers may use this abundance of data to tailor treatment plans, identify high-risk patient populations, and forecast disease trajectories by applying machine learning algorithms and predictive analytics. Better patient outcomes, more efficient use of resources, and early interventions are made possible by this proactive strategy. Furthermore, data mining techniques act as catalysts that reveal complex relationships between apparently unrelated data pieces, providing enhanced insights into the causes of disease, genetic susceptibilities, and environmental factors. Healthcare practitioners can gain practical insights that guide disease prevention, customized patient counseling, and focused therapies by analyzing these associations. The abstract also explores the problems and ethical issues that come with using data mining techniques in the healthcare industry: to use these approaches properly, it is essential to find a balance between data privacy, security concerns, and the interpretability of complex models. Finally, this abstract demonstrates the revolutionary power of modern data mining methodologies in transforming the healthcare sector. Healthcare practitioners and researchers can uncover unique insights, enhance clinical decision-making, and ultimately elevate patient care to unprecedented levels of precision and efficacy by employing cutting-edge methodologies.Keywords: data mining, healthcare, patient care, predictive analytics, precision medicine, electronic health records, machine learning, predictive modeling, disease prognosis, risk stratification, treatment efficacy, genetic profiles, precision health
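As a minimal sketch of the predictive modeling and risk stratification described above, the example below trains a logistic regression on synthetic stand-in features (age, a lab value, a comorbidity count) and bins the predicted probabilities into risk tiers; the features, the label-generating rule, and the tier cut-offs are all assumptions made for illustration, not a reference to any real clinical dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for EHR-derived features; in practice these would come
# from a governed, de-identified clinical dataset.
rng = np.random.default_rng(42)
n = 5_000
age = rng.normal(60, 12, n)
lab = rng.normal(1.0, 0.3, n)
comorbidities = rng.poisson(2, n)

# Hypothetical outcome model used only to generate example labels.
logit = 0.04 * (age - 60) + 1.5 * (lab - 1.0) + 0.3 * comorbidities - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, lab, comorbidities])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]

# Simple three-tier risk stratification by predicted probability.
tiers = np.digitize(risk, [0.33, 0.66])   # 0 = low, 1 = medium, 2 = high
print("AUC:", round(roc_auc_score(y_te, risk), 3))
print("patients per tier (low, medium, high):", np.bincount(tiers, minlength=3))
```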
Procedia PDF Downloads 638573 Design Optimization of the Primary Containment Building of a Pressurized Water Reactor
Authors: M. Hossain, A. H. Khan, M. A. R. Sarkar
Abstract:
The primary containment structure is one of the five safety layers of a nuclear facility and must be designed so that it can withstand the pressure and excess radioactivity during accident conditions. It is also necessary to minimize cost while ensuring maximum possible safety in order to make the design economically feasible and attractive. This paper attempts to identify the optimum design conditions for the primary containment structure, considering both mechanical and radiation safety while keeping economic aspects in mind. The work takes advantage of commercial simulation software to identify suitable conditions without the need for costly experiments. The generated data may be helpful for further studies.Keywords: PWR, concrete containment, finite element approach, neutron attenuation, Von Mises stress
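The two checks named in the keywords, neutron attenuation and Von Mises stress, reduce to standard textbook relations; the sketch below evaluates both with assumed inputs (removal cross section, wall thickness, principal stresses) chosen only to illustrate the calculation, not taken from the paper.

```python
import numpy as np

# --- Neutron attenuation through the concrete wall: I = I0 * exp(-Sigma * x) ---
# Sigma is an assumed macroscopic removal cross section for ordinary concrete;
# a real design would take it from the mix specification and nuclear data.
sigma_cm = 0.09            # 1/cm, assumed
thickness_cm = 120.0       # assumed wall thickness
attenuation = np.exp(-sigma_cm * thickness_cm)
print(f"transmitted fraction of incident neutron flux: {attenuation:.2e}")

# --- Von Mises equivalent stress from principal stresses (MPa, assumed values) ---
s1, s2, s3 = 12.0, 5.0, -2.0
von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
print(f"Von Mises stress: {von_mises:.1f} MPa")
```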
Procedia PDF Downloads 1878572 Superparamagnetic Core Shell Catalysts for the Environmental Production of Fuels from Renewable Lignin
Authors: Cristina Opris, Bogdan Cojocaru, Madalina Tudorache, Simona M. Coman, Vasile I. Parvulescu, Camelia Bala, Bahir Duraki, Jeroen A. Van Bokhoven
Abstract:
The tremendous achievements in the development of society, embodied in ever more sophisticated materials and systems, are largely based on non-renewable resources. Consequently, after more than two centuries of intensive development, we are faced with, among other things, declining fossil fuel reserves, an increased impact of greenhouse gases on the environment, and economic effects caused by fluctuations in oil and mineral resource prices. The use of biomass may solve part of these problems, and recent analyses have demonstrated that, from the perspective of reducing carbon dioxide emissions, its valorization may bring important advantages, provided that genetically modified fast-growing trees or wastes are used as primary sources. In this context, the abundance and complex structure of lignin offer various possibilities for exploitation. However, its transformation into fuels or chemicals requires complex chemistry involving the cleavage of C-O and C-C bonds and the alteration of functional groups. Chemistry has offered various solutions in this respect, but despite intense work, many drawbacks still limit industrial application. The proposed technologies have mainly considered homogeneous catalysts, meaning expensive noble-metal-based systems that are hard to recover at the end of the reaction. Also, the reactions were carried out in organic solvents, which are no longer acceptable from an environmental point of view. To avoid these problems, the concept of this work was to investigate the synthesis of superparamagnetic core-shell catalysts for the fragmentation of lignin directly in the aqueous phase. The magnetic nanoparticles were covered with a nanoshell of an oxide (niobia) with a double role: to protect the magnetic nanoparticles and to provide a proper (acidic) catalytic function; on this composite, cobalt nanoparticles were deposited to catalyze C-C bond splitting. For this purpose, we developed a protocol to prepare multifunctional, magnetically separable nanocomposite Co@Nb2O5@Fe3O4 catalysts. We also established an analytical protocol for the identification and quantification of the fragments resulting from lignin depolymerization in both the liquid and solid phases. The fragmentation of various lignins occurred on the prepared materials in high yields and with very good selectivity toward the desired fragments. Optimization of the catalyst composition indicated a cobalt loading of 4 wt% as optimal. Working at 180 °C and 10 atm H2, this catalyst allowed a lignin conversion of up to 60%, leading to a mixture containing over 96% C20-C28 and C29-C37 fragments, which were then completely fragmented to C12-C16 in a second stage. The investigated catalysts were completely recyclable, and no leaching of their constituent elements was detected by inductively coupled plasma optical emission spectrometry (ICP-OES).Keywords: superparamagnetic core-shell catalysts, environmental production of fuels, renewable lignin, recyclable catalysts
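As a quick back-of-the-envelope check on the figures quoted above, the sketch below combines the stated conversion and selectivity into a first-stage fragment yield; it is simple arithmetic for illustration, not part of the experimental protocol.

```python
# Arithmetic on the figures quoted above (illustrative, not experimental code).
conversion = 0.60        # lignin conversion at 180 C and 10 atm H2
selectivity = 0.96       # combined selectivity toward C20-C28 and C29-C37 fragments

first_stage_yield = conversion * selectivity
print(f"first-stage yield of C20-C37 fragments: {first_stage_yield:.1%} of the lignin fed")

# If the second stage fragments these completely to C12-C16, the same fraction
# of the original lignin ends up in the C12-C16 range (an upper bound).
print(f"upper-bound C12-C16 yield after stage two: {first_stage_yield:.1%}")
```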
Procedia PDF Downloads 328