Search results for: option selection
815 Chronology and Developments in Inventory Control Best Practices for FMCG Sector
Authors: Roopa Singh, Anurag Singh, Ajay
Abstract:
Agriculture contributes a major share to the national economy of India. A major portion of the Indian economy (about 70%) depends upon agriculture, as it forms the main source of income. About 43% of India’s geographical area is used for agricultural activity, which involves 65-75% of the total population of India. The present work deals with Fast Moving Consumer Goods (FMCG) industries and their inventories, which use agricultural produce as the raw material or input for their final product. Since the beginning of inventory practices, many developments have taken place, which can be categorised into three phases based on a review of various works. The first phase is related to the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods, with the purpose of balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Maintenance of inventory is generally considered negative, as a large amount of capital is blocked, especially in mechanical and electrical industries. The case is different, however, for food processing and agro-based industries and their inventories, due to the cyclic variation in the cost of their raw materials, which is the reason these industries were selected for this work. The application of electrical control theory to inventory control makes decision-making nearly instantaneous for FMCG industries without loss of their projected profits, a loss that occurred during the first and second phases mainly because of late implementation of decisions. The work also expresses various inventory and work-in-progress (WIP) related errors in terms of their monetary values, so that decision-making is fully target-oriented. Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector
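For reference, the classical EOQ result underlying the first phase can be written compactly; the following is the standard textbook form, with symbols defined here for illustration (they are not taken from the abstract):

\[ Q^{*} = \sqrt{\frac{2DS}{H}}, \qquad TC(Q) = \frac{D}{Q}\,S + \frac{Q}{2}\,H, \]

where D is the annual demand, S the ordering cost per order, H the holding cost per unit per year, and Q* the order quantity that minimizes the total cost TC(Q) by balancing ordering and holding costs.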
Procedia PDF Downloads 353
814 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models
Authors: Ainouna Bouziane
Abstract:
The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed in recent decades. A variety of experimental techniques, based either on Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed. The problem becomes particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction/segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions, which considers the influence of relevant experimental parameters such as the range of tilt angles, image noise level, and object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis. Keywords: electron tomography, supported catalysts, nanometrology, error assessment
Procedia PDF Downloads 88
813 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia
Authors: Carol Anne Hargreaves
Abstract:
A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, may be used to identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock’s price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that was likely to go up in price – portfolio 1. Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two – portfolio 2. Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up – portfolio 3. The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy for all models. All accuracy measures were above 70%. All portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return for each stock portfolio was computed and compared with the stock market index returns. The returns were 23.87% for the principal component analysis stock portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market performance was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market. Keywords: machine learning, stock market trading, logistic regression, cluster analysis, factor analysis, decision trees, neural networks, automated stock investment system
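A minimal sketch of the three-portfolio pipeline described above, using scikit-learn; it is illustrative only, not the authors' code, and the input file, feature names and label column ("stock_features.csv", "momentum", "past_return", "price_went_up", etc.) are assumptions made for the example.

```python
# Illustrative sketch of the three portfolios: K-means cluster, PCA scores, logistic regression.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

stocks = pd.read_csv("stock_features.csv")                      # hypothetical input file
X = StandardScaler().fit_transform(stocks[["momentum", "rsi", "volume_change", "macd"]])

# Portfolio 1: the K-means cluster whose members historically rose the most
stocks["cluster"] = KMeans(n_clusters=3, random_state=0).fit_predict(X)
best_cluster = stocks.groupby("cluster")["past_return"].mean().idxmax()
portfolio1 = stocks[stocks["cluster"] == best_cluster]

# Portfolio 2: stocks rated high on the first two principal components
pcs = PCA(n_components=2).fit_transform(X)
portfolio2 = stocks[(pcs[:, 0] > pcs[:, 0].mean()) & (pcs[:, 1] > pcs[:, 1].mean())]

# Portfolio 3: logistic regression probability that the price goes up
y = stocks["price_went_up"]                                     # hypothetical binary label
clf = LogisticRegression(max_iter=1000).fit(X, y)
portfolio3 = stocks[clf.predict_proba(X)[:, 1] > 0.7]
```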
Procedia PDF Downloads 157
812 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under the influence of compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices. A total of 49 slices are obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can later be tested in compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale considering relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1 and 3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated in ABAQUS by importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from actual structures, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete. Keywords: concrete, image processing, plane strain, interfacial transition zone
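A minimal sketch of the slice-labelling scheme described above: pixels of a grayscale CT slice are rescaled to the 0-9 scale and mapped to voids (0), boundary/ITZ (1-3), mortar (4-6) and aggregates (7-9). The file name and libraries used here are assumptions; the abstract only states that the processing was done in Python.

```python
# Rescale a CT slice to a 0-9 scale and label the mesoscale constituents.
import numpy as np
from PIL import Image

gray = np.asarray(Image.open("ct_slice_01.png").convert("L"), dtype=float)  # 0-255 grayscale
scaled = np.round(gray / 255.0 * 9).astype(int)                             # normalize to 0-9

labels = np.empty(scaled.shape, dtype="<U9")
labels[scaled == 0] = "void"
labels[(scaled >= 1) & (scaled <= 3)] = "itz"        # aggregate-mortar boundary
labels[(scaled >= 4) & (scaled <= 6)] = "mortar"
labels[scaled >= 7] = "aggregate"

# Area fraction of each constituent in the slice
fractions = {name: float(np.mean(labels == name)) for name in ("void", "itz", "mortar", "aggregate")}
print(fractions)
```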
Procedia PDF Downloads 241
811 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach
Authors: Shyaka Eugene
Abstract:
The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating the crack propagation behavior in beams induced by soundless cracking agents. It is commonly used in controlled demolition and has gained prominence due to its non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze the crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering various parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation
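The abstract does not state its working equations; for orientation, the standard linear-elastic definitions behind an energy release rate approach are reproduced here (general textbook forms, not specific to this study):

\[ G = -\frac{\partial \Pi}{\partial A}, \qquad G_I = \frac{K_I^{2}}{E'}, \qquad E' = \begin{cases} E & \text{plane stress} \\ E/(1-\nu^{2}) & \text{plane strain} \end{cases} \]

where Π is the potential energy of the cracked body, A the crack area, K_I the mode I stress intensity factor, E Young's modulus and ν Poisson's ratio; crack growth is predicted when G reaches the critical energy release rate G_c of the material.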
Procedia PDF Downloads 63
810 Practical Modelling of RC Structural Walls under Monotonic and Cyclic Loading
Authors: Reza E. Sedgh, Rajesh P. Dhakal
Abstract:
Shear walls have been used extensively as the main lateral force resisting systems in multi-storey buildings. Recent developments in performance-based design urge practicing engineers to conduct nonlinear static or dynamic analyses to evaluate the seismic performance of multi-storey shear wall buildings by employing the distinct analytical models suggested in the literature. For practical purposes, the application of macroscopic models to simulate the global and local nonlinear behavior of structural walls outweighs that of microscopic models. The required skill level, computational time and limited access to specialized RC finite element packages prevent the general application of the latter in performance-based design or assessment of multi-storey shear wall buildings in design offices. Hence, this paper is organized to verify the capability of the nonlinear shell element in a commercially available package (SAP2000) in simulating the results of selected specimens under monotonic and cyclic loads, using the very simplified cyclic material laws available in the analytical tool. The selection of constitutive models, the determination of the related parameters of the constituent materials and an appropriate nonlinear shear model are presented in detail. Adoption of the proposed simple model demonstrated that the predicted results follow the overall trend of the experimental force-displacement curve. Although the predicted ultimate strength and the overall shape of the hysteresis loops agreed to some extent with the experiments, predicting the ultimate displacement (the point of significant strength degradation) remains challenging in some cases. Keywords: analytical model, nonlinear shell element, structural wall, shear behavior
Procedia PDF Downloads 404
809 Ability of Gastric Enzyme Extract of Adult Camel to Clot Bovine Milk
Authors: Boudjenah-Haroun Saliha, Isselnane Souad, Nouani Abdelwahab, Baaissa Babelhadj, Mati Abderrahmane
Abstract:
Algeria is experiencing significant development of its dairy sector, where consumption of milk and milk products increased from 2.7 million tons in 2008 to 4,400,000 tons in 2013, and cheese production reached 1,640 tons in 2014, with an average consumption of 0.7 kg/person/year. Although rennet is still the most widely used coagulating enzyme in cheese making, its production has faced a growing worldwide shortage. This shortage is primarily due to a steady increase in the production and consumption of cheese and the inability to increase the production of rennet in parallel, and it has caused very large fluctuations in its price. To overcome these obstacles, much research has been undertaken to find effective and competitive substitutes that can be used industrially. For this, the selection of a locally produced rennet substitute is desirable. It would allow a permanent supply with limited dependence on imports and price fluctuations. Investigations conducted by our research team showed that coagulant extracts from the stomachs of older camels are characterized by a higher coagulating power than those from younger camels. The objective of this work is to study the possibility of substituting commercial rennet coagulant with gastric enzymes from adult camels for the coagulation of bovine milk. The crude camel coagulant extracts obtained are characterized through their protein content and their clotting and proteolytic activities. The conditions for clotting milk by the action of these extracts were optimized. The clotting time of milk treated with the enzyme preparations under different conditions was calculated. Bovine rennet was used for comparison. The results show that crude gastric extracts from adult camels can be a good substitute for bovine rennet. Keywords: Algeria, camel, cheese, coagulation, gastric extracts, milk
Procedia PDF Downloads 441
808 Integrated Risk Management in The Supply Chain of Essential Medicines in Zambia
Authors: Mario M. J. Musonda
Abstract:
Access to health care is a human right, which includes having timely access to affordable, quality essential medicines at the right place and in sufficient quantity. However, inefficient public sector supply chain management contributes to constant shortages of essential medicines at health facilities. The literature review involved a desktop study of published research studies and reports on risk management, supply chain management of essential medicines and their integration to increase the efficiency of the latter. The research was conducted on a sample population of offices under the Ministry of Health Headquarters, Lusaka Provincial and District Offices, selected health facilities in Lusaka, Medical Stores Limited, the Zambia Medicines Regulatory Authority and Cooperating Partners. Individuals involved in the study were selected judgmentally according to their functions in the selection and quantification, regulation, procurement, storage, distribution, quality assurance, and dispensing of essential medicines. Structured interviews and discussions were held with selected experts, and self-administered questionnaires were distributed. Data were collected and analysed from 35 returned and usable questionnaires out of the 50 distributed. The highest-prioritised risks were inadequate and inconsistent fund disbursements, weak information management systems, weak quality management systems and insufficient resources (human resources and infrastructure), among others. The results of this research can be used to increase the efficiency of the public sector supply chain of essential medicines and other pharmaceuticals. The results of the study showed that there is a need for participating institutions and organisations to implement effective risk management systems to increase the efficiency of the entire supply chain, in order to avoid and/or reduce shortages of essential medicines at health facilities. Keywords: essential medicine, risk assessment, risk management, supply chain, supply chain risk management
Procedia PDF Downloads 443
807 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge by means of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is thereby independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce. Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
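A short sketch of the 'Subset Accuracy' criterion mentioned above: a multi-label prediction counts as correct only if every label (test automation component) for a test step matches the ground truth. The toy arrays are illustrative, not the authors' data; scikit-learn's accuracy_score computes exactly this exact-match ratio for multi-label indicator matrices.

```python
# Subset accuracy (exact match) vs. per-label error for multi-label predictions.
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])  # components per test step
y_pred = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])

subset_acc = accuracy_score(y_true, y_pred)    # exact-match ratio: 3 of 4 steps -> 0.75
per_label_err = hamming_loss(y_true, y_pred)   # fraction of individual labels wrong: 1/12
print(subset_acc, per_label_err)
```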
Procedia PDF Downloads 132
806 A Method for Multimedia User Interface Design for Mobile Learning
Authors: Shimaa Nagro, Russell Campion
Abstract:
Mobile devices are becoming ever more widely available, with growing functionality, and are increasingly used as an enabling technology to give students access to educational material anytime and anywhere. However, the design of educational material user interfaces for mobile devices is beset by many unresolved research issues, such as those arising from emphasising the information concepts and then mapping this information to appropriate media (modelling information, then mapping media effectively). This report describes a multimedia user interface design method for mobile learning. The method covers specification of user requirements and information architecture, media selection to represent the information content, design for directing attention to important information, and interaction design to enhance user engagement, based on Human-Computer Interaction (HCI) design strategies. The method will be evaluated through three different case studies to show that it is suitable for application to different areas: an application to teach major computer networking concepts, an application to deliver a history-based topic, and an application to teach mathematical principles. After the first two case studies have been completed, the method will be revised to remove deficiencies and then used to develop the third case study, after which it will again be revised into its final format. A usability evaluation will be carried out to measure the usefulness and effectiveness of the method. The investigation will combine qualitative and quantitative methods, including interviews and questionnaires for data collection and the three case studies for validating the method. The researcher has successfully produced the method at this point, and it is now under validation and testing. From this point forward in the report, the researcher will refer to the method using the abbreviation MDMLM, which stands for Multimedia Design Mobile Learning Method. Keywords: human-computer interaction, interface design, mobile learning, education
Procedia PDF Downloads 246
805 Transformation of Hexagonal Cells into Auxetic in Core Honeycomb Furniture Panels
Authors: Jerzy Smardzewski
Abstract:
Structures with negative Poisson's ratios are called auxetic. They are characterized by better mechanical properties than conventional structures, especially shear strength, the ability to better absorb energy and increase strength during bending, especially in sandwich panels. Commonly used paper cores of cellular boards are made of hexagonal cells. With isotropic facings, these cells provide isotropic properties of the entire furniture board. Shelves made of such panels with a thickness similar to standard chipboards do not provide adequate stiffness and strength of the furniture. However, it is possible to transform the shape of hexagonal cells into polyhedral auxetic cells that improve the mechanical properties of the core. The work aimed to transform the hexagonal cells of the paper core into auxetic cells and determine their basic mechanical properties. Using numerical methods, it was decided to design the most favorable proportions of cells distinguished by the lowest Poisson's ratio and the highest modulus of linear elasticity. Standard cores for cellular boards commonly used to produce 34 mm thick furniture boards were used for the tests. Poisson's ratios, bending strength, and linear elasticity moduli were determined for such cores and boards. Then, the cells were transformed into auxetic structures, and analogous cellular boards were made for which mechanical properties were determined. The results of numerical simulations for which the variable parameters were the dimensions of the cell walls, wall inclination angles, and relative cell density were presented in the further part of the paper. Experimental tests and numerical simulations showed the beneficial effect of auxeticization on the mechanical quality of furniture panels. They allowed for the selection of the optimal shape of auxetic core cells.Keywords: auxetics, honeycomb, panels, simulation, experiment
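As a point of reference (not taken from the paper), the classical Gibson-Ashby cellular-solids estimate for the in-plane Poisson's ratio of a honeycomb with inclined wall length l, vertical wall length h and wall inclination angle θ is:

\[ \nu_{12} = \frac{\cos^{2}\theta}{\left(\frac{h}{l} + \sin\theta\right)\sin\theta}, \]

so conventional hexagonal cells (θ > 0) give a positive ratio, while re-entrant cells (θ < 0) of suitable proportions give the negative Poisson's ratio that defines an auxetic core; the paper's own parameterization of cell geometry may differ.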
Procedia PDF Downloads 12
804 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science in the measurement of psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now being used widely to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, as well as dimensionality. As a result, inferior methods (a.k.a. Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover the most basic binary IRT model, known as the 1-parameter logistic (1-PL) model dating back to over 50 years ago, up until the most recent complex, 4-parameter logistic (4-PL) model. Binary IRT models will be defined mathematically and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. Simulated data of N=500,000 subjects who responded to four dichotomous items and 2. A pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences to alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and analyses of real-world pilot will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate simulated data and analyses will be available upon request to allow for replication of results.Keywords: instrument development, item response theory, latent trait theory, psychometrics
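For readers unfamiliar with the notation, the 4-PL model discussed above is commonly written as follows (this is the standard general form, not a result of the present study):

\[ P(X_{ij} = 1 \mid \theta_j) = c_i + (d_i - c_i)\,\frac{1}{1 + e^{-a_i(\theta_j - b_i)}}, \]

where θ_j is the latent trait of person j, a_i the discrimination, b_i the difficulty, c_i the lower asymptote (pseudo-guessing) and d_i the upper asymptote of item i; fixing d_i = 1 recovers the 3-PL model, additionally fixing c_i = 0 gives the 2-PL model, and further constraining all a_i to be equal yields the 1-PL (Rasch-type) model.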
Procedia PDF Downloads 357
803 Exploration Tools for Tantalum-Bearing Pegmatites along Kibara Belt, Central and Southwestern Uganda
Authors: Sadat Sembatya
Abstract:
Tantalum metal is used to address the capacitance challenge of 21st-century technology growth. Tantalum is rarely found in its elemental form; hence, it is often found with niobium and the radioactive elements thorium and uranium, and industrial processes are required to extract pure tantalum. Its deposits are mainly oxide-associated and exist as Ta-Nb oxides such as tapiolite, wodginite and ixiolite, while rutile and pyrochlore-supergroup minerals are of minor importance. The stability and chemical inertness of tantalum make it a valuable substance for laboratory equipment and a substitute for platinum. Each period of tantalum ore formation is characterized by specific mineralogical and geochemical features. Compositions of Columbite-Group Minerals (CGM) are variable: Fe-rich types predominate in the Man Shield (Sierra Leone), the Congo Craton (DR Congo), the Kamativi Belt (Zimbabwe) and the Jos Plateau (Nigeria). Mn-rich columbite-tantalite is typical of the Alto Ligonha Province (Mozambique), the Arabian-Nubian Shield (Egypt, Ethiopia) and the Tantalite Valley pegmatites (southern Namibia). There are large compositional variations through Fe-Mn fractionation, followed by Nb-Ta fractionation. These are typical for pegmatites usually associated with very coarse quartz-feldspar-mica granites, such as the young granitic systems of the Kibara Belt of Central Africa and the Older Granites of Nigeria. Unlike ‘simple’ Be-pegmatites, most Ta-Nb rich pegmatites have highly complex zoning; hence, systematic exploration tools are needed to find and rapidly assess the potential of different pegmatites. The pegmatites exist as known deposits (e.g., abandoned mines) and as exposed or buried pegmatites. We investigate rocks and minerals to trace possible effects of hydrothermal alteration, mainly for exposed pegmatites; carry out mineralogical studies to establish evidence of gradual replacement; and use geochemistry to report the availability of trace elements, which are good indicators of mineralisation. Pegmatites are not good geophysical responders, resulting in the exclusion of the geophysics option. For more advanced prospecting, we first take bulk samples from different zones to establish their grades and characteristics, then set up a pilot test plant, because of the large samples, to aid in the quantitative characterization of zones, and then drill to reveal the distribution and extent of the different zones, but not necessarily the grade, due to the nugget effect. Rapid assessment tools are needed to assess grade and degree of fractionation in order to ‘rule in’ or ‘rule out’ a given pegmatite for future work. Pegmatite exploration is also unique, high-risk and expensive; hence, the right traceability system and certification for the 3Ts are highly needed. Keywords: exploration, mineralogy, pegmatites, tantalum
Procedia PDF Downloads 150
802 A Deluge of Disaster, Destruction, Death and Deception: Negative News and Empathy Fatigue in the Digital Age
Authors: B. N. Emenyeonu
Abstract:
Initially identified as sensationalism in the eras of yellow journalism and tabloidization, the inclusion of news which shocks or provokes strong emotional responses among readers, viewers, and browsers has not only remained a persistent feature of journalism but has also seemingly escalated in the current climate of digital and social media. Whether in the relentless revelation of scandals in high places, profiles on people displaced by sporadic wars or natural disasters, gruesome accounts of trucks plowing into pedestrians in a city centre, or the coverage of mourners paying tributes to victims of a mass shooting, mainstream, and digital media are often awash with tragedy, tears, and trauma. While it may aim at inspiring sympathy, outrage, or even remedial reactions, it would appear that the deluge of grief and misery in the news merely generates in the audience a feeling that borders on hearing or seeing too much to care or act. This feeling also appears to be accentuated by the dizzying diffusion of social media news and views, most of whose authenticity is not easily verifiable. Through a survey of 400 regular consumers of news and an in-depth interview of 10 news managers in selected media organizations across the Middle East, this study therefore investigates public attitude to the profusion of bad news in mainstream and digital media. Among other targets, it examines whether the profusion of bad news generates empathy fatigue among the audience and, if so, whether there is any association between biographic variables (profession, age, and gender) and an inclination to empathy fatigue. It also seeks to identify which categories of bad news and media are most likely to drag the audience into indifference. In conclusion, the study discusses the implications of the findings for mass-mediated advocacies such as campaigns against conflicts, corruption, nuclear threats, terrorism, gun violence, sexual crimes, and human trafficking, among other threats to humanity.Keywords: digital media, empathy fatigue, media campaigns, news selection
Procedia PDF Downloads 58
801 3D Modeling Approach for Cultural Heritage Structures: The Case of Virgin of Loreto Chapel in Cusco, Peru
Authors: Rony Reátegui, Cesar Chácara, Benjamin Castañeda, Rafael Aguilar
Abstract:
Nowadays, heritage building information modeling (HBIM) is considered an efficient tool to represent and manage information of cultural heritage (CH). The basis of this tool relies on a 3D model generally obtained from a cloud-to-BIM procedure. There are different methods to create an HBIM model that goes from manual modeling based on the point cloud to the automatic detection of shapes and the creation of objects. The selection of these methods depends on the desired level of development (LOD), level of information (LOI), grade of generation (GOG), as well as on the availability of commercial software. This paper presents the 3D modeling of a stone masonry chapel using Recap Pro, Revit, and Dynamo interface following a three-step methodology. The first step consists of the manual modeling of simple structural (e.g., regular walls, columns, floors, wall openings, etc.) and architectural (e.g., cornices, moldings, and other minor details) elements using the point cloud as reference. Then, Dynamo is used for generative modeling of complex structural elements such as vaults, infills, and domes. Finally, semantic information (e.g., materials, typology, state of conservation, etc.) and pathologies are added within the HBIM model as text parameters and generic models families, respectively. The application of this methodology allows the documentation of CH following a relatively simple to apply process that ensures adequate LOD, LOI, and GOG levels. In addition, the easy implementation of the method as well as the fact of using only one BIM software with its respective plugin for the scan-to-BIM modeling process means that this methodology can be adopted by a larger number of users with intermediate knowledge and limited resources since the BIM software used has a free student license.Keywords: cloud-to-BIM, cultural heritage, generative modeling, HBIM, parametric modeling, Revit
Procedia PDF Downloads 144
800 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in STEAM programs. In STEAM programs designed and developed in the future, the humanities should act not just as a 'tool' for science, technology and mathematics, but as 'core' content with equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority of programs focusing on technology education, the arts and humanities are considered secondary. As a result, the arts are most likely to be treated as an option that can be excluded by the teachers who run a STEAM program. If what we ultimately pursue through STEAM education is fostering STEAM literacy, we should no longer reduce the arts to a tool area for STEM. Based on this awareness, this study analyzed over 160 STEAM programs in middle and high schools, produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided the arts into fine arts and liberal arts, focused on the Korean language course, which belongs to the liberal arts, and analyzed which curriculum standards were selected and through what kind of process the Korean language department participated in teaching and learning. In this study, to ensure the reliability of the analysis results, the individual analysis results of the two researchers were cross-checked and accepted only if they were consistent. We also conducted a reliability check on the analysis results with three middle and high school teachers involved in STEAM education programs. For 10 programs selected randomly from the analyzed programs, a Cronbach's α of .853 indicated a reliable level. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence, at 28.19%, is lower than technological convergence. Third, the language and achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading and writing. This means that the convergence of the Korean language department occurs only as a tool for communicating opinions or promoting scientific products. In this study, we intend to compare these results with STEAM programs in the United States and abroad to explore what elements or key concepts are required in the achievement criteria for the Korean language curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data that can be fused into 'equivalent qualifications' with science (S), technology and engineering (TE) and mathematics (M). Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM Literacy, STEM
Procedia PDF Downloads 158
799 Body Types of Softball Players in the 39th National Games of Thailand
Authors: Nopadol Nimsuwan, Sumet Prom-in
Abstract:
The purpose of this study was to investigate the body types, size, and body composition of softball players in the 39th National Games of Thailand. The population of this study was 352 softball players who participated in the 39th National Games of Thailand, from which a sample size of 291 was determined using the Taro Yamane formula, with selection made by the stratified sampling method. The data collected were weight, height, arm length, leg length, chest circumference, mid-upper arm circumference, calf circumference, and subcutaneous fat at the upper arm, the scapula, above the pelvis, and the mid-calf. The Keys and Brozek formula was used to calculate the fat quantity, the Kitagawa formula to calculate the muscle quantity, and the Heath and Carter method to determine the somatotype values. The results of the study can be summarized as follows. The male softball players had, on average, an endo-mesomorph body type, while the female softball players had, on average, a meso-endomorph body type. When considered by playing position, the male softball players in every position had the endo-mesomorph body type, while the female softball players in every position had the meso-endomorph body type, except for the center fielder, who had the endo-ectomorph body type. The endo-mesomorph body type is suitable for male softball players and the meso-endomorph body type for female softball players because these body types suit the five basic softball skills: gripping, throwing, catching, hitting, and base running. Thus, those involved in selecting softball players for competitions at different levels should consider the body type, size, and body composition of the players. Keywords: body types, softball players, national games of Thailand, social sustainability
Procedia PDF Downloads 484
798 Transformer-Driven Multi-Category Classification for an Automated Academic Strand Recommendation Framework
Authors: Ma Cecilia Siva
Abstract:
This study introduces a Bidirectional Encoder Representations from Transformers (BERT)-based machine learning model aimed at improving educational counseling by automating the process of recommending academic strands for students. The framework is designed to streamline and enhance the strand selection process by analyzing students' profiles and suggesting suitable academic paths based on their interests, strengths, and goals. Data was gathered from a sample of 200 grade 10 students, which included personal essays and survey responses relevant to strand alignment. After thorough preprocessing, the text data was tokenized, label-encoded, and input into a fine-tuned BERT model set up for multi-label classification. The model was optimized for balanced accuracy and computational efficiency, featuring a multi-category classification layer with sigmoid activation for independent strand predictions. Performance metrics showed an F1 score of 88%, indicating a well-balanced model with precision at 80% and recall at 100%, demonstrating its effectiveness in providing reliable recommendations while reducing irrelevant strand suggestions. To facilitate practical use, the final deployment phase created a recommendation framework that processes new student data through the trained model and generates personalized academic strand suggestions. This automated recommendation system presents a scalable solution for academic guidance, potentially enhancing student satisfaction and alignment with educational objectives. The study's findings indicate that expanding the data set, integrating additional features, and refining the model iteratively could improve the framework's accuracy and broaden its applicability in various educational contexts.Keywords: tokenized, sigmoid activation, transformer, multi category classification
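A minimal sketch of a BERT multi-label head with per-label sigmoid outputs, as described above. It is not the author's code: the base checkpoint, the strand label set and the decision threshold are assumptions, and in practice the classification head would first be fine-tuned on the labelled student profiles.

```python
# Multi-label strand prediction with independent sigmoid probabilities per strand.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

STRANDS = ["STEM", "ABM", "HUMSS", "GAS"]            # hypothetical academic strand labels
tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(STRANDS),
    problem_type="multi_label_classification",       # trains with BCE-with-logits loss
)

enc = tok("I enjoy biology experiments and solving math problems.",
          return_tensors="pt", truncation=True, padding=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**enc).logits)[0]    # one independent probability per strand
recommended = [s for s, p in zip(STRANDS, probs) if p > 0.5]   # meaningful after fine-tuning
print(recommended)
```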
Procedia PDF Downloads 9
797 Impact of Maternal Nationality on Caesarean Section Rate Variation in a High-income Country
Authors: Saheed Shittu, Lolwa Alansari, Fahed Nattouf, Tawa Olukade, Naji Abdallah, Tamara Alshdafat, Sarra Amdouni
Abstract:
Cesarean sections (CS), a highly regarded surgical intervention for improving fetal-maternal outcomes and an integral part of emergency obstetric services, are not without complications. Although CS has many advantages, it poses significant risks to both mother and child and increases healthcare expenditure in the long run. The escalating global prevalence of CS, coupled with variations in rates among immigrant populations, prompted an inquiry into the correlation between CS rates and the nationalities of women undergoing deliveries at Al-Wakra Hospital (AWH), Qatar's second-largest public maternity hospital. This inquiry was motivated by the notable CS rate of 36%, deemed high in comparison to the 34% recorded across other Hamad Medical Corporation (HMC) maternity divisions. This is Qatar's first comprehensive investigation of Caesarean section rates and nationalities. A retrospective cross-sectional study was conducted, and data for all births delivered in 2019 were retrieved from the hospital's electronic medical records. The CS rate, the crude rate, and the adjusted risks of Caesarean delivery for mothers of each nationality were determined. The common indications for CS were analysed by nationality. The association between nationality and Caesarean rates was examined using binomial logistic regression analysis, with Qatari women as the reference group. The correlation between the CS rate in the country of nationality and the observed CS rate in Qatar was also examined using Pearson's correlation. This study included 4,816 births from 69 different nationalities. CS was performed in 1,767 women, equating to 36.5%. The nationalities with the highest CS rates were Egyptian (49.6%), Lebanese (45.5%), Filipino and Indian (both 42.2%). Qatari women recorded a CS rate of 33.4%. The major indications for elective CS were previous multiple CS (39.9%) and one prior CS where the patient declined the vaginal birth after cesarean (VBAC) option (26.8%). A distinct pattern was noticed: elective CS was predominantly performed on Arab women, whereas emergency CS was common among women of Asian and Sub-Saharan African nationalities. Moreover, a significant correlation was found between the CS rates in Qatar and those in the women's countries of origin. A high CS rate was also linked to instances of previous CS. As a result of these insights, strategic interventions were implemented at the facility to mitigate unwarranted CS, resulting in a notable reduction in the CS rate from 36.5% in 2019 to 34% in 2022 and demonstrating the efficacy of the approach. The focus has now shifted to reducing primary CS rates and facilitating well-informed decisions regarding childbirth methods. Keywords: maternal nationality, caesarean section rate variation, migrants, high-income country
Procedia PDF Downloads 70
796 Conceptualizing Clashing Values in the Field of Media Ethics
Authors: Saadia Izzeldin Malik
Abstract:
A lack of ethics is the crisis of the 21st century. Today’s global world is filled with economic, political, environmental, media/communication, and social crises, all generated by the eroding fabric of ethics and moral values that guide human decisions in all aspects of life. Our global world is guided by liberal western democratic principles and liberal capitalist economic principles that define and reinforce each other. In economic terms, capitalism has turned world economic systems into one marketplace of ideas and products controlled by big multinational corporations that not only determine the conditions and terms of commodity production and commodity exchange between countries, but also transform the political economy of media systems around the globe. The citizen (read: the consumer) today is the target of persuasion by all types of media, at a time when her/his interests should, ethically and in principle, be the most significant factor in the selection of media content. At this juncture of clashing media values - professional and commercial - and widespread ethical lapses of media organizations and media professionals, it is very important to think of a perspective from which to theorize these conflicting values within a broader framework of media ethics. Thus, the aim of this paper is to bring to the center, epistemologically, a perspective on media ethics as a basis for the reconciliation of the clashing values of the media. The paper focuses on conflicting ethical values in the current media debate, namely ownership of media vs. press freedom, the individual right to privacy vs. the public right to know, and global western consumerist values vs. media values. The paper concludes that a framework to reconcile the conflicting values of media ethics should focus on the ‘individual’ journalist and his/her moral development, as well as on maintaining the ethical principles of the media as an institution with a primary social responsibility to the ‘public’ it serves. Keywords: ethics, media, journalism, social responsibility, conflicting values, global
Procedia PDF Downloads 492
795 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to enhance the ability to attain corporate goals more effectively and efficiently at lower cost. This new computing paradigm, cloud computing, has emerged as a powerful tool for the optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this new technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, by using the right middleware, a cloud computing system can execute all the programs a normal computer could run. Potentially, everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments, but also support “interactive user-facing applications” such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm; it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site. Cloud computing gives these companies the option of storing data on someone else’s hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services could include email, office applications, finance, video, audio and data processing. By using a cloud computing system, a company can improve its customer relationship management. A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities to improve agent/customer interactions. This paper attempts first to define cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss the application of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to the use of cloud computing in CRM. The study concludes that a CRM cloud computing platform helps a company track any data, such as orders, discounts, references, competitors and many more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently and at a lower cost, gain competitive advantage. Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
794 Slope Stability Analysis and Evaluation of Road Cut Slope in Case of Goro to Abagada Road, Adama
Authors: Ezedin Geta Seid
Abstract:
Slope failures are among the common geo-environmental natural hazards in the hilly and mountainous terrain of the world, causing damage to human life and destruction of infrastructure. In Ethiopia, the demand for the construction of infrastructure, especially highways and railways, has increased in order to connect development centers. However, the failure of roadside slopes formed in difficult terrain is a major obstacle to this development. As a result, a comprehensive site-specific investigation of destabilizing agents and a suitable selection of slope profiles are needed during design. Hence, this study emphasized the stability analysis and performance evaluation of slope profiles (single slope, multi-slope, and benched slope). The analysis was conducted for static and dynamic loading conditions using the limit equilibrium method (Slide software) and the finite element method (Plaxis software). The analysis results at selected critical sections show that the slope is marginally stable, with FS varying from 1.2 to 1.5 under static conditions, and unstable, with FS below 1, under dynamic conditions. From the comparison of analysis methods, the finite element method provides more valuable information about the failure surface of a slope than limit equilibrium analysis. Performance evaluation of geometric profiles shows that geometric modification provides better and more economical slope stability. Benching provides significant stability for cut slopes (i.e., the use of 2 m and 3 m benches improves the factor of safety by 7.5% and 12%, respectively, relative to a single-slope profile). The method is more effective on steep slopes. Similarly, the use of a multi-slope profile improves the stability of the slope in stratified soil with varied strength. The performance is more significant when it is used in combination with benches. The study also recommends drainage control and slope reinforcement as remedial measures for cut slopes. Keywords: slope failure, slope profile, bench slope, multi slope
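For context, the FS values quoted above follow the standard limit-equilibrium definition (general form, not specific to this study):

\[ FS = \frac{\tau_f}{\tau_{mob}} = \frac{c' + \sigma_n' \tan\phi'}{\tau_{mob}}, \]

where τ_f is the available Mohr-Coulomb shear strength (effective cohesion c', effective normal stress σ'_n, effective friction angle φ') and τ_mob is the shear stress mobilized along the trial slip surface; FS > 1 indicates stability, values near 1 marginal stability, and FS < 1 failure.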
Procedia PDF Downloads 32
793 Empowering South African Female Farmers through Organic Lamb Production: A Cost Analysis Case Study
Authors: J. M. Geyser
Abstract:
Lamb is a popular meat throughout the world, particularly in Europe, the Middle East and Oceania. However, the conventional lamb industry faces challenges related to environmental sustainability, climate change, consumer health and dwindling profit margins. This has stimulated an increasing demand for organic lamb, as it is perceived to improve environmental sustainability and to offer superior quality, taste, and nutritional value; it is also appealing to farmers, including small-scale and female farmers, as it often commands a premium price. Despite its advantages, organic lamb production presents challenges, with a significant hurdle being the high production costs encompassing organic certification, lower stocking rates, higher mortality rates and marketing costs. These costs affect the profitability and competitiveness of organic lamb producers, particularly female and small-scale farmers, who often encounter additional obstacles such as limited access to resources and markets. Therefore, this paper examines the cost of producing organic lamb and its impact on female farmers, and raises the research question: “Is organic lamb production the saving grace for female and small-scale farmers?” The objectives include estimating and comparing the production costs and profitability of organic lamb production with those of conventional lamb production, analyzing influencing factors, and assessing opportunities and challenges for female and small-scale farmers. The hypothesis states that organic lamb production can be a viable and beneficial option for female and small-scale farmers, provided that they can overcome high production costs and access premium markets. The study uses a mixed-method approach, combining qualitative and quantitative data. The qualitative data involve semi-structured interviews with ten female and small-scale farmers engaged in organic lamb production in South Africa. The interviews covered topics such as farm characteristics, practices, cost components, mortality rates, income sources and empowerment indicators. The quantitative data use secondary published information and primary data from a female farmer. The research findings indicate that when a female farmer moves from conventional to organic lamb production, the costs in the first year of organic production exceed those of conventional production by over 100%. This is due to lower stocking rates and higher mortality rates in the organic system. However, costs start decreasing in the second year as stocking rates increase, thanks to manure applications on grazing, and mortality rates fall due to better worm resistance in the herd. In conclusion, this article sheds light on the economic dynamics of organic lamb production, focusing particularly on its impact on female farmers. To empower female farmers and to promote sustainable agricultural practices, it is imperative to understand the cost structures and profitability of organic lamb production. Keywords: cost analysis, empowerment, female farmers, organic lamb production
Procedia PDF Downloads 75
792 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19
Authors: M. Bilal Ishfaq, Adnan N. Qureshi
Abstract:
COVID-19 is among the most infectious diseases of recent years; it was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the world. On 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Being highly contagious, COVID-19 has affected approximately 219 million people worldwide and caused 4.55 million deaths. It has brought the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging. We present a hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets: Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia, and Normal Chest X-ray dataset, 0.9806 on the COVID-19 CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. Our proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes. The results of the proposed model are plausible, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.Keywords: COVID-19, feature engineering, artificial neural networks, radiology images
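A minimal sketch of the feature-hybridization step is shown below, assuming grayscale uint8 chest images and an externally computed CNN embedding. GLCM statistics from scikit-image stand in for the full Haralick set, and mutual-information ranking stands in for mRMR (which additionally penalizes redundancy among the selected features); none of this reproduces the authors' exact pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import mutual_info_classif

def texture_features(img_u8):
    """GLCM texture statistics (a subset of Haralick-style descriptors) for a uint8 image."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def hybrid_features(img_u8, cnn_embedding):
    """Concatenate handcrafted texture features with a CNN embedding vector."""
    return np.concatenate([texture_features(img_u8), cnn_embedding])

def select_top_k(X, y, k=64):
    """Relevance ranking by mutual information (a simple stand-in for mRMR)."""
    scores = mutual_info_classif(X, y, random_state=0)
    return np.argsort(scores)[::-1][:k]
```

In such a pipeline the CNN embedding would typically come from the penultimate layer of a trained network, and the selected feature indices are then passed on to the final classifier.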
Procedia PDF Downloads 75
791 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection is modeled by Schottky emission and Fowler-Nordheim tunneling, migration by field-dependent Poole-Frenkel mobility, and recombination by Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories for charge that makes it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and the lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, and especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
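Two ingredients named above, Schottky injection and field-dependent Poole-Frenkel mobility, can be sketched as follows. Constants and parameter values (barrier height, permittivity, zero-field mobility) are assumed for illustration; the tunneling, Monte Carlo recombination, and boundary-integral Poisson components of the full particle-in-cell model are not shown.

```python
import numpy as np

Q, KB, EPS0 = 1.602e-19, 1.381e-23, 8.854e-12   # C, J/K, F/m
A_RICH = 1.2e6                                   # Richardson constant, A m^-2 K^-2

def schottky_injection(E, phi_b_eV, eps_r, T=300.0):
    """Thermionic injection current density with image-force barrier lowering."""
    dphi = np.sqrt(Q * E / (4.0 * np.pi * eps_r * EPS0))        # barrier lowering (V)
    return A_RICH * T**2 * np.exp(-Q * (phi_b_eV - dphi) / (KB * T))

def poole_frenkel_mobility(E, mu0, eps_r, T=300.0):
    """Field-enhanced mobility: mu = mu0 * exp(beta_PF * sqrt(E) / kT)."""
    beta_pf = np.sqrt(Q**3 / (np.pi * eps_r * EPS0))
    return mu0 * np.exp(beta_pf * np.sqrt(E) / (KB * T))

# Assumed polymer parameters: 1.2 eV barrier, eps_r = 3, zero-field mobility 1e-14 m^2/Vs
E = np.logspace(6, 8, 5)                                        # applied field, V/m
print(schottky_injection(E, phi_b_eV=1.2, eps_r=3.0))
print(poole_frenkel_mobility(E, mu0=1e-14, eps_r=3.0))
```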
Procedia PDF Downloads 354
790 Effect of Cryogenic Pre-stretching on the Room Temperature Tensile Behavior of AZ61 Magnesium Alloy and Dominant Grain Growth Mechanisms During Subsequent Annealing
Authors: Umer Masood Chaudry, Hafiz Muhammad Rehan Tariq, Chung-soo Kim, Tea-sung Jun
Abstract:
This study explored the influence of pre-stretching temperature on the microstructural characteristics and deformation behavior of AZ61 magnesium alloy and its implications for grain growth during subsequent annealing. The AZ61 alloy was stretched to 5% plastic strain along the rolling (RD) and transverse (TD) directions at room temperature (RT) and at cryogenic temperature (−150 °C, CT), followed by annealing at 320 °C for 1 h, to investigate the twinning and dislocation evolution and its consequent effect on flow stress, plastic strain, and strain hardening rate. Compared to the RT-stretched samples, the CT-stretched samples showed a significant improvement in yield stress and strain hardening rate and a moderate reduction in elongation to failure along both RD and TD. The subsequent EBSD analysis revealed an increased fraction of fine {10-12} twins and the nucleation of multiple {10-12} twin variants, caused by higher local stress concentration at the grain boundaries in the CT-stretched samples, as manifested by the kernel average misorientation. This higher twin fraction and the twin-twin interactions contributed to strengthening by restricting the mean free path of dislocations, leading to higher flow stress and strain hardening rate. During annealing of the RT/CT-stretched samples, the residual strain energy and twin boundaries decreased due to static recovery, leading to a coarse-grained, twin-free microstructure. Strain-induced boundary migration (SIBM) was found to be the predominant mechanism governing grain growth during annealing via the movement of high-angle grain boundaries.Keywords: magnesium, twinning, twinning variant selection, EBSD, cryogenic deformation
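Where flow stress and strain hardening rate are compared, the hardening rate is simply the slope of the true stress-strain curve. The short sketch below computes it numerically for two hypothetical Hollomon-type flow curves standing in for RT- and CT-pre-stretched samples; the fit constants are assumed for illustration, not measured values from the study.

```python
import numpy as np

def strain_hardening_rate(true_strain, true_stress):
    """theta = d(sigma)/d(epsilon), evaluated numerically along the flow curve."""
    return np.gradient(true_stress, true_strain)

eps = np.linspace(1e-3, 0.08, 200)
sigma_rt = 420.0 * eps**0.18      # assumed Hollomon fit, RT pre-stretch (MPa)
sigma_ct = 480.0 * eps**0.22      # assumed Hollomon fit, CT pre-stretch (MPa)

theta_rt = strain_hardening_rate(eps, sigma_rt)
theta_ct = strain_hardening_rate(eps, sigma_ct)
print(theta_rt[-1], theta_ct[-1])   # hardening rate near the end of the curve
```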
Procedia PDF Downloads 67
789 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of many materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the action of external forces. In the single-material case, each element of the discretized domain is represented by one of two values of a function: 1 if the element belongs to the structure, or 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous power-function interpolation whose base variable represents a pseudo-density at each point of the domain. For suitable exponent values, the SIMP method penalizes intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not adding variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of computing them. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
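The core of the ordered SIMP idea, mapping several candidate materials onto a single pseudo-density through interval-wise scaled and translated power functions, can be sketched as below. The material data and penalization exponent are assumed for illustration, and the paper's density normalization and cost interpolation are omitted, so this is a simplified reading rather than the authors' exact scheme.

```python
import numpy as np

def ordered_simp_modulus(x, rho, E, p=3.0):
    """Piecewise power interpolation of Young's modulus over ordered materials.
    x: pseudo-density design variable in [rho[0], rho[-1]]
    rho: normalized material densities, sorted ascending
    E: Young's moduli of the candidate materials (same order)
    p: penalization exponent."""
    i = int(np.searchsorted(rho, x, side="right")) - 1
    i = min(max(i, 0), len(rho) - 2)          # clamp to a valid interval
    r0, r1, e0, e1 = rho[i], rho[i + 1], E[i], E[i + 1]
    a = (e0 - e1) / (r0**p - r1**p)           # scaling coefficient on this interval
    b = e0 - a * r0**p                        # translation coefficient
    return a * x**p + b

# Example with void plus two candidate materials (all values assumed)
rho = [0.0, 0.4, 1.0]
E = [1e-9, 70e9, 210e9]
print(ordered_simp_modulus(0.7, rho, E))      # intermediate density is penalized toward the softer material
```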
Procedia PDF Downloads 315
788 The Role of Privatization as a Moderator of the Impact of Non-Institutional Factors on the Performance of the Enterprises in Central and Eastern Europe
Authors: Margerita Topalli
Abstract:
In this paper, we analyze the impact of corruption (business environment, informal payments, and state capture), crime, and tax time on enterprise performance during the economic transition in Central and Eastern Europe, and the role of privatization as a moderator of that impact. We examine this effect by comparing the performance of privatized enterprises and state-owned enterprises, while controlling for various forms of selection bias. The present study is based on firm-level panel data collected by the BEEPS for 27 transition countries over 2002, 2005, 2007, and 2011. In addition to firm characteristics, the BEEPS collects valuable survey information on different forms of corruption, crime, tax time, and firm ownership. We estimate the impact of corruption, crime, and tax time on different performance measures of the enterprise (sales, productivity, employment, labor costs, and material costs), whereby we control for firm ownership, with a special focus on the role of privatization as a moderator. The paper argues that, in general terms, privatization has positive effects on the performance of enterprises during transition, but these effects differ significantly depending on the examined performance measure (sales, productivity, employment, labor costs, and material costs). When privatization is effective, the privatized enterprises show considerable performance improvements, particularly in terms of revenue growth and productivity growth. It also argues that the effects of privatization differ depending on the type of owner (outsider or insider) to whom control is given. The results show that privatization to insider owners has no significant performance effect.Keywords: effects of privatization, enterprise performance, state capture, corruption, firm ownership, economic transition, Central and Eastern Europe
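In regression terms, a moderation effect is the interaction between a non-institutional factor and the privatization indicator. A minimal sketch with statsmodels is shown below; the file name and column names are hypothetical placeholders for a BEEPS-style extract, and the selection-bias corrections used in the study are not reproduced.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel with columns:
# log_sales, corruption, crime, tax_time, privatized (0/1), country, year
df = pd.read_csv("beeps_panel.csv")   # placeholder file name

model = smf.ols(
    "log_sales ~ corruption * privatized + crime + tax_time + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})

# The interaction coefficient captures how privatization moderates the corruption effect
print(model.params["corruption:privatized"])
```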
Procedia PDF Downloads 321
787 Developing a Spatial Transport Model to Determine Optimal Routes When Delivering Unprocessed Milk
Authors: Sunday Nanosi Ndovi, Patrick Albert Chikumba
Abstract:
In Malawi, smallholder dairy farmers transport unprocessed milk to sell at Milk Bulking Groups (MBGs). MBGs store and chill the milk while awaiting collection by processors. The farmers deliver milk using various modes of transportation, such as on foot, by bicycle, and by motorcycle. As a perishable food, milk requires timely transportation to avoid deterioration. In some instances, farmers bypass the nearest MBGs for facilities located further away. Untimely delivery worsens quality and results in rejection at the MBG. Subsequently, these rejections lead to revenue losses for dairy farmers. Therefore, the objective of this study was to optimize routes when transporting milk by selecting the shortest route, using time as a cost attribute, in Geographic Information Systems (GIS). A spatially organized transport system impedes milk deterioration while promoting profitability for dairy farmers. A transportation system was modeled using the Route Analysis and Closest Facility network extensions. The final output was to find the quickest routes and identify the milk facilities nearest to incidents. Face-to-face interviews targeted leaders from all 48 MBGs in the study area and 50 farmers from Namahoya MBG. During the field interviews, coordinates were captured in order to create maps. Subsequently, the maps supported the selection of optimal routes based on the least travel times. The questionnaire targeted 200 respondents; of these, 182 respondents were available. Findings showed that, of the 50 sampled farmers who supplied milk to Namahoya, only 8% were nearest to that facility, while 92% were closest to 9 different MBGs. Delivering milk to the nearest MBGs would minimize travel time and distance by 14.67 hours and 73.37 km, respectively.Keywords: closest facility, milk, route analysis, spatial transport
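The route and closest-facility logic can be illustrated with a tiny travel-time graph. The sketch below uses networkx in place of the GIS Route Analysis and Closest Facility extensions; all node names (including the second MBG) and travel times are assumed for illustration only.

```python
import networkx as nx

# Hypothetical road network: nodes are farms, junctions, and MBGs; edge weights are travel times (minutes)
G = nx.Graph()
G.add_weighted_edges_from([
    ("farm", "junction_a", 12.0),
    ("junction_a", "Namahoya_MBG", 25.0),
    ("farm", "junction_b", 8.0),
    ("junction_b", "Other_MBG", 10.0),      # hypothetical nearer facility
], weight="time")

facilities = ["Namahoya_MBG", "Other_MBG"]

# Closest-facility query: travel time from the farm to each MBG, then pick the minimum
times = {f: nx.shortest_path_length(G, "farm", f, weight="time") for f in facilities}
closest = min(times, key=times.get)
route = nx.shortest_path(G, "farm", closest, weight="time")
print(closest, times[closest], route)
```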
Procedia PDF Downloads 58
786 The Importance of Artificial Intelligence in Various Healthcare Applications
Authors: Joshna Rani S., Ahmadi Banu
Abstract:
Artificial Intelligence (AI) has a significant role to play in the healthcare offerings of the future. It is the essential capability behind the advancement of precision medicine, widely agreed to be a sorely needed development in care. Although early efforts at providing diagnosis and treatment recommendations have proven challenging, we anticipate that AI will ultimately master that area as well. Given the rapid advances in AI for imaging analysis, it seems likely that most radiology and pathology images will eventually be examined by a machine. Speech and text recognition are already used for tasks such as patient communication and the capture of clinical notes, and their use will increase. The greatest challenge to AI in these healthcare domains is not whether the technologies will be capable enough to be useful, but rather ensuring their adoption in daily clinical practice. For widespread adoption to occur, AI systems must be approved by regulators, integrated with EHR systems, standardized to a sufficient degree that similar products work alike, taught to clinicians, paid for by public or private payer organizations, and updated over time in the field. These challenges will ultimately be overcome, but they will take much longer to resolve than it will take for the technologies themselves to mature. Therefore, we expect to see limited use of AI in clinical practice within 5 years and more extensive use within 10 years. It also seems increasingly clear that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients. Over time, human clinicians may move toward tasks and work arrangements that draw on uniquely human abilities such as empathy, persuasion, and big-picture integration. Perhaps the only healthcare providers who will risk their careers over time are those who refuse to work alongside AI.Keywords: artificial intelligence, health care, breast cancer, AI applications
Procedia PDF Downloads 181