Search results for: incidental information processing
11595 Management of Interdependence in Manufacturing Networks
Authors: Atour Taghipour
Abstract:
In the real world, each manufacturing company is an independent business unit. These business units are linked to one another through upstream and downstream relationships. The management of these linkages is called coordination, which can be considered a difficult engineering task. The degree of difficulty depends on the type and nature of the information exchanged between partners, as well as on the structure of the relationship, from mutual, one-to-one relationships to network structures. The manufacturing systems literature comprises a wide variety of coordination methods and approaches. Two main streams of research can be distinguished: centralized versus decentralized coordination. Centralized systems require a high degree of information exchange, which can create difficulties when independent members are unwilling to share information. To address these difficulties, decentralized approaches to coordinating operations planning decisions based on minimal information sharing have been proposed in many academic disciplines. This paper first proposes a framework of analysis for the approaches found in the literature; based on this framework, which captures the similarities between approaches, we categorize the existing work. This classification can serve as a research map for future studies. Our results highlight several opportunities for future research. First, more dynamic and stochastic mechanisms for coordinating the planning of manufacturing units should be developed. Second, in order to exploit the complementarities of approaches proposed by diverse scientific disciplines, we propose integrating coordination techniques. Finally, based on our approach, we propose developing coordination standards that guarantee both the complementarity of these approaches and the freedom of companies to adopt any planning tool. Keywords: network coordination, manufacturing, operations planning, supply chain
Procedia PDF Downloads 282
11594 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study
Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim
Abstract:
In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain large amounts of protein and calcium. Sardina pilchardus scales resulting from processing operations have the potential to be used as a raw material for collagen production. Given this strong expectation from the regional fish industry, upgrading sardine scales is well justified. In addition, political and societal demands for sustainability and environmentally friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-soluble collagen from the scales of sardine, Sardina pilchardus. An experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with an HCl solution or EDTA, and the last part establishes the optimal conditions for isolating collagen from fish scales by solvent extraction. The step from laboratory scale to pilot scale is a critical stage in technological development. In this study, the optimal deproteinization conditions validated at laboratory scale were applied in a pilot-scale procedure. Deproteinization of fish scales was then demonstrated at pilot scale (2 kg scales, 20 L NaOH), resulting in a protein content of 0.2 mg/ml and a hydroxyproline content of 2.11 mg/l. These results indicate that the pilot-scale process performs similarly to the laboratory-scale one. Keywords: deproteinization, pilot scale, scale, sardine pilchardus
Procedia PDF Downloads 447
11593 The Evolution of the Israel Defence Forces’ Information Operations: A Case Study of the Israel Defence Forces' Activities in the Information Domain 2006–2014
Authors: Teemu Saressalo
Abstract:
This article examines the evolution of the Israel Defence Forces’ information operation activities during an eight-year timespan from the 2006 war with Hezbollah to more recent operations such as Pillar of Defence and Protective Edge. To this end, the case study will show a change in the Israel Defence Forces’ activities in the information domain. In the 2006 war with Hezbollah in Lebanon, Israel inflicted enormous damage on the Lebanese infrastructure, leaving more than 1,200 people dead and 4,400 injured. Casualties among Hezbollah, Israel’s main adversary, were estimated to range from 250 to 700 fighters. Damage to the Lebanese infrastructure was estimated at over USD 2.5bn, with almost 2,000 houses and buildings damaged and destroyed. Even this amount of destruction did not force Hezbollah to yield and while both sides were claiming victory in the war, Israel paid a heavier price in political backlashes and loss of reputation, mainly due to failures in the media and the way in which the war was portrayed and perceived in Israel and abroad. Much of this can be credited to Hezbollah’s efficient use of the media, and Israel’s failure to do so. Israel managed the next conflict it was engaged in completely differently – it had learnt its lessons and built up new ways to counter its adversary’s propaganda and media operations. In Operation Cast Lead at the turn of 2009, Hamas, Israel’s adversary and Gaza’s dominating faction, was not able to utilize the media in the same way that Hezbollah had. By creating a virtual and physical barrier around the Gaza Strip, Israel almost totally denied its adversary access to the worldwide media, and by restricting the movement of journalists in the area, Israel could let its voice be heard above all. The operation Cast Lead began with a deception operation, which caught Hamas totally off guard. The 21-day campaign left the Gaza Strip devastated, but did not cause as much protest in Israel during the operation as the 2006 war did, mainly due to almost total Israeli dominance in the information dimension. The most important outcome from the Israeli perspective was the fact that Operation Cast Lead was assessed to be a success and the operation enjoyed domestic support along with support from many western nations, which had condemned Israeli actions in the 2006 war. Later conflicts have shown the same tendency towards virtually total dominance in the information domain, which has had an impact on target audiences across the world. Thus, it is clear that well-planned and conducted information operations are able to shape public opinion and influence decision-makers, although Israel might have been outpaced by its rivals.Keywords: Hamas, Hezbollah, information operations, Israel Defence Forces
Procedia PDF Downloads 237
11592 Abandoning 'One-Time' Optional Information Literacy Workshops for Year 1 Medical Students and Gearing towards an 'Embedded Librarianship' Approach
Authors: R. L. David, E. C. P. Tan, M. A. Ferenczi
Abstract:
This study aimed to investigate the effect of a 'one-time' optional Information Literacy (IL) workshop on Year 1 medical students' literature search, writing, and citation management skills, as directed by a customized five-year IL framework developed for LKC Medicine students. At the end of the IL workshop, students on the whole rated finding, citing, and using information from sources as 'somewhat difficult'. The study method is experimental, using a standardized IL test to examine the cohort effect of a 'one-time' optional IL workshop on Year 1 students (experimental group) in comparison to Year 2 students (control group). Test scores from both groups were compared and analyzed using mean scores and one-way analysis of variance (ANOVA). Unexpectedly, there were no statistically significant differences between group means as determined by one-way ANOVA (F₁,₁₉₃ = 3.37, p = 0.068, ηp² = 0.017). The challenges and shortfalls posed by 'one-time' interventions prompted a rich discussion on adopting an 'embedded librarianship' approach, which shifts the medical librarians' role into the curriculum and uses Team-Based Learning to teach IL skills to medical students. The customized five-year IL framework developed for LKC Medicine students thus becomes a useful librarian-faculty model for embedding and bringing IL into the classroom. Keywords: information literacy, 'one-time' interventions, medical students, standardized tests, embedded librarianship, curriculum, medical librarians
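A minimal sketch of the reported between-cohort comparison, assuming two arrays of standardized IL test scores (one per cohort); the score arrays below are invented placeholders, not the study's data:

```python
# Hypothetical illustration of a one-way ANOVA between two cohorts' IL test scores.
# The score arrays are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
year1_scores = rng.normal(loc=62, scale=10, size=100)  # experimental group (workshop)
year2_scores = rng.normal(loc=65, scale=10, size=95)   # control group

f_stat, p_value = stats.f_oneway(year1_scores, year2_scores)

# Partial eta squared for a one-way design: SS_between / SS_total
all_scores = np.concatenate([year1_scores, year2_scores])
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (year1_scores, year2_scores))
ss_total = float(np.sum((all_scores - grand_mean) ** 2))
eta_p2 = ss_between / ss_total

print(f"F = {f_stat:.2f}, p = {p_value:.3f}, partial eta^2 = {eta_p2:.3f}")
```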
Procedia PDF Downloads 113
11591 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu
Authors: Ammarah Irum, Muhammad Ali Tahir
Abstract:
Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well suited to text-based applications in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support dataset and the IMDB Urdu movie review dataset using pre-trained Urdu word embeddings suitable for document-level sentiment analysis. The results of these techniques were evaluated, and our proposed model outperforms all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN outperformed the baseline deep learning models and achieved 83%, 79%, 83%, and 94% accuracy on the small, medium, and large IMDB Urdu movie review datasets and the Urdu Customer Support dataset, respectively. Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language
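A minimal Keras sketch of a BiLSTM feeding a single convolutional layer with several filter widths in parallel, in the spirit of the BiLSTM-SLMFCNN described above; the vocabulary size, embedding dimension, filter widths, and other hyperparameters are illustrative assumptions, not the authors' settings:

```python
# Illustrative BiLSTM + single-layer multi-filter CNN for binary document sentiment.
# Hyperparameters are assumed for illustration, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 50_000, 300, 400  # assumed values

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)   # pre-trained Urdu embeddings could be loaded here
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# Single convolutional layer with several filter widths applied in parallel.
branches = []
for width in (3, 4, 5):
    c = layers.Conv1D(filters=100, kernel_size=width, activation="relu")(x)
    branches.append(layers.GlobalMaxPooling1D()(c))

x = layers.Concatenate()(branches)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)     # positive vs. negative review

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```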
Procedia PDF Downloads 72
11590 Geospatial Information for Smart City Development
Authors: Simangele Dlamini
Abstract:
Smart city development is seen as a way of facing the challenges brought about by the growing urban population the world over. Research indicates that cities have a role to play in combating urban challenges like crime, waste disposal, greenhouse gas emissions, and resource efficiency. These solutions should be such that they do not make city management less sustainable but should be solutions-driven, cost and resource-efficient, and smart. This study explores opportunities on how the City of Johannesburg, South Africa, can use Geographic Information Systems, Big Data and the Internet of Things (IoT) in identifying opportune areas to initiate smart city initiatives such as smart safety, smart utilities, smart mobility, and smart infrastructure in an integrated manner. The study will combine Big Data, using real-time data sources to identify hotspot areas that will benefit from ICT interventions. The GIS intervention will assist the city in avoiding a silo approach in its smart city development initiatives, an approach that has led to the failure of smart city development in other countries.Keywords: smart cities, internet of things, geographic information systems, johannesburg
Procedia PDF Downloads 149
11589 Quantifying Stability of Online Communities and Its Impact on Disinformation
Authors: Victor Chomel, Maziyar Panahi, David Chavalarias
Abstract:
Misinformation has become an increasingly worrying presence on social media, and propagation patterns are closely linked to the structure of communities. This study proposes a method of community analysis based on a combination of centrality indicators for the network and its main communities. The objective is to establish a link between the stability of communities over time, the social ascension of their members within them, and the propagation of information in the community. To this end, data from the debates about global warming and from political communities on Twitter were collected, and several tens of millions of tweets and retweets helped us better understand the structure of these communities. Quantifying this stability allows the study of the propagation of information of any kind, including disinformation. Our results indicate that the communities that are most stable over time are those that enable the establishment of nodes capturing a large part of the information and broadcasting their opinions. Conversely, communities with high turnover and social ascension only stabilize strongly in the face of adversity and external events, but seem to offer a greater diversity of opinions most of the time. Keywords: community analysis, disinformation, misinformation, Twitter
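A minimal sketch of one way to quantify community stability across time windows, assuming retweet graphs have already been built per window; the matching rule (best Jaccard overlap of memberships between consecutive windows) is an illustrative choice, not necessarily the measure used by the authors:

```python
# Illustrative stability measure: detect communities in consecutive time windows
# and score each community by its best Jaccard overlap in the next window.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def window_stability(g_t: nx.Graph, g_next: nx.Graph) -> float:
    comms_t = [set(c) for c in greedy_modularity_communities(g_t)]
    comms_next = [set(c) for c in greedy_modularity_communities(g_next)]
    # Average, over communities at time t, of the best overlap found at t+1.
    scores = [max(jaccard(c, d) for d in comms_next) for c in comms_t]
    return sum(scores) / len(scores)

# Toy example with two small graphs standing in for consecutive retweet windows.
g1 = nx.karate_club_graph()
g2 = g1.copy()
g2.remove_nodes_from([0, 1])  # simulate membership turnover
print(f"stability between windows: {window_stability(g1, g2):.2f}")
```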
Procedia PDF Downloads 140
11588 Development of Innovative Islamic Web Applications
Authors: Farrukh Shahzad
Abstract:
Rich Islamic resources related to religious texts, the Islamic sciences, and history are widely available in print and in electronic format online. However, most of these works are available only in Arabic. In this research, an attempt is made to use these resources to create interactive web applications in Arabic, English, and other languages. The system applies pattern recognition, knowledge management, data mining, information retrieval and management, indexing, storage, and data-analysis techniques to parse, store, convert, and manage information from authentic Arabic resources. These interactive web apps provide smart multilingual search, tree-based search, and on-demand information matching and linking. In this paper, we provide details of the application architecture, design, implementation, and technologies employed. We also present a summary of the web applications already developed and include some screenshots from the corresponding websites. These web applications provide innovative online learning systems (e-learning and computer-based education). Keywords: Islamic resources, Muslim scholars, hadith, narrators, history, fiqh
Procedia PDF Downloads 283
11587 StockTwits Sentiment Analysis on Stock Price Prediction
Authors: Min Chen, Rubi Gupta
Abstract:
Understanding and predicting stock market movements is a challenging problem. Stock markets are believed to be partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special-character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bags of words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify the tweets' sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time-series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a duration of nine months demonstrate the effectiveness of our study in improving prediction accuracy. Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing
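A minimal sketch of the featurization, classification, and correlation steps described above, with tiny invented example tweets and price changes standing in for the real StockTwits and market data:

```python
# Illustrative pipeline: TF-IDF features -> bullish/bearish classifier -> daily
# sentiment aggregated and correlated with price movement. All data are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from scipy.stats import pearsonr

train_tweets = ["AAPL to the moon, strong earnings", "selling everything, this will crash",
                "great quarter, buying more", "terrible guidance, going short"]
train_labels = [1, 0, 1, 0]  # 1 = bullish, 0 = bearish

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
                    LogisticRegression())
clf.fit(train_tweets, train_labels)

# Aggregate predicted sentiment per day (here: one list of tweets per day).
days = [
    ["buy the dip", "strong earnings ahead"],
    ["going short", "this will crash hard"],
    ["great quarter results", "buying more shares"],
    ["terrible guidance today"],
]
daily_sentiment = [np.mean(clf.predict(tweets)) for tweets in days]
daily_price_change = [0.8, -1.2, 1.1, -0.5]  # invented % moves for the same days

r, p = pearsonr(daily_sentiment, daily_price_change)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")
```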
Procedia PDF Downloads 156
11586 Detecting Indigenous Languages: A System for Maya Text Profiling and Machine Learning Classification Techniques
Authors: Alejandro Molina-Villegas, Silvia Fernández-Sabido, Eduardo Mendoza-Vargas, Fátima Miranda-Pestaña
Abstract:
The automatic detection of indigenous languages in digital texts is essential to promote their inclusion in digital media. Underrepresented languages, such as Maya, are often excluded from language detection tools like Google’s language-detection library, LANGDETECT. This study addresses these limitations by developing a hybrid language detection solution that accurately distinguishes Maya (YUA) from Spanish (ES). Two strategies are employed: the first focuses on creating a profile for the Maya language within the LANGDETECT library, while the second involves training a Naive Bayes classification model with two categories, YUA and ES. The process includes comprehensive data preprocessing steps, such as cleaning, normalization, tokenization, and n-gram counting, applied to text samples collected from various sources, including articles from La Jornada Maya, a major newspaper in Mexico and the only media outlet that includes a Maya section. After the training phase, a portion of the data is used to create the YUA profile within LANGDETECT, which achieves an accuracy rate above 95% in identifying the Maya language during testing. Additionally, the Naive Bayes classifier, trained and tested on the same database, achieves an accuracy close to 98% in distinguishing between Maya and Spanish, with further validation through F1 score, recall, and logarithmic scoring, without signs of overfitting. This strategy, which combines the LANGDETECT profile with a Naive Bayes model, highlights an adaptable framework that can be extended to other underrepresented languages in future research. This fills a gap in Natural Language Processing and supports the preservation and revitalization of these languages.Keywords: indigenous languages, language detection, Maya language, Naive Bayes classifier, natural language processing, low-resource languages
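A minimal sketch of the second strategy described above, a two-class Naive Bayes classifier over character n-grams; the tiny example sentences are invented placeholders, not drawn from the La Jornada Maya corpus:

```python
# Illustrative YUA-vs-ES classifier: character n-gram counts + multinomial Naive Bayes.
# Training sentences are assumed examples; a real system would use a large corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "in k'aaba'e' Juan",                          # YUA (assumed examples)
    "bix a beel",                                 # YUA
    "ma'alob k'iin ti' tech",                     # YUA
    "buenos días a todos",                        # ES
    "el periódico publica una sección en maya",   # ES
    "¿cómo estás hoy?",                           # ES
]
labels = ["YUA", "YUA", "YUA", "ES", "ES", "ES"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4), lowercase=True),
    MultinomialNB(),
)
model.fit(texts, labels)

print(model.predict(["ma'alob áak'ab", "gracias por la información"]))
```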
Procedia PDF Downloads 18
11585 Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve
Authors: Robson da Cunha Santos, Caio Cezar R. Bonifacio, Diego Mureb Quesada, Gerson Gomes Cunha
Abstract:
This work relates to the safety of oil wells and to environmental preservation, subjects that require great attention and commitment from oil companies and from the people who work with this equipment, from the drilling of a well until its abandonment, in order to safeguard the environment and prevent possible damage. The main objective of the project was to compare information found in books, articles, and publications with information gathered during technical visits to operational bases of Petrobras. After these visits, information on current methods of use and management, which was not previously available, was made accessible to a general audience. As a result, a considerable flow of incorrect and out-of-date information was observed, affecting not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly, and use of downhole safety valves (DHSVs), several points that were previously taken as correct, customary practice turned out to be uncertain or outdated. For example, the literature states that the valve is installed 30 meters below the seabed (mud line); in practice, the installation depth should vary according to the ideal depth for avoiding the zones most prone to hydrate formation, given the local temperature and pressure. Regarding valves with a nitrogen chamber, the books link their use to water depths of 700 meters or more, whereas in Brazilian exploratory fields they are used from 600 meters of water depth. The valves used in Brazilian fields can be inserted into the production column and are self-equalizing, but the use of screwed, self-equalizing valves in the production column is predominant. Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life, and do not restrict fluid flow. It follows that, by confronting theoretical information from the literature with the practices actually used in the field, the present project is important and relevant. It will serve as a source of updated and consistent information connecting the academic environment with real exploratory situations, and it contributes precise, easy-to-understand information for future research and academic improvement. Keywords: down hole safety valve, security devices, installation, oil-wells
Procedia PDF Downloads 271
11584 Accessibility Assessment of School Facilities Using Geospatial Technologies: A Case Study of District Sheikhupura
Authors: Hira Jabbar
Abstract:
Education is vital for the inclusive growth of an economy and a critical contributor to investment in human capital. Like other developing countries, Pakistan faces enormous challenges regarding the provision of public facilities, including improper infrastructure planning, an accelerating population growth rate, and poor accessibility. Rapid advancement and innovation in GIS and remote sensing (RS) techniques have proved to be useful tools for better planning and decision-making to meet these challenges. The present study therefore uses GIS and RS techniques to investigate the spatial distribution of school facilities, identify settlements with served and unserved populations, find potential areas for new schools based on population, and develop an accessibility index to evaluate which schools are most accessible. For this purpose, high-resolution WorldView imagery was used to map the road network, settlements, and school facilities and to generate school accessibility for each education level. Landsat 8 imagery was used to extract the built-up area by applying pre- and post-processing models, and LandScan 2015 was used to analyze population statistics. Service area analysis was performed using the Network Analyst extension in ArcGIS 10.3, and the results were evaluated for served and underserved areas and populations. An accessibility tool was used to evaluate a set of potential destinations and determine which is most accessible given the population distribution. The findings of the study may help town planners and education authorities understand the existing patterns of school facilities. It is concluded that GIS and remote sensing can be used effectively in urban transport and facility planning. Keywords: accessibility, geographic information system, landscan, worldview
Procedia PDF Downloads 325
11583 Paradigm Shift of Leadership: Leaders in Information Technology
Authors: Mustafa Hyder, Khalid Mahmood Iraqi, Sameen Mustafa
Abstract:
They say that if the leader limps, all the others will start limping too. A dynamic leadership at all levels within the IT community is therefore critical to the success of an organization. This paper is an attempt to study the relationship between information technology (IT) and leadership and to assess its relevance in today's fast-paced, high-tech, globalized environment. The paper strives to identify the essential qualities and knowledge needed by today's IT leader, in contrast to the essential characteristics common to all leaders: past, present, and future. Keywords: leadership, autocratic leaders, characteristics of IT leaders, skills of IT professionals, IT leadership
Procedia PDF Downloads 350
11582 A Computer-Aided System for Tooth Shade Matching
Authors: Zuhal Kurt, Meral Kurt, Bilge T. Bal, Kemal Ozkan
Abstract:
Shade matching and reproduction is the most important element of success in prosthetic dentistry. Until recently, shade matching was carried out by the dentist's visual perception with the help of shade guides. Since many factors influence visual perception, tooth shade matching using visual devices (shade guides) is highly subjective and inconsistent. The subjective nature of this process has led to the development of instrumental devices. Nowadays, colorimeters, spectrophotometers, spectroradiometers, and digital image analysis systems are used for instrumental shade selection. Instrumental devices have the advantage that readings are quantifiable and can be obtained more rapidly, simply, objectively, and precisely. However, these devices have noticeable drawbacks. For example, the translucent structure and irregular surfaces of teeth lead to measurement defects with these devices, and inconsistencies may arise between the results acquired by devices with different measurement principles. It is therefore necessary to search for new methods for the dental shade matching process. The digital camera, a computer-aided device, has developed rapidly, and advances in image processing and computing have resulted in the extensive use of digital cameras for color imaging. This approach is much cheaper than the use of traditional contact-type color measurement devices. Digital cameras can take the place of contact-type instruments for shade selection and overcome their disadvantages. Images taken of teeth show the morphology and color texture of the teeth. In recent decades, a method was proposed to compare the color of shade tabs captured by a digital camera using color features; that work showed that visual and computer-aided shade matching systems should be used in combination. More recent feature extraction methods are based on shape description and do not use color information. However, color is an essential property for depicting and extracting features from objects in the world around us. When local feature descriptors are extended with color information by concatenating a color descriptor with a shape descriptor, the resulting descriptor is effective for visual object recognition and classification tasks. The color descriptor to be used in combination with a shape descriptor does not need to contain any spatial information, which leads us to use local histograms. This local color histogram approach remains reliable under photometric changes, geometric changes, and variations in image quality. Therefore, color-based local feature extraction methods are used to extract features, and the Scale Invariant Feature Transform (SIFT) descriptor is used for shape description in the proposed method. After combining these descriptors, the state-of-the-art descriptor known as Color-SIFT is used in this study. Finally, the image feature vectors obtained from the quantization algorithm are fed to classifiers such as Nearest Neighbor (KNN), Naive Bayes, or Support Vector Machines (SVM) to determine the label(s) of the visual object category or match. In this study, SVMs are used as classifiers for color determination and shade matching, and the experimental results of this method are compared with other recent studies. It is concluded that the proposed method is a notable advance in computer-aided tooth shade determination. Keywords: classifiers, color determination, computer-aided system, tooth shade matching, feature extraction
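A minimal sketch of the classification stage described above, assuming local color histograms have already been extracted and quantized into fixed-length feature vectors for each tooth image; the data here are random placeholders, and the full Color-SIFT pipeline is not reproduced:

```python
# Illustrative shade classification: fixed-length color-feature vectors -> SVM.
# Feature vectors are random placeholders standing in for quantized Color-SIFT histograms.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n_images, n_features, n_shades = 200, 128, 4    # assumed sizes
X = rng.random((n_images, n_features))          # stand-in for per-image descriptors
y = rng.integers(0, n_shades, size=n_images)    # stand-in shade-tab labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"accuracy on placeholder data: {accuracy_score(y_test, pred):.2f}")
```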
Procedia PDF Downloads 445
11581 Annotation Ontology for Semantic Web Development
Authors: Hadeel Al Obaidy, Amani Al Heela
Abstract:
The main purpose of this paper is to examine the concept of the semantic web and the role that ontology and semantic annotation play in the development of semantic web services. The paper focuses on semantic web infrastructure, illustrating how ontology and annotation work together to provide the capabilities for building content semantically. To improve software productivity and quality, the paper applies approaches, notations, and techniques offered by software engineering. It proposes a conceptual model for developing semantic web services for the infrastructure of a web information retrieval system for digital libraries. The developed system uses ontology and annotation to build a knowledge-based system that defines and links the meaning of web content in order to retrieve information for users' queries. The results are more relevant thanks to keyword and ontology rule expansion, which satisfies the requested information more accurately. The accuracy of the results is further enhanced because the semantically analyzed query works with the conceptual architecture of the proposed system. Keywords: semantic web services, software engineering, semantic library, knowledge representation, ontology
Procedia PDF Downloads 173
11580 Patient Tracking Challenges During Disasters and Emergencies
Authors: Mohammad H. Yarmohammadian, Reza Safdari, Mahmoud Keyvanara, Nahid Tavakoli
Abstract:
One of the greatest challenges in disasters and emergencies is patient tracking. The concept of tracking has different denotations: one meaning refers to tracking patients' physical locations, and the other refers to tracking patients' medical needs during emergency services. The main goal of patient tracking is to provide patient safety during disasters and emergencies and to manage the flow of patients and information across different locations. In most cases, there are no sufficient and accurate data regarding the number of injured, their medical conditions, or their accommodation and transfer. The objective of the present study is to survey the patient tracking issue in natural disasters and emergencies. Methods: This was a narrative study in which the population consisted of e-journals and electronic databases such as PubMed, ProQuest, ScienceDirect, Elsevier, etc. Data were gathered with an extraction form, and all data were analyzed via content analysis. Results: Many countries have no appropriate and rapid method for tracking patients and transferring victims after an incident occurs. The absence of reliable data on patients' transfer and accommodation, even in the initial hours and days after a disaster, and the lack of coordination for appropriate resource allocation, pose challenges for evaluating needs and services. Currently, most emergency services are based on paper systems, which do not perform adequately in large disasters and incidents, and this causes information loss. Conclusion: A patient tracking system should update the location of patients or evacuees and the information related to their condition. Patients' information should be accessible to authorized users so that treatment, accommodation, and transfer can continue. It should also include timely information on patients' locations as soon as they arrive at or leave a site, in such a way that health care professionals are able to provide proper medical treatment. Keywords: patient tracking, challenges, disaster, emergency
Procedia PDF Downloads 304
11579 Additive Manufacturing – Application to Next Generation Structured Packing (SpiroPak)
Authors: Biao Sun, Tejas Bhatelia, Vishnu Pareek, Ranjeet Utikar, Moses Tadé
Abstract:
Additive manufacturing (AM), commonly known as 3D printing, with the continuing advances in parallel processing and computational modeling, has created a paradigm shift (with significant radical thinking) in the design and operation of chemical processing plants, especially LNG plants. With the rising energy demands, environmental pressures, and economic challenges, there is a continuing industrial need for disruptive technologies such as AM, which possess capabilities that can drastically reduce the cost of manufacturing and operations of chemical processing plants in the future. However, the continuing challenge for 3D printing is its lack of adaptability in re-designing the process plant equipment coupled with the non-existent theory or models that could assist in selecting the optimal candidates out of the countless potential fabrications that are possible using AM. One of the most common packings used in the LNG process is structured packing in the packed column (which is a unit operation) in the process. In this work, we present an example of an optimum strategy for the application of AM to this important unit operation. Packed columns use a packing material through which the gas phase passes and comes into contact with the liquid phase flowing over the packing, typically performing the necessary mass transfer to enrich the products, etc. Structured packing consists of stacks of corrugated sheets, typically inclined between 40-70° from the plane. Computational Fluid Dynamics (CFD) was used to test and model various geometries to study the governing hydrodynamic characteristics. The results demonstrate that the costly iterative experimental process can be minimized. Furthermore, they also improve the understanding of the fundamental physics of the system at the multiscale level. SpiroPak, patented by Curtin University, represents an innovative structured packing solution currently at a technology readiness level (TRL) of 5~6. This packing exhibits remarkable characteristics, offering a substantial increase in surface area while significantly enhancing hydrodynamic and mass transfer performance. Recent studies have revealed that SpiroPak can reduce pressure drop by 50~70% compared to commonly used commercial packings, and it can achieve 20~50% greater mass transfer efficiency (particularly in CO2 absorption applications). The implementation of SpiroPak has the potential to reduce the overall size of columns and decrease power consumption, resulting in cost savings for both capital expenditure (CAPEX) and operational expenditure (OPEX) when applied to retrofitting existing systems or incorporated into new processes. Furthermore, pilot to large-scale tests is currently underway to further advance and refine this technology.Keywords: Additive Manufacturing (AM), 3D printing, Computational Fluid Dynamics (CFD, structured packing (SpiroPak)
Procedia PDF Downloads 88
11578 IT/IS Organisation Design in the Digital Age: A Literature Review
Authors: Dominik Krimpmann
Abstract:
Information technology and information systems are currently at a tipping point. The digital age fundamentally transforms the way a large number of industries work, and the lines between business and technology blur. Researchers have acknowledged that this is the time in which the IT/IS organisation needs to re-strategise itself. In this paper, the author provides a structured review of the IS and organisation design literature addressing the question of how the digital age changes the design categories of an IT/IS organisation design. The findings show that most papers analyse only single aspects of either IT/IS-relevant information or generic organisation design elements, but miss a holistic 'big picture' of an IT/IS organisation design. This paper creates a holistic IT/IS organisation design framework bringing together the IS research strand, the digital strand, and the generic organisation design strand. The research identified four IT/IS organisation design categories (strategy, structure, processes and people) and discusses the importance of two additional categories (sourcing and governance). The author's findings point to a first anchor point from which further research needs to be conducted to develop a holistic IT/IS organisation design framework. Keywords: IT/IS strategy, IT/IS organisation design, digital age, organisational effectiveness, literature review
Procedia PDF Downloads 409
11577 A Three-modal Authentication Method for Industrial Robots
Authors: Luo Jiaoyang, Yu Hongyang
Abstract:
In this paper, we explore a method that can be used in the working environment of intelligent industrial robots to confirm the identity of operators, so as to ensure that the robot executes instructions only in a sufficiently safe setting. The approach uses three information modalities, namely visible light, depth, and sound. We explored a variety of fusion modes for the three modalities and finally used a joint feature learning method, which improves the performance of the model under noise compared with the single-modal case. Even at the maximum noise level in the experiment, the system maintains an accuracy rate of more than 90%. Keywords: multimodal, kinect, machine learning, distance image
Procedia PDF Downloads 79
11576 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study
Authors: Kaaryn Cater
Abstract:
People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Compassionate people have a highly reactive nervous system and are more impacted by positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe new and novel situations, and a propensity to become overwhelmed when over-stimulated. Several educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first quantitative study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). Inclusion criteria were current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the statistical package for the social sciences (SPSS)26, and descriptive statistics found normal distribution. T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis and the posthoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that compassionate students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity
Procedia PDF Downloads 65
11575 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site, to be used in future planning and development (P&D) or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult with traditional techniques and methodologies; they are also time-consuming and labor-intensive, and they offer less precision and limited data. In comparison, the advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experiment, an aerial photogrammetry technique is used in which a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is built. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a Digital Elevation Model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a Digital Terrain Model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the area surveyed. In conclusion, we compared the processed data with exact measurements taken on site; the error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions. Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
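A minimal sketch of the ground sampling distance (GSD) calculation used when planning such flights; the camera parameters and flight altitude below are assumed example values, not the ones used in this work:

```python
# Illustrative GSD computation: GSD = (sensor_width * altitude) / (focal_length * image_width).
# Camera parameters and altitude are assumed example values.

def ground_sampling_distance_cm(sensor_width_mm: float, focal_length_mm: float,
                                image_width_px: int, altitude_m: float) -> float:
    """Return the ground sampling distance in centimetres per pixel."""
    gsd_m_per_px = (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)
    return gsd_m_per_px * 100.0

# Example: a 13.2 mm wide sensor, 8.8 mm lens, 5472 px image width, flown at 100 m.
print(f"GSD = {ground_sampling_distance_cm(13.2, 8.8, 5472, 100.0):.2f} cm/pixel")
```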
Procedia PDF Downloads 33
11574 The Voluntary Review Decision of Quarterly Consolidated Financial Statements in Emerging Market: Evidence from Taiwan
Authors: Shuofen Hsu, Ya-Yi Chao, Chao-Wei Li
Abstract:
This paper investigates the factors that determine whether firms have their quarterly consolidated financial statements voluntarily reviewed by auditors. To promote information transparency, the Financial Supervisory Commission of the Executive Yuan in Taiwan required Taiwanese listed companies to announce their first- and third-quarter consolidated financial statements from 2008 to 2012, but did not require that these statements be reviewed by auditors. This is a very unusual practice in an emerging market, especially in Taiwan, and the data from this period are well suited to researching the determinants of firms' voluntary review decisions in emerging markets. We collected the auditors' reports of Taiwanese listed companies for each company and each year from 2008 to 2012 as our research sample. We use a probit model to test and analyze the determinants of the voluntary review decision for the first- and third-quarter consolidated financial statements. Our empirical results show that firms whose first- and third-quarter consolidated financial statements are voluntarily reviewed by auditors have a better information transparency ranking, higher audit quality, and better corporate governance, suggesting that voluntary review is a good signal of a firm's better information and corporate governance quality. Keywords: voluntary review, information transparency, audit quality, quarterly consolidated financial statements
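A minimal sketch of the kind of probit estimation described above, with a small invented data frame in place of the Taiwanese filing data; the variable names are illustrative assumptions, not the study's measures:

```python
# Illustrative probit regression: voluntary review (0/1) on firm characteristics.
# The data frame and variable names are invented placeholders, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "transparency_rank": rng.normal(size=n),    # assumed proxy variables
    "big4_auditor": rng.integers(0, 2, n),
    "board_independence": rng.normal(size=n),
})
# Simulate the binary decision so the example runs end to end.
latent = 0.8 * df["transparency_rank"] + 0.5 * df["big4_auditor"] + rng.normal(size=n)
df["voluntary_review"] = (latent > 0).astype(int)

X = sm.add_constant(df[["transparency_rank", "big4_auditor", "board_independence"]])
model = sm.Probit(df["voluntary_review"], X).fit(disp=False)
print(model.summary().tables[1])
```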
Procedia PDF Downloads 253
11573 Physicochemical Properties and Thermal Inactivation of Polyphenol Oxidase of African Bush Mango (Irvingia Gabonensis) Fruit
Authors: Catherine Joke Adeseko
Abstract:
Enzymatic browning is an economically important disorder that degrades organoleptic properties and discourages consumers from purchasing fresh fruit and vegetables, so the prevention and control of enzymatic browning in fruit and fruit products is imperative. This study therefore investigated the catalytic role of polyphenol oxidase (PPO) in the browning of African bush mango (Irvingia gabonensis) fruit peel and pulp. PPO was isolated and purified, and its physicochemical properties, such as the effects of pH (with SDS) and temperature, were evaluated, together with thermodynamic studies, which led to the thermal inactivation of purified PPO at 80 °C. The pH and temperature optima of PPO were found to be 7.0 and 50 °C, respectively. The activity of PPO increased gradually with pH; the enzyme exhibited its highest activity at neutral pH 7.0, while inhibition was observed in the acidic region at pH 2.0. The presence of SDS at pH 5.0 and below was found to inhibit the activity of PPO from the peel and pulp of I. gabonensis. The average values of enthalpy (ΔH), entropy (ΔS), and Gibbs free energy (ΔG) obtained at 20 min of incubation over 30-80 °C were, respectively, 39.93 kJ.mol-1, 431.57 J.mol-1.K-1, and -107.99 kJ.mol-1 for peel PPO, and 37.92 kJ.mol-1, -442.51 J.mol-1.K-1, and -107.22 kJ.mol-1 for pulp PPO. Thermal inactivation of PPO from I. gabonensis, assayed with catechol, showed a reduction in catalytic activity as the temperature and duration of heating increased, reflected by an increase in the k value. The half-life of PPO (t1/2) decreased as the incubation temperature increased, owing to the instability of the enzyme at high temperatures, and was higher in pulp than in peel. Both D and Z values decreased with increasing temperature. The information from this study suggests processing parameters for controlling PPO in potential industrial applications of I. gabonensis fruit, in order to prolong the shelf life of this fruit for maximum utilization. Keywords: enzymatic, browning, characterization, activity
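A minimal sketch of the standard first-order thermal inactivation calculations behind the reported t1/2 and D values (k from the slope of ln(residual activity) versus time, t1/2 = ln 2 / k, D = ln 10 / k); the activity readings below are invented placeholders, not the paper's measurements:

```python
# Illustrative first-order inactivation kinetics at one temperature.
# Residual-activity data are invented placeholders.
import numpy as np

time_min = np.array([0, 5, 10, 15, 20])                        # heating time at a fixed temperature
residual_activity = np.array([1.00, 0.72, 0.50, 0.37, 0.26])   # A/A0, invented

# Fit ln(A/A0) = -k * t  ->  k is the negative slope of the linear fit.
slope, _intercept = np.polyfit(time_min, np.log(residual_activity), 1)
k = -slope                      # inactivation rate constant (1/min)

half_life = np.log(2) / k       # t1/2 = ln 2 / k
D_value = np.log(10) / k        # time for a 90% (1 log) reduction in activity

print(f"k = {k:.3f} 1/min, t1/2 = {half_life:.1f} min, D = {D_value:.1f} min")
```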
Procedia PDF Downloads 92
11572 Cognitive Rehabilitation in Schizophrenia: A Review of the Indian Scenario
Authors: Garima Joshi, Pratap Sharan, V. Sreenivas, Nand Kumar, Kameshwar Prasad, Ashima N. Wadhawan
Abstract:
Schizophrenia is a debilitating disorder marked by cognitive impairment, which deleteriously impacts the social and professional functioning, along with the quality of life, of patients and caregivers. The cognitive symptoms are often present in the prodromal state and worsen as the illness progresses; they have proven to have good predictive value for the prognosis of the illness. Intensive cognitive rehabilitation (CR) has been shown to lead to improvements in healthy as well as cognitively impaired subjects. As the majority of the population in India falls into the lower to middle socio-economic strata and has low education levels, using existing packages for cognitive rehabilitation, most of which were developed in the West, is difficult. The use of technology is also restricted by high costs and by limited availability of, and familiarity with, computers and other devices, which poses an impediment to continued therapy. Cognitive rehabilitation in India uses a plethora of retraining methods for patients with schizophrenia, targeting the functions of attention, information processing, executive functions, learning and memory, and comprehension, along with social cognition. Psychologists often have to follow an integrative therapy approach involving social skills training, family therapy, and psychoeducation in order to maintain the gains from cognitive rehabilitation in the long run. This paper reviews the methodologies and cognitive retraining programs used in India. It attempts to elucidate the evolution and development of the methodologies used, from traditional paper-and-pencil retraining to more sophisticated neuroscience-informed techniques for the cognitive rehabilitation of deficits in schizophrenia, delivered as home-based or supervised and guided programs. Keywords: schizophrenia, cognitive rehabilitation, neuropsychological interventions, integrated approaches to rehabilitation
Procedia PDF Downloads 363
11571 Controlling Drone Flight Missions through Natural Language Processors Using Artificial Intelligence
Authors: Sylvester Akpah, Selasi Vondee
Abstract:
Drones, also known as unmanned aerial vehicles (UAVs), have attracted increasing attention in recent years due to their ubiquity and their wide range of applications in communication, surveying, aerial photography, weather forecasting, medical delivery, and surveillance, among others. Operated remotely in real time or pre-programmed, drones can fly autonomously or on pre-defined routes. Thanks to technological evolution, these aerial vehicles have been adopted around the world, and many more businesses are utilizing their capabilities. Unfortunately, while drones offer the benefits stated above, they also come with problems, mainly attributable to the complexity of learning to master drone flight, collision avoidance, and enterprise security. Additional challenges arise because the analysis of flight data recorded by sensors attached to the drone may take time and require expert help to analyse and understand. This paper presents an autonomous drone control system using a chatbot. The system allows drones to be controlled easily through conversation, with the aid of Natural Language Processing, thereby reducing the workload needed to set up, deploy, control, and monitor drone flight missions. The results obtained at the end of the study revealed that the drone connected to the chatbot was able to initiate flight missions with just text and voice commands, hold a conversation, and give real-time feedback on data and requests made to the chatbot. The results further revealed that the system was able to process natural language and produce human-like conversational abilities using Artificial Intelligence (Natural Language Understanding). It is recommended that radio signal adapters be used instead of wireless connections in order to increase the range of communication with the aerial vehicle. Keywords: artificial intelligence, chatbot, natural language processing, unmanned aerial vehicle
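A minimal sketch of the kind of command-to-intent mapping such a chatbot could perform; the intent patterns and the send_to_drone stub are hypothetical illustrations, not the authors' implementation or any real drone API:

```python
# Illustrative intent parser for natural-language drone commands.
# The patterns and the send_to_drone() stub are hypothetical, not a real drone API.
import re

INTENTS = [
    (re.compile(r"\b(take ?off|launch)\b", re.I), "TAKEOFF"),
    (re.compile(r"\b(land|come down)\b", re.I), "LAND"),
    (re.compile(r"\bfly to (?P<place>[\w ]+)", re.I), "GOTO"),
    (re.compile(r"\b(battery|status)\b", re.I), "STATUS"),
]

def parse_command(text: str):
    """Return (intent, parameters) for a user utterance, or (None, {})."""
    for pattern, intent in INTENTS:
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return None, {}

def send_to_drone(intent: str, params: dict) -> str:
    # Stub: a real system would translate the intent into flight-controller commands.
    return f"queued {intent} with {params or 'no parameters'}"

for utterance in ["Please take off", "fly to the north field", "what's your battery?"]:
    intent, params = parse_command(utterance)
    print(utterance, "->", send_to_drone(intent, params) if intent else "not understood")
```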
Procedia PDF Downloads 142
11570 Team-Theatre as a Tool of Occupational Safety Awareness
Authors: Fiorenza Misale
Abstract:
The painful phenomenon of so-called white deaths and accidents at work is, unfortunately, always topical. The key is to act on the culture of safety through effective work on attitudes and behaviors that goes far beyond knowledge and know-how. What is needed is an 'introjection' of safety culture through the conscious involvement of all workers. Occupational safety legislation identifies training as the main tool for promoting the culture of safety and prevention in the workplace. In law, the term education is used to distinguish it from information, through which notions are simply transmitted theoretically, and from training, through which practical skills are provided. The new decree in fact fills several gaps in previous legislation and stresses the importance of training in the workplace, that is, the main activity through which the active participation of all workers in the company's prevention system can be achieved. This system is built only through the dissemination and circulation of risk information and through comparison and dialogue between all the actors involved, which are the necessary elements for a correct transmission of the culture of worker safety. Training activities should focus on work experience in order to bring out all the knowledge needed to identify and assess risks in the workplace, and especially the actions to eliminate or control them, integrating the missing knowledge when necessary. In addition to traditional training and information systems, tools that engage people emotionally and aesthetically can be used for training purposes; team-theatre is one of them. The methods of company theater that can be used in occupational safety include the lesson-show, the theater workshop, improvised theater, forum theater, and playback theater. Theater can represent a complementary approach to traditional training and convey information on safety measures, demonstrating that more engaging outreach tools exist. Team-theatre allows identification with the characters and a transmission of emotions and moods, and it is through the staging of a story that the individual processes new information. It is also a means of experiential training that allows people to work with their minds, bodies, and emotions. The aim of our work is the use of corporate theater with personnel working in the health sector; a questionnaire administered before and after the play allows us to analyze their knowledge of occupational safety and of current risks, particularly in health care. Keywords: theater, training, occupational health, safety
Procedia PDF Downloads 271
11569 A Study on the Readers' Motivation and Satisfaction on Sports Newspaper in Vietnam
Authors: Trang Huyen Nguyen, Thien Tri Huynh
Abstract:
The objectives of this paper were to determine the demographics of readers in Ho Chi Minh City (HCMC), to study the reading motivations that lead citizens to read sports newspapers, and to measure readers' satisfaction with issues related to sports newspapers. The subjects of this survey were HCMC citizens. After data collection, 568 usable responses were obtained (a response rate of 94.7%). The data analysis included descriptive and inferential statistics using SPSS 16.0 to answer the research questions. The majority of respondents were male, aged 24 to 32, held a first degree, and earned from US$150 to US$300 per month. Most were government officials, read a newspaper 11 to 20 times per month, and bought newspapers themselves. Finding information to predict the results of sports matches was the strongest motive affecting readers, and the diversity of information was the aspect of sports newspapers that pleased readers most. According to the findings, editorial boards could use this information to develop a strategic plan for newspaper content as well as design, in order to meet the increasing demands of readers. Keywords: motivation, satisfaction, readers, sports newspapers
Procedia PDF Downloads 304
11568 Processing, Nutritional Assessment and Sensory Evaluation of Bakery Products Prepared from Orange Fleshed Sweet Potatoes (OFSP) and Wheat Composite Flours
Authors: Hategekimana Jean Paul, Irakoze Josiane, Ishimweyizerwe Valentin, Iradukunda Dieudonne, Uwanyirigira Jeannette
Abstract:
Orange-fleshed sweet potatoes (OFSP) are widely grown and plentifully available in rural and urban local markets, and their contribution to reducing food insecurity in Rwanda is considerable. However, post-harvest loss of this commodity is a critical challenge due to its high perishability. Several research activities have been conducted on how fresh food commodities can be transformed into extended-shelf-life food products to prevent post-harvest losses, but this had not yet been well studied in Rwanda. The aim of the present study was to process baked products from OFSP combined with wheat composite flour and to assess the nutritional content and consumer acceptability of the newly developed products. The perishability of OFSP, and the related scarcity during the off-season, can be mitigated by producing cake, doughnuts, and bread with OFSP puree or flour. Doughnuts and bread were processed by preparing OFSP puree with the other ingredients to form a dough, followed by frying or baking, while for cake the OFSP was solar-dried into a flour, which was combined with wheat flour and the other ingredients to make a cake batter for baking. For each product, one control and three experimental samples were prepared (three ratios of 30, 40, and 50% OFSP, with the remaining percentage wheat flour). All samples, including the controls, were analyzed for consumer acceptability (sensory attributes). The most preferred sample of each product and each OFSP variety was analyzed for nutritional composition along with its control sample. The cake from the Terimbere variety and the bread from Gihingumukungu supplemented with 50% OFSP flour or puree, respectively, were the most acceptable, as was the doughnut from the Vita variety at 50% OFSP supplementation. Moisture, ash, protein, fat, fiber, total carbohydrate, vitamin C, reducing sugar, and mineral (sodium, potassium, and phosphorus) contents differed among products. Cake was rich in fiber (14.71%), protein (6.59%), and vitamin C (19.988 mg/100 g) compared to the other samples, bread was found to be rich in reducing sugar (12.71 mg/100 g) compared to cake and doughnut, and doughnut was found to be rich in fat (6.89%) compared to the other samples. In the sensory analysis, doughnut was most highly accepted at the 60:40 ratio compared to other products, while cake was least accepted at the 50:50 ratio. The proximate composition and mineral content of all the OFSP products were significantly higher than those of the control samples. Keywords: post-harvest loss, OFSP products, wheat flour, sensory evaluation, proximate composition
Procedia PDF Downloads 62
11567 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with the growth of the data, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem, and in rough sets, redundancy can be removed with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for fetching as well as processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is the superreduct, which is the reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix; the second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table. The algorithm described above has two disadvantages: i) it generates the superreduct instead of the reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of an attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and in software were compared. The results show an increase in the speed of data processing. Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
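A minimal software sketch of the two-stage idea described above (the core from pairwise discernibility, then the core enriched with the most frequent attributes until all object pairs with different decisions are discerned); the tiny decision table is an invented example, and this is only a reference model of the logic, not the FPGA design:

```python
# Illustrative two-stage superreduct computation on a toy decision table.
# Rows: (condition attribute values ..., decision). The table is invented.
from collections import Counter

table = [  # a1, a2, a3, decision
    (1, 0, 1, 0),
    (1, 1, 1, 1),
    (0, 1, 0, 1),
    (0, 0, 0, 0),
]
n_attrs = 3

def discerning_attrs(x, y):
    """Condition attributes on which two objects differ."""
    return {a for a in range(n_attrs) if x[a] != y[a]}

# Discernibility entries for all object pairs with different decisions.
entries = [discerning_attrs(x, y)
           for i, x in enumerate(table) for y in table[i + 1:]
           if x[-1] != y[-1]]

# Stage 1: the core = attributes that are the only discerning attribute for some pair.
core = {next(iter(e)) for e in entries if len(e) == 1}

# Stage 2: greedily add the most frequent attributes until every pair is discerned.
freq = Counter(a for e in entries for a in e)
superreduct = set(core)
remaining = [e for e in entries if not (e & superreduct)]
for attr, _count in freq.most_common():
    if not remaining:
        break
    if attr not in superreduct:
        superreduct.add(attr)
        remaining = [e for e in remaining if attr not in e]

print("core:", core, "superreduct:", superreduct)
```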
Procedia PDF Downloads 219
11566 Application of Compressed Sensing Method for Compression of Quantum Data
Authors: M. Kowalski, M. Życzkowski, M. Karol
Abstract:
Current quantum key distribution (QKD) systems offer low bit rates, up to the single-MHz range. Compared to conventional optical fiber links with multi-GHz bit rates, the parameters of recent QKD systems are significantly lower. In this article we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented. Keywords: quantum key distribution systems, fiber optic system, compressed sensing
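A minimal sketch of the compressed sensing idea referred to above (recovering a sparse signal from far fewer random measurements than its length, here via orthogonal matching pursuit); the dimensions and signal are invented illustrations, not the parameters of the described QKD link:

```python
# Illustrative compressed sensing: recover a sparse vector x from m << n random
# linear measurements y = A @ x using orthogonal matching pursuit. Sizes are invented.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(7)
n, m, k = 256, 64, 8                    # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print(f"relative reconstruction error: {np.linalg.norm(x - x_hat) / np.linalg.norm(x):.2e}")
```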
Procedia PDF Downloads 695